AI Is Like an Intern: Useful, Uneven, and in Need of Supervision

Key Points (For Readers on the Go)

  • AI is like an intern: often useful, sometimes surprisingly skilled, sometimes underdeveloped, and always in need of supervision.

  • AI is not equally good or bad at everything. Its abilities are uneven, and those uneven strengths are changing quickly.

  • Ethan Mollick’s post, “The Shape of AI: Jaggedness, Bottlenecks and Salients,” offers a helpful way to understand why AI can be excellent at one task and frustrating at another.

  • A poor AI result does not always mean “AI is bad.” It may mean the tool was poorly directed, used for the wrong task, or judged based on outdated assumptions.

  • The best use of AI requires clear instructions, good examples, revision, oversight, and professional judgment.

  • If AI has not been useful to you yet, the next step may be a better use case or a free training, not immediate dismissal.

AI is like an intern: capable, fast, uneven, and in need of supervision.

That is the analogy I keep coming back to when explaining AI to educators, psychologists, clinicians, and other professionals. A good intern may be excellent with software you have never used, quick at organizing information, and surprisingly helpful with first drafts. That same intern may have limited experience with counseling, clinical judgment, parent/client communication, or complex ethical decisions.

You would not dismiss the intern entirely because they are still developing. You also would not hand them high-stakes work without review.

That is a useful way to understand AI.

I have been meaning to get to Ethan Mollick’s post, “The Shape of AI: Jaggedness, Bottlenecks and Salients,” for a while. His core point is that AI ability is “jagged.” AI can be very strong in some areas while remaining weak in others that may seem simple to humans. He also explains that when a bottleneck improves, AI can suddenly become useful for tasks it previously handled poorly.

That matters because many people are still making broad claims like “AI is not good.” But that statement is usually too vague to be useful.

The better question is: What is AI good enough to help with today, and what still requires close human supervision?

AI Is Like an Intern: Capable, Uneven, and Still Learning

An intern rarely has a perfectly balanced skill profile.

They may be excellent at using new software but less comfortable leading a difficult meeting. They may be strong at assessment but inexperienced with counseling. They may write quickly but still need help with tone, nuance, and professional judgment.

AI is similar.

It may be useful for:

  • drafting a first version of a document

  • organizing ideas

  • summarizing non-confidential information

  • generating examples

  • creating presentation outlines

  • helping simplify complex language

  • creating role-play scenarios

  • comparing options

  • developing speaker notes

  • turning scattered ideas into a structured plan

But it still needs supervision.

AI may produce information that sounds confident but is incomplete, generic, or wrong. It may miss context. It may overstate certainty. It may create polished language that still needs a professional to verify accuracy and appropriateness.

That does not make AI useless. It means AI needs to be managed.

The “Jagged Frontier” Explains Why AI Feels So Inconsistent

Mollick’s post is useful because it explains something many professionals have already noticed: AI does not improve in a smooth, predictable way.

It can be very good at tasks we assume are difficult, while still struggling with tasks that seem easier. Mollick describes this as the jagged frontier of AI ability. He notes that AI has improved rapidly in areas such as reading, math, general knowledge, and reasoning, while some areas, including memory, remain weaker.  

That is exactly why the intern analogy works.

An intern may surprise you by doing something difficult very well. Then they may miss something obvious. The answer is not to either fully trust them or fully dismiss them. The answer is to understand their current strengths, provide structure, and review their work.

The same is true for AI.

“AI Isn’t Good” Usually Means One of Three Things

When someone says, “AI isn’t good,” I usually want to ask: Good at what?

AI is not one task, one tool, or one output. Its performance depends on the platform, the prompt, the context, the task, and the user’s ability to evaluate the result.

Often, when people conclude that AI is not useful, one of three things is happening.

1. They Have Not Learned How to Use It Well

AI works better when users give it clear direction.

A vague prompt usually produces a vague response. A better prompt gives the AI a role, a task, an audience, a format, examples, and constraints.

For example, “Make me a presentation” is weak.

A stronger request would be: “Create 8 sparse slides for school psychologists on ethical AI use. Keep each slide under 25 words. Include detailed presenter notes with practical examples and cautions about confidentiality.”

That is the difference between giving an intern a confusing assignment and giving them a clear task with expectations.
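For readers who use these models through code rather than a chat window, the same principle applies. Below is a minimal Python sketch using the Anthropic SDK; the model name is a placeholder and the prompt text is illustrative, not a recommended script. The point is the structure: role, task, audience, format, and constraints, rather than a one-line request.

    import anthropic

    # The client reads ANTHROPIC_API_KEY from the environment.
    client = anthropic.Anthropic()

    # A structured prompt: role, task, audience, format, and constraints.
    structured_prompt = (
        "You are an experienced trainer for school psychologists.\n"
        "Task: Create 8 sparse slides on ethical AI use.\n"
        "Audience: practicing school psychologists new to AI.\n"
        "Format: for each slide, give a title, under 25 words of slide text, "
        "and detailed presenter notes with practical examples.\n"
        "Constraints: include cautions about confidentiality."
    )

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use a current model
        max_tokens=2000,
        messages=[{"role": "user", "content": structured_prompt}],
    )
    print(response.content[0].text)

Swapping in the vague prompt (“Make me a presentation”) still returns something, but it will be generic. The structure is what earns the better output.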

2. They Are Using It for the Wrong Use Case

Some tasks are better suited to AI than others.

AI is often helpful for first drafts, brainstorming, summarizing, organizing, and generating options. It is riskier when the task requires high-stakes judgment, confidential information, legal interpretation, diagnosis, crisis response, or decisions where the user cannot evaluate the quality of the answer.

The problem is not always the tool. Sometimes the problem is the assignment.

You would not ask a first-year intern to independently manage a complex crisis case. But you might ask them to draft a resource list, summarize general background information, or prepare a first version of a handout for review.

That is how we should think about AI.

3. Their View Is Based on an Older Version of AI

This is increasingly important.

AI tools are improving quickly. A use case that was frustrating six months ago may now be practical. A tool that once produced clunky results may now produce something useful with the right direction.

Mollick’s post makes this point well through the idea of bottlenecks. A bottleneck is the part of a system that limits what the whole system can do. When that weak point improves, the tool can suddenly become useful in new ways.  

That is exactly what I have seen with presentation development. Use the newest model you can, even if that means paying for it. At $20 a month, the major models are a steal.

PowerPoint Is a Good Example of a Moving Bottleneck

Until recently, I would not have recommended AI for creating decent PowerPoint slides.

AI could generate content, but the slides were often too dense, awkwardly structured, visually weak, or not useful for a real presentation. The output often required so much editing that it was not much of a time-saver.

That has changed.

In the last few months, Claude has become my go-to tool for creating sparse, editable slide drafts with strong speaker notes. I still review and revise the slides. I still give clear instructions. I still shape the final product. But the tool has crossed a threshold where it is genuinely useful for that task.

Mollick makes a similar point about presentations. He notes that both Claude and ChatGPT have improved at creating PowerPoint decks, and he describes how improvements in image generation are changing what AI can do with slides and visual communication.  

That does not mean AI is now good at every part of presentation design.

It means the bottleneck moved.

And that is the broader lesson: do not permanently write off a use case based on an old experience.

Using AI Well Requires Supervision, Not Blind Trust

The intern analogy also keeps us grounded ethically.

A good supervisor does not say, “The intern is smart, so I do not need to check their work.”

A good supervisor also does not say, “The intern made one mistake, so they are useless.”

Instead, a good supervisor asks:

  • What is this intern ready to do?

  • What support do they need?

  • What should I review carefully?

  • What should I not delegate?

  • Where are they improving?

  • Where are they still unreliable?

That is the mindset professionals need with AI.

AI can support work, but it should not remove accountability. The human user remains responsible for deciding whether the output is accurate, appropriate, ethical, and useful.

Practical Examples: Better and Worse Uses of the AI Intern

Better Use: Drafting a First Version

A school leader needs a short explanation of a new attendance initiative for families.

AI can draft a parent-friendly version, offer multiple tones, simplify the reading level, and create versions for email, newsletter, and website use. A human still reviews it for accuracy, district policy, and local context.

Better Use: Creating Presentation Notes

A clinician is preparing a training on AI literacy.

AI can create sparse slides and detailed speaker notes, suggest examples, and help organize the flow. The presenter still decides what belongs, checks claims, and edits the material to match their voice.

Better Use: Brainstorming Options

A team is trying to explain AI to staff members who are skeptical or overwhelmed.

AI can generate analogies, discussion questions, scenarios, and training activities. The team still chooses what fits their culture and ethical expectations.

Worse Use: Final Decisions Without Review

A professional asks AI to make a high-stakes decision and accepts the answer without checking it.

That is not responsible use. AI may sound polished even when it is missing key information.

Worst Use: Asking AI to Do Work the User Cannot Evaluate

A user asks AI to complete a task where they do not have enough expertise to judge whether the output is correct.

That is risky. The more consequential the task, the more important human expertise becomes. Do not do this.

Ethical Considerations: The Intern Still Needs Guardrails

Thinking of AI as an intern makes the ethical issues clearer.

Privacy

Do not share confidential, personally identifiable, student, patient, or client information with tools that are not approved for that purpose. Privacy protections, vendor agreements, and organizational policies matter.

Bias

AI can reproduce bias from training data, user prompts, or flawed systems. Human review should include attention to fairness, representation, and potential harm.

Transparency

In many professional contexts, people should know when AI is being used in meaningful ways. The level of disclosure depends on the setting, task, and professional standards.

Accuracy

AI output should be checked. This is especially important when the output includes facts, recommendations, citations, summaries, or claims about research.

Accountability

AI does not hold a license, professional role, or ethical responsibility. The person using AI remains accountable for the final product.

Equity

AI access and AI literacy are not evenly distributed. Some professionals receive training, paid tools, and institutional support. Others are left to figure it out alone. That gap matters.

The Real Skill Is Learning How to Manage the AI Intern

The professionals who benefit most from AI are not necessarily the most technical.

They are often the people who learn how to:

  • give clear instructions

  • choose appropriate use cases

  • provide examples

  • ask for revisions

  • verify output

  • protect confidential information

  • know when not to use AI

  • update their assumptions as tools improve

That is the practical skill set.
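For those who script these tools, that skill set maps onto a simple supervision loop: give clear instructions with an example, review the draft, then send it back with a revision request instead of accepting the first output. Here is a minimal, hypothetical sketch using the Anthropic Python SDK; the model name and prompts are illustrative only.

    import anthropic

    client = anthropic.Anthropic()
    MODEL = "claude-sonnet-4-20250514"  # placeholder; substitute a current model

    # Turn 1: clear instructions plus an example of the tone you want.
    history = [{
        "role": "user",
        "content": (
            "Draft a 150-word parent-friendly note about a new attendance "
            "initiative. Match the tone of this example:\n"
            "EXAMPLE: 'We know mornings are hard. Here is how we can help...'"
        ),
    }]
    draft = client.messages.create(model=MODEL, max_tokens=500, messages=history)

    # Supervise like an intern's manager: review, then ask for a revision.
    history.append({"role": "assistant", "content": draft.content[0].text})
    history.append({
        "role": "user",
        "content": "Good start. Lower the reading level and remove any "
                   "promises the district cannot keep.",
    })
    revision = client.messages.create(model=MODEL, max_tokens=500,
                                      messages=history)

    # A human still verifies accuracy, policy fit, and tone before sending.
    print(revision.content[0].text)

The design choice worth noticing is the revision turn: keeping the draft in the conversation and asking for targeted changes is the coding equivalent of handing marked-up work back to an intern.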

AI literacy is not just knowing what ChatGPT, Claude, Gemini, NotebookLM, or Copilot are. It is knowing how to work with these tools in a way that is useful, ethical, and realistic.

And if someone has tried AI once or twice and decided it is not good, I would encourage them to take a free training, watch a demonstration, or sit with someone who uses it well.

Not because AI is always useful.

Because it is often more useful than people realize once they learn how to supervise it.

Final Takeaways

  • AI is best understood as an intern: useful, uneven, fast, and in need of supervision.

  • A poor AI output may reflect a poor prompt, the wrong use case, or an outdated impression of what the tool can do.

  • AI’s abilities are jagged, meaning it may be strong in some areas and weak in others.

  • AI tools keep improving, so professionals need to revisit old assumptions.

  • The safest and most productive approach is structured experimentation with clear human oversight.

AI Use Disclosure - Portions of this post were drafted with the assistance of an AI writing tool and revised by the author for accuracy, clarity, and professional judgment.
