Stop Calling AI a “Junior Engineer”

Matt McKenna

There’s a common shorthand in tech circles: “Treat your AI like a junior engineer.” It’s meant to set expectations: give it clear tasks, review its work, don’t let it push to prod, and so on.

But the analogy doesn’t sit right with me. The phrase carries weight, and that weight falls unfairly on real junior engineers.

A quick note on the term “junior”

I want to acknowledge upfront that “junior” is not a perfect term. It’s vague, sometimes patronizing, and can be tied more to time served than to actual capability. In this context, I’m using it intentionally because it’s the phrase that gets used in these AI comparisons and trainings.

Junior engineers are people, not metaphors

A junior engineer is a human being. They’re capable of learning from experience, synthesizing context across domains, showing initiative, and growing rapidly. They have motivations, feelings, and ambitions. They can build deep understanding, ask critical questions when something doesn’t make sense, apply judgment, and come up with novel solutions in ways that no language model can.

LLMs don’t grow easily¹. They don’t mature into someone you trust with architectural decisions or tough tradeoffs. They don’t start to anticipate edge cases, advocate for users, or question a spec because something feels off. They don’t take responsibility. They don’t improve with mentorship. They just respond based on patterns.

So when we call an AI a “junior engineer that needs hand-holding,” we’re not just making a lazy analogy. We’re erasing the path that real people take to develop into leaders. We risk underinvesting in the very people who have that potential, because we’ve convinced ourselves a tool can stand in for them.

We need junior engineers to grow, gain experience, and build the confidence and ability to make difficult decisions. And to make the comparison even more unfair…

LLMs don’t have the same capacity as junior engineers

LLMs don’t learn the way people do. They don’t develop understanding. They don’t carry memory across tasks². They can’t reflect or introspect or ask for clarification. They can produce code, sure, even elegant and useful code, but they do so without comprehension. Having to constantly “reteach” the same concepts is exhausting. A junior engineer might need reminders, but then they grow. They internalize. They become the teacher.

LLMs don’t know what the code is for. They don’t understand how it fits into a product, or how that product fits into someone’s life. They have no sense of the impact it might have on the people who use it.

Describing LLMs as junior engineers misleads people about what these tools are and are not capable of. It sets the wrong expectations and erases the fundamental differences between real cognitive development and probabilistic pattern matching.

Why I care

Words shape how we work. If we start thinking of AIs as “almost-human,” we will misuse them and undervalue the actual humans we hire to be on our teams.

LLMs are useful. But they’re tools, not teammates.


Footnotes

  1. Sure, new models are shipping more and more frequently, but they take massive amounts of energy and new training material.

  2. Unless you explicitly engineer that behavior, and even then the context window is way too small to fully synthesize understanding across tasks.


Written by

Matt McKenna

Android GDE @ Square · he/him · #BlackLivesMatter #StopAsianHate