Are AI Tools Like GPT-4 Making Us Overconfident?

Simone Andreani

Large language models (LLMs) like GPT-4 have transformed how we get information. They generate fluent responses, answer tough questions, and mimic human conversation with ease. But there's a growing concern: these tools may be feeding the Dunning-Kruger effect, the tendency of people with little knowledge of a subject to overestimate their own expertise.

What Is the Dunning-Kruger Effect?

Described by psychologists David Dunning and Justin Kruger in 1999, the Dunning-Kruger effect is a mental blind spot: people who know little about a subject often lack the self-awareness to recognize their own ignorance. The less they know, the more confident they may feel.

How AI Makes the Problem Worse

1. Instant answers create false confidence
LLMs offer slick, immediate responses. But just because the answer sounds right doesn’t mean the user understands the topic. The speed and polish can trick people into thinking they "get it"—when they really don’t.

2. Professional tone masks superficiality
AI-generated content often comes across as articulate and authoritative. That style can hide the fact that the substance may be shallow or even wrong. This veneer of credibility leads users to overrate their own grasp of complex issues.

3. Reliance replaces real learning
Why read a textbook when GPT can explain it in two seconds? That mindset can erode deep learning habits. As users lean more on AI, they may lose the drive to study thoroughly or think critically.

4. Feedback loops reinforce bias
LLMs respond based on user prompts. If the input carries a bias, the output often mirrors it without offering a counterpoint. This can trap users in echo chambers, where bad ideas get repeated and reinforced. One small counter-measure is to make the model argue against your own framing, as sketched below.
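
Here's a minimal sketch of that counter-measure, assuming the official OpenAI Python client; the model name, the system prompt, and the deliberately leading question are illustrative placeholders, not a vetted setup:

```python
# Minimal sketch: ask the model for counterarguments alongside its answer,
# so a biased prompt gets challenged instead of simply echoed back.
# Assumes the official OpenAI Python client (reads OPENAI_API_KEY from the
# environment); model name and wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# A deliberately leading question, the kind that usually gets echoed back.
question = "Why is microservices architecture obviously better than a monolith?"

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer the user's question, then list the strongest "
                "counterarguments and flag any questionable assumptions "
                "baked into the question itself."
            ),
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```

A nudge like this won't remove bias from the model, but it surfaces the counterpoint instead of leaving the echo unchallenged.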

Why This Matters

In the workplace
Superficial knowledge can lead to overconfidence, and overconfidence to bad decisions. In high-stakes fields like healthcare, law, or engineering, that can have serious consequences. To fight this, organizations need cultures where asking tough questions and seeking expert input are encouraged.

In education
Students might skip the hard work and go straight to AI for answers. That undermines critical thinking and problem-solving—skills they’ll need when the AI can’t help.

In public discourse
People armed with AI soundbites may argue with misplaced confidence. This can fuel division and reduce conversations to battles of bravado, not thoughtful debate.

How to Push Back

1. Teach critical thinking
Schools and companies should train people to question what they read—including AI responses. Understanding should matter more than just "knowing stuff."

2. Use AI as a supplement, not a substitute
LLMs work best when paired with real learning. They’re tools—not replacements for expert guidance or deep study.

3. Build transparency into AI
Future models should clearly signal what they don’t know or when their answers are based on limited data. That kind of honesty can keep users grounded.


AI can be a powerful ally. But if we’re not careful, it can also boost our confidence while hollowing out our competence. The key is using it wisely—with humility, skepticism, and a commitment to real understanding.

What’s your take? Let’s talk—drop your thoughts in the comments.

