The Illusion of AI Intelligence: How Our Brains Trick Us

Gerard Sans

In an era where large language models and image generators are producing increasingly convincing outputs, one uncomfortable truth often gets overlooked: the sense that AI is "smart" is largely an illusion — and it's not just the AI's fault. It's ours.

Our brains are wired to take shortcuts, prioritize speed over precision, and impose structure and meaning where there often is none. This deeply ingrained cognitive machinery, while useful for everyday survival, makes us especially vulnerable to overestimating the coherence, accuracy, and even creativity of AI-generated content.

Why AI Seems Smarter Than It Is

AI models like ChatGPT and image generators such as Midjourney or DALL·E produce outputs that often feel polished, insightful, or realistic. But that impression of quality is often disconnected from the substance of the underlying content.

The core problem: our brains are too good at "filling in the blanks."

In Images: Gist Over Detail

When we look at an AI-generated image, our perception operates primarily through gist processing — we quickly grasp the overall structure, lighting, composition, and emotional tone. This happens within milliseconds, thanks to System 1 thinking (fast, intuitive, and automatic).

But System 1 isn't detail-oriented. It doesn't easily catch the extra fingers, impossible reflections, warped limbs, or nonsensical object interactions that frequently appear in generated images. Our brains "fill in" what's missing or flawed, smoothing over the image into something that feels good enough.

Only a slow, focused, System 2 evaluation (conscious, effortful, and analytical) can spot the distortions. But we rarely engage System 2 unless we're prompted to scrutinize.

In Text: Structure Over Substance

The same illusion occurs in AI-generated text. If a paragraph is well-formatted, uses familiar vocabulary, and follows a logical structural progression (clear headings, bullet points, smooth transitions), we tend to interpret it as well-reasoned, even when the actual content is hollow, incorrect, or self-contradictory.

This is due to our reliance on structural heuristics and predictive reading. We read with the expectation that coherence will be maintained, so we often gloss over logical flaws or factual gaps. As long as the text conforms to expected patterns, we assume it's meaningful.

Again, engaging System 2 thinking — slowing down, critically evaluating claims, checking sources, and questioning assumptions — is the only reliable way to assess substance over surface.

Why Better Evaluation Tools Are Critical

Much of AI research still relies on human-centered evaluation: tasks like multiple-choice questions, truth judgments, or subjective ratings. These methods are prone to the same cognitive traps that snare end users, relying on fast, superficial judgments that miss deeper inconsistencies or structural flaws.

Benchmarks like "true/false" or "choose the best answer" encode human biases into datasets, reinforcing the illusion of competence. These approaches often reward surface-level plausibility, not deep understanding or robustness. The result is inflated performance scores that don't reflect real-world reliability.

To address this, the field needs a shift toward systematic, tool-based evaluation frameworks that:

  • Analyze statistical distributions of answers rather than single judgments (a sketch follows this list)

  • Detect semantic drift, logical inconsistency, or redundancy at a structural level

  • Test models on out-of-distribution data to probe for generalization rather than memorization

  • Include automated stress-testing that simulates edge cases and adversarial prompts (see the second sketch below)

  • Use multi-layered metrics — not just accuracy or F1, but also coherence, factual grounding, novelty, and traceability
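To make the first of these concrete, here is a minimal Python sketch of distribution-based evaluation. The `query_model` function is a hypothetical stand-in for a real sampled (temperature > 0) model call, simulated here for demonstration; everything else is standard library.

```python
import math
import random
from collections import Counter

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for a sampled model call.
    # Simulates a mostly-but-not-always-consistent model for demonstration.
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])

def answer_distribution(prompt: str, n_samples: int = 50) -> Counter:
    """Ask the same question many times and tally the distinct answers."""
    return Counter(query_model(prompt) for _ in range(n_samples))

def answer_entropy(dist: Counter) -> float:
    """Shannon entropy of the answers: 0.0 bits means perfect consistency;
    higher values flag instability that a single judgment would miss."""
    total = sum(dist.values())
    return -sum((c / total) * math.log2(c / total) for c in dist.values())

dist = answer_distribution("What is the capital of France?")
print(dist, f"entropy={answer_entropy(dist):.2f} bits")
```

On a factual question, a stable model should collapse to near-zero entropy; a wide distribution is a red flag even when the single most frequent answer happens to be correct.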

Tools like these can help reduce reliance on System 1-style quick judgments in the research pipeline itself. They enable more transparent, reproducible, and fine-grained analysis, revealing not just whether a model "gets the right answer," but why and how it performs across varying conditions.
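Building on the same hypothetical `query_model` stand-in from the sketch above, a second sketch illustrates the stress-testing idea: semantically equivalent rephrasings and noisy variants of one question should all yield the same answer, and any divergence exposes brittleness that a single benchmark item would hide.

```python
def stress_test(base_prompt: str, perturbations: list[str]) -> dict[str, str]:
    """Collect one answer per variant of the same question
    (reuses the query_model stand-in defined above)."""
    return {p: query_model(p) for p in [base_prompt, *perturbations]}

variants = [
    "Name the capital of France.",
    "France's capital city is called what?",
    "what is teh capital of france",  # typo noise: a mild adversarial case
]
results = stress_test("What is the capital of France?", variants)
print(results)
print("consistent across variants:", len(set(results.values())) == 1)
```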

Without such rigor, AI systems may continue to pass evaluations while still lacking the robustness, trustworthiness, or depth we attribute to them.

Escaping the Illusion: Cognitive Awareness

To avoid being fooled by AI's apparent intelligence, we must consciously toggle from System 1 to System 2 thinking — especially when accuracy, ethics, or safety is at stake.

Here are some practical strategies:

  1. Separate Form from Content
    Ask: Is this well-written or just well-formatted? Is this image realistic or just familiar in composition?

  2. Zoom In on Details
    Look for anomalies in images: fingers, reflections, depth. In text: logic gaps, vague claims, unsupported conclusions.

  3. Use Slow Thinking Intentionally
    Pause and ask critical questions. What is the claim here? What evidence is being presented? Does this follow logically?

  4. Compare with Ground Truth
    Whenever possible, cross-reference with known facts, reliable sources, or original data.

Final Thoughts

AI is not conscious, creative, or intelligent in the human sense. What we're witnessing is pattern replication, polished by scale. The illusion of intelligence arises not from what AI can do, but from how we interpret it.

Recognizing this illusion isn't about dismissing AI's utility — it's about becoming more aware of our own cognitive limits. In doing so, we not only improve our judgment about AI, but also become better thinkers in a world increasingly shaped by machine-generated information.
