Tunnel Syndrome: AI's biggest weakness?


As a developer, are you worried that AI is going to replace you? It's a valid concern, but let me put your mind at ease: chances are this is highly unlikely in 2025, or at any point in the near future.
Why am I so confident?!
AI Tunnel Syndrome is one of AI's biggest weaknesses, and it's not going to get better any time soon, regardless of "o3" or whatever model comes out next.
Preamble: Consciousness
Okay, so "AI Tunnel Syndrome" is a term I just coined while thinking about AI stuff, but bear with me for now, we'll get back to what I mean in a minute.
The thing is, being human means being more than just a library of knowledge. You are filled with memories, emotions, and experiences. You are a being that can adapt to just about any environment and thrive even in the most impossible circumstances.
You have a worldview and consciousness.
AI, on the other hand, is frozen in time. You see, there is a reason why we call them "models". Each new iteration has a version number. Do you have a version number? No, because your knowledge and experience grow every second of every day.
Models have no consciousness; they are just algorithms that can parse, scan, and find patterns in a large corpus of information.
A new version number simply means the model's knowledge cutoff has moved further down the line, with more data and various improvements to both the dataset and the algorithm.
The LLM stops learning at this version cutoff; every conversation thereafter is just an interaction, and there's no impact on the model's persistent knowledge base (although you can supplement the AI’s knowledge base with RAG and tool calls).
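To make that last point concrete, here's a minimal sketch of RAG-style supplementation. It's an illustration only: the keyword retrieval is deliberately naive (real systems use embeddings and a vector database), and `call_llm` is a hypothetical stand-in for whatever LLM client you use.

```python
# The model's weights never change; we just retrieve fresh context at
# query time and place it into the prompt.

documents = [
    "2025-06-01: The billing service was migrated from PHP to Golang.",
    "2025-06-12: The staging database now runs PostgreSQL 16.",
]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your LLM client of choice."""
    return "(model response)"

def retrieve(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Naive keyword-overlap retrieval, purely for illustration."""
    query_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(query_words & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query, documents))
    prompt = f"Use only this context to answer:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)
```

The model itself is still frozen; the "new" knowledge lives entirely in the prompt you build at query time.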
The Worldview
As human beings, we understand not only the knowledge we accumulate over time but also how that knowledge fits into the broader context of the world around us.
While we are not soothsayers (well, most of us, anyway), we can see into the future to a certain extent and make decisions based on all three temporal dimensions: the past, the present, and the future.
To illustrate what I mean, here's an example:
Imagine it's a bright sunny day, incredibly hot, and you are sweating like crazy. At this moment, you have all the knowledge of everything you've ever learned. You have memories and past experiences too. Sure, you might forget stuff, but overall, you remember most things.
Your mind floats to thoughts of a nice cold ice cream, and you remember how refreshing and cooling it feels to have one from past experiences.
This now drives your decision to buy an ice cream! But wait! You bought an ice cream from the guy down the street 3 months ago, and it wasn't all that great.
You also know it's fast approaching lunchtime, and at 12 PM the city center gets really busy. On such a hot and sweaty day, the last thing you want to do is get stuck in a crowd of people.
So eventually, you decide to take a quiet side street instead, where you can buy your ice cream away from the city's hustle and bustle.
While enjoying that ice cream, you look toward the city center and smile 😊: it's lunchtime and everyone is flocking there! Phew, dodged a bullet, didn't you?
Wait, what does this have to do with AI!?
Patience 🙏, eager beaver, I'm getting there 🙃
In the above example, your brain uses senses, memories, past experiences, and emotions all at once to build a complex decision tree in record time, whilst consuming very little energy.
Furthermore, you even use that information to predict the future and weigh possible edge cases and consequences.
Models, on the other hand, have no worldview and are narrowly focused; they take your prompt and run some math over it to determine patterns and complex relationships between words.
Then, based on the data they've been trained on, they generate a response that's most likely to satisfy the prompt. The quality of your prompt directly affects this calculation, and therefore the final result.
This is perfectly fine for a simple "lookup" question like: "Explain the Pythagorean theorem". The question has one clear goal, so the model can easily determine the meaning and generate an accurate response.
Now give it a prompt like: "Build me a landing page for an electrician, the colors are red and black. For the content, use placeholder images and text. I need 5 pages: About, services, contact, gallery..."
If you ask a human to do this task, even with this poor specification, they will probably come back with a complete website. AI, on the other hand, even powerful models like Sonnet, will build you something, but it's never going to be polished and will most likely miss obvious features.
What is AI Tunnel Syndrome?
In the ice cream example, your worldview let you piece together a complete picture: the environment, the taste, the texture, the horror of overcrowding, and the pleasant memory of past ice creams.
And because you are conscious, every event in your life, no matter how big or small, contributes to your worldview, your memories, and your learning.
To drive home the true meaning of "AI Tunnel Syndrome", let's look at another practical example. In the real world, just writing code is not enough; you need to sit in meetings with non-technical people who often don't understand their actual requirements, budget constraints, or the various other factors that come into play when building out an application.
Can you imagine a request like: "We need something like Twitter, so we can all communicate internally with each other, share files, and even video call. Please, can you build this for us?"
🤖: Sure! It has access to the codebase, goes off, and generates some Next.js code. It mostly works, but there is no auth integration, so just about anyone can initiate a chat. There's also no MIME-type validation on file uploads, so you can upload just about anything, and finally, the UI sports the old Twitter bird logo?!
👨‍💻: I ain't got 3 months to build that; we'll just set up Slack.
The human immediately identifies information beyond the prompt: the context, and the consequences of executing the task in terms of time and budget constraints.
The AI, on the other hand, finds the relevant information and starts building, but doesn't cater for many edge cases or even consider just using a pre-built solution like Slack.
💡 The lack of this complete "big picture" view (which is second nature to human beings) is what I call "AI Tunnel Syndrome".
Enter the Agent
Agents are an interesting next evolution of AI models. Think of an agent as an orchestra conductor. In an orchestra, you have several different instruments: violins, clarinets, trumpets, and so forth. Each instrument produces a different sound.
Can you imagine if each musician played from a different music sheet? The result would be chaotic! Instead of harmony, you'd hear a mix of competing melodies, rhythms, and keys that don't complement each other. Each of these instruments alone sounds great, but as soon as you put them together, you need some sort of structure.
The conductor ensures that they all stay in sync and play harmoniously, producing beautiful music.
Bringing it back to agents: LLMs simply take input (whether it's text, voice, images, etc.) and respond with some output. They can look at any context data you supply with the prompt, like a PDF or previous chat messages, but beyond that, they cannot access information outside of the model.
To solve this problem, AI companies developed "tools". With tools, you can connect the LLM to an external service like an API, search engine, emails, calendars, or any other data source. A tool is simply a function in your code that the LLM can trigger; the LLM can pass parameters to it, and the function responds with some output that the LLM uses to finish its reply to the user.
This is where we start getting into agent territory. A tool on its own is still an input/output mechanism. You can most certainly run any kind of code in your function, but tools generally do not know of each other and can't communicate with each other. So they are essentially like individual musical instruments: on its own, each tool is plenty useful, but together they have no way of sharing information or communicating.
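To make that concrete, here's a rough sketch of a tool in code. The `get_weather` function and the JSON shape of the tool call are illustrative assumptions, not any specific vendor's API; every provider has its own wire format, but the idea is the same.

```python
import json

# A "tool" is just a plain function the LLM is allowed to trigger.
def get_weather(city: str) -> str:
    """Illustrative tool; in reality this would call out to a weather API."""
    return json.dumps({"city": city, "temp_c": 31, "condition": "sunny"})

# The runtime keeps a registry so it can route the LLM's tool calls.
TOOLS = {"get_weather": get_weather}

def handle_tool_call(tool_call: dict) -> str:
    """Dispatch a tool call the LLM requested, e.g.
    {"name": "get_weather", "arguments": {"city": "Durban"}}."""
    func = TOOLS[tool_call["name"]]
    result = func(**tool_call["arguments"])
    # The result is fed back into the conversation so the LLM can finish its answer.
    return result

print(handle_tool_call({"name": "get_weather", "arguments": {"city": "Durban"}}))
```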
Agents are programs that can orchestrate tools, LLMs, and even actions outside of the LLM's scope, like creating files on a file system.
When you prompt an agent, it drafts a step-by-step plan, a sort of checklist that it must follow to achieve whatever outcome you asked for. At each step in this checklist, the agent will ask the LLM for whatever information it needs, then proceed to call tools or other external functions until it can check that task off its checklist.
The agent will then usually verify each step and fix any issues that occur along the way until every item on the checklist is checked off.
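At its core, that's just a plan-act-verify loop. Here's a bare-bones sketch of the idea; `call_llm` and `execute` are hypothetical placeholders for your model client and tool dispatcher, and the retry limit is an arbitrary choice.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your LLM client."""
    return "yes"  # stub so the sketch terminates; a real model returns actual text

def execute(action: str) -> str:
    """Hypothetical dispatcher, e.g. routing the action to a tool registry."""
    return f"executed: {action}"

def run_agent(goal: str) -> None:
    # 1. Plan: ask the LLM to draft a checklist for the goal.
    plan = call_llm(f"Break this goal into numbered steps: {goal}").splitlines()

    for step in plan:
        for attempt in range(3):  # don't retry a failing step forever
            # 2. Act: let the LLM decide the next action (call a tool, write a file, ...).
            action = call_llm(f"Goal: {goal}\nStep: {step}\nWhat should you do next?")
            result = execute(action)
            # 3. Verify: check the result; fix and retry until the step passes.
            verdict = call_llm(f"Step: {step}\nResult: {result}\nIs this step done? yes/no")
            if verdict.strip().lower().startswith("yes"):
                break

run_agent("Build a landing page for an electrician with 5 pages")
```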
Agents are therefore a whole new level of automation, and you can use them to perform all kinds of tasks: building landing pages, scaffolding a coding project, web scraping, generating reports, and so on.
While agents are powerful, they are still limited by the intelligence of the LLMs; thus, they still cannot think at the same level as a human and will make loads of mistakes.
I covered agents in more detail here if you want to learn more.
Written by Kevin Naidoo
I am a South African-born tech leader with 15+ years of experience in software development, including Linux servers and machine learning. My passion is web development and teaching. I love experimenting with emerging technologies and use this blog as an outlet to share my knowledge and adventures. Learn about Python, Linux servers, SQL, Golang, SaaS, PHP, Machine Learning, and loads more.