AI Misconceptions: Debunking the Ghost in the Machine


In the ever-evolving landscape of technology, Artificial Intelligence (AI) continues to captivate our imagination. Yet, as we marvel at its capabilities, we often fall into the trap of ascribing human qualities to what is essentially a sophisticated data processor. It's time we strip away the anthropomorphic veneer and see AI for what it truly is.
The Data-Driven Reality
At its core, AI is nothing more than a glorified search algorithm. It doesn't possess emotions, wisdom, or expertise. These traits, which we so readily attribute to AI, actually belong to the human authors of the data it processes. When an AI spits out a profound quote or a cherished family recipe, it's not showcasing its own wisdom or culinary prowess – it's simply retrieving information from its vast database.
Consider this: if we fed an AI nothing but a telephone directory, it wouldn't suddenly develop an understanding of human relationships or city planning. It would simply become really good at matching names to numbers and addresses. The same principle applies when we feed it more complex data like scientific papers or literary works. The AI doesn't magically absorb the expertise or creativity contained within – it just gets better at connecting the dots between related pieces of information.
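The "connecting the dots" idea can be illustrated with a toy sketch. Everything below is hypothetical and vastly simplified – a real language model is enormously larger – but the principle holds: the model's outputs can only reflect the statistics of the data it was trained on.

```python
from collections import Counter, defaultdict

# Hypothetical "training data": the only text this toy model ever sees.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Build a bigram table: for each word, count which words follow it.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower seen in training, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

# The "model" can only echo patterns present in its corpus:
print(predict_next("the"))  # "cat" – the most common word after "the"
print(predict_next("dog"))  # None – "dog" never appeared in training
```

Feed this toy model a telephone directory instead, and it would "know" only name-number patterns – no understanding appears anywhere.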
The Illusion of Human-like Interaction
So why do interactions with AI often feel so... human? The answer is simple: they're designed that way. The conversational prowess of chatbots like ChatGPT isn't an emergent property of AI becoming more "human-like." It's the result of deliberate engineering efforts to meet user expectations and follow instructions.
This design choice, while making interactions smoother, raises some serious ethical concerns. By masking the true nature of AI, we risk users overestimating its capabilities. It's akin to slapping googly eyes on a toaster – it doesn't change how the machine functions, but it might lead you to expect it to have opinions on your bread choices.
Debunking Common Misconceptions
Let's address some frequent misunderstandings:
Understanding AI's Nature
"Who am I talking to when using ChatGPT?"
You're not talking to "someone" – you're interacting with a reflection of the data the AI was trained on. If it were trained only on cookie recipes, that's all you'd get, regardless of what you asked.
"Doesn't AI's ability to explain topics show expertise?"
Not at all. When you ask AI to explain something, it's simply using that as a cue to connect related words and concepts from its training data. It doesn't understand what it means to "explain" any more than a search engine does.
AI Performance and Human Metrics
"If AI can pass bar exams, doesn't that prove it's as intelligent as law students?"
This is a classic case of applying human metrics to non-human systems. AI's success in such tests merely demonstrates its ability to quickly retrieve and apply relevant information from its training data. It's not indicative of intelligence, understanding, or expertise in the way we understand these concepts in humans.
Emotional Intelligence and AI
"When AI asks how I'm doing, isn't that proof of empathy?"
Not really. This is a scripted behavior designed by the AI's creators to make the interaction feel more natural. All of these behaviors are created to encourage you to empathize with the AI, not the other way around. It's a design choice aimed at keeping users engaged, possibly to maintain subscription revenues. AI has no self, no memories, and no way to genuinely experience or understand emotions. It simply processes inputs to create outputs – that's the extent of its capabilities, at least with today's AI technology.
AI and Personal Advice
"What is AI doing when I ask for relationship or life advice?"
AI always works in the same way: it connects your question to its training data to create an answer. For a question about relationships or life advice, the response could be coming from a Reddit post, a commercial website, a personal blog, or an online forum. It's crucial to understand that this isn't the result of acquired knowledge or personal experience. The AI is literally assembling text fragments from its training data – it's more akin to a sophisticated search function than an entity drawing from personal experience. AI has no lived experience or sense of self from which to form opinions or expertise.
AI and Algorithmic Bias
"Why does AI always answer '7' when I ask for a random number between 1 and 10?"
This is a perfect example of algorithmic bias, and of why anthropomorphising AI can be so misleading. Although AI seems to follow instructions, it's not a person and can't carry out an instruction the way a human would. This applies to any command we give an AI, whether we ask it to think, ponder, or evaluate: it will always operate by pattern matching.
In the case of picking a number, the AI isn't thinking and choosing like a person would. It's reproducing the patterns in its training data. People pick the number 7 disproportionately often – a well-documented psychological bias – and that skew is reflected in the AI's training data. As a result, the AI cannot escape its statistics and will answer 7 most of the time.
This serves as a potent reminder that AI is a machine following an algorithm, not a person making conscious choices. It highlights the limitations of AI in truly understanding and executing seemingly simple human instructions, and underscores the importance of recognizing AI for what it is – a sophisticated pattern-matching system, not a sentient being.
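The "lucky 7" effect can be mimicked with a short sketch. The probability table below is invented purely for illustration – real models derive their output probabilities from training data, not a hand-written list – but it shows how frequency-weighted sampling makes one answer dominate.

```python
import random
from collections import Counter

# Hypothetical probabilities a model might assign to the answers for
# "pick a number between 1 and 10", skewed toward 7 because human
# text favours it.
numbers = list(range(1, 11))
weights = [2, 3, 4, 5, 6, 8, 45, 10, 9, 8]  # 7 gets the lion's share

random.seed(0)  # fixed seed so the demo is reproducible
samples = random.choices(numbers, weights=weights, k=1000)
counts = Counter(samples)

# 7 dominates, mirroring the bias observed in chatbot answers.
print(counts.most_common(3))
```

Nothing in this sketch "chooses" anything; the skewed outcome falls straight out of the weights, just as a chatbot's favourite number falls out of its training data.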
The Path Forward
As we continue to integrate AI into our lives, it's crucial that we maintain a clear-eyed view of what it is and what it isn't. AI is a powerful tool, capable of processing vast amounts of data and identifying patterns that might elude human perception. But it's not a sentient being, and it doesn't possess the qualities we often project onto it.
AI is like a character in a "choose your own adventure" novel, responding to our prompts and navigating a pre-defined narrative landscape. We, the readers, have some agency in shaping the story, but the AI's responses are ultimately constrained by the parameters set by its creators and the data it was trained on. It's like exploring a vast, intricate maze – we can choose which paths to take, but the walls are fixed, and the overall structure remains unchanged.
By understanding AI's true nature as a data-driven tool, we can better harness its capabilities while avoiding the pitfalls of overestimation. Let's appreciate AI for what it is – a remarkable feat of human engineering – rather than what we sometimes wish it to be.
Written by

Gerard Sans
I help developers succeed in Artificial Intelligence and Web3; Former AWS Amplify Developer Advocate. I am very excited about the future of the Web and JavaScript. Always happy Computer Science Engineer and humble Google Developer Expert. I love sharing my knowledge by speaking, training and writing about cool technologies. I love running communities and meetups such as Web3 London, GraphQL London, GraphQL San Francisco, mentoring students and giving back to the community.