Hallucinations: Bug or Feature?


You have probably misunderstood hallucination in AI. I'm going to explain what it is, what we get wrong about it, and why that misunderstanding is stopping you from getting the most value out of AI today.
What Even Is a Hallucination?
This is the term we use when an AI just makes something up. For example, there was a case where a lawyer used ChatGPT to write a legal brief, and it cited fake cases (https://www.legaldive.com/news/chatgpt-fake-legal-cases-generative-ai-hallucinations/651557/). The lawyer got into trouble for not bothering to check whether the cases the AI had cited were real. They weren't. ChatGPT just made them up. They looked real, but nope, they weren't.
If you are a software developer like me, you have probably used AI coding assistants like GitHub Copilot, and you have almost certainly had this experience: you ask it to write some code for you, and it gives you something that looks great. You look it over. Yes! Exactly what I wanted! Only to find out it doesn't work, because it tried to use an API that does not actually exist.
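Here is a hypothetical illustration of what that looks like. The fetch_json call below is invented, which is exactly the point: it reads like something the requests library should have, but doesn't.
import requests

# Plausible-looking code an assistant might generate. The requests library
# is real, but it has no fetch_json() helper, so this line raises
# AttributeError at runtime.
data = requests.fetch_json("https://api.example.com/users")
print(data)

# The working equivalent uses the real API:
# data = requests.get("https://api.example.com/users").json()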
So, the broad thing to notice here is that an AI will sometimes come up with something that seems completely reasonable but turns out to be made up. That is what we call hallucination.
Why Calling It a Bug Might Be the Real Problem
Now, hallucination is a colorful term, but I think it is problematic. The problem with calling it hallucination is that it makes it sound like a bug, a problem, a mistake.
And I'm going to argue that once you fully understand what is happening when an AI hallucinates, you'll realize that a bug is not what you're looking at. In fact, that behavior is the main value today's AIs have to offer.
Now, that might sound like a mad statement. Why on earth would I call this behavior, where the AI makes things up, a feature? How is that useful? How can I call it a feature and not a bug?
Well, I think the key is to understand why it does what it does, and why that is actually a major part of what AI has to offer. If you are designing AI-based systems, or even if you are just using AI, it helps to understand what it really does. Your life will be better if you accept what AIs actually do, because you will be able to design solutions that work with their fundamental nature rather than against it.
Why AI "Hallucinates"
LLMs like GPT-4, Claude, and Gemini are trained with self-supervised learning. These models are fed lots and lots of data, like actually a lot of data, with no specific guidance on what to learn. The core objective is simple: predict the next token (a word or subword) given the sequence of previous tokens. All the patterns and relationships the model picks up fall out of that one objective.
This means they don't know facts. They learn patterns and statistical relationships. So when a model doesn't have the right information, it fills in the blanks with something that sounds correct but isn't.
Here is an example:
from openai import OpenAI

# Uses the current OpenAI Python SDK (v1+); the older
# openai.ChatCompletion interface has since been removed.
client = OpenAI(api_key="your-api-key")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What chemical element has the symbol 'Ht'?"}],
)

print(response.choices[0].message.content)
There is no chemical element with the symbol 'Ht'. But depending on how you ask, the model might still invent one, because generating plausible continuations is exactly what it is trained to do.
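To make the "patterns, not facts" point concrete, here is a minimal sketch of next-token prediction. It assumes Hugging Face's transformers library (pip install transformers torch) and uses GPT-2 as a small stand-in model; the exact probabilities will differ from the big models, but the mechanism is the same.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The chemical element with the symbol 'Ht' is called"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The model outputs a probability distribution over every possible next
# token. It will happily rank continuations for 'Ht' even though no such
# element exists.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={prob.item():.3f}")
Nothing in this loop checks whether 'Ht' is real. The model is just ranking statistically plausible continuations, which is all it ever does.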
When Hallucination Is a Feature
So, is this a flaw? Not always.
It can be a feature when you are doing tasks that are not about factual accuracy, for example:
Generating ideas
Creative writing
Brainstorming names, slogans, storylines
Drafting emails, presentations, or poems
Making up characters, scenes, jokes
These scenarios are where AI actually shines.
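As a sketch of leaning into this (the prompt and temperature value here are just illustrative assumptions), you can explicitly ask for invention and turn up the sampling temperature, which makes the output more varied:
from openai import OpenAI

client = OpenAI(api_key="your-api-key")

# Higher temperature = more random sampling, so the model explores
# less likely (and often more creative) continuations.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    temperature=1.2,
    messages=[{"role": "user", "content": "Brainstorm five playful names for a coffee shop run by cats."}],
)

print(response.choices[0].message.content)
Here, "making things up" is the whole job: the same mechanism that invents a fake legal case can just as happily invent a tagline, a character, or a product name.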
Final Thought
If you are using AI, or building something with it, it becomes really important to understand what LLMs actually do. They are not like Google. They do not retrieve facts; they generate fluent, statistically likely language.
Hallucination isn't always a mistake. Sometimes, it's where the magic happens.
Used ChatGPT for spelling and grammar.