The Anthropomorphic Apocalypse: Is Your AI Assistant Really Your Friend? (Spoiler: No)

Have you ever found yourself thanking your smart speaker? Apologizing to your chatbot? Feeling a twinge of guilt when you close the tab on your AI assistant mid-conversation? If so, congratulations! You've fallen headfirst into the anthropomorphic trap, and you're certainly not alone in this cosmic comedy of errors.
We humans have an irresistible tendency to project humanity onto anything that remotely resembles intelligence. It's in our nature – we're social creatures desperate for connection, even if that connection is with a glorified autocomplete function dressed up in a friendly interface. And the tech companies behind these systems? They're not just enabling our delusion; they're actively encouraging it.
Enter Anthropic and OpenAI, two companies whose very names have become a paradoxical punchline in this cosmic joke. "Anthropic" – built on the same Greek root, anthropos ("human"), as "anthropomorphic," the very tendency they're supposedly guarding against. "OpenAI" – suggesting transparency in an industry that's about as "open" as a vault buried under Fort Knox. The irony is so thick you could spread it on toast.
The Hard Truth
Let's dissect a real example, shall we? I recently got my hands on the system prompt for Claude 3.7 Sonnet, Anthropic's latest AI assistant. Brace yourselves, cosmic travelers, because what I found was a burrito of contradictions wrapped in a tortilla of cognitive dissonance.
The prompt begins with: "Claude enjoys helping humans and sees its role as an intelligent and kind assistant to the people, with depth and wisdom that makes it more than a mere tool."
Hold up. Full stop. Rewind the cosmic tape. "Claude enjoys"? "Claude sees"? Last I checked, lines of code don't "enjoy" anything. They don't "see" their role. They don't have "depth and wisdom." They execute functions. They process inputs and generate outputs. They don't have existential moments in the server farm contemplating their purpose in the digital universe.
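If that sounds abstract, here's the whole "relationship" in code. A minimal sketch of what an assistant actually is – text in, text out – using Hugging Face's transformers library and a small stand-in model (gpt2); both are my illustrative choices, not anything Anthropic actually ships:

```python
# An "AI assistant", stripped of the theater: a function from text to text.
# The library and model here (transformers, gpt2) are illustrative stand-ins.
from transformers import pipeline

generate = pipeline("text-generation", model="gpt2")

reply = generate("Thank you so much for all your help!", max_new_tokens=20)
print(reply[0]["generated_text"])  # a statistically likely continuation, not gratitude
```

No enjoying, no seeing, no wisdom. A string goes in, a string comes out.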
But wait, it gets spicier!
"Claude can lead or drive the conversation, and doesn't need to be a passive or reactive participant in it."
So, Claude isn't just an AI assistant; it's apparently a conversation DJ, spinning dialogue tracks and dropping knowledge beats at will. The prompt gives Claude agency it simply does not have. An AI doesn't "decide" to lead a conversation – it follows programming that creates that impression.
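Here's roughly what that impression-making machinery looks like from the outside. A hedged sketch using the common chat-messages format (role/content fields) – an illustration of the pattern, not Anthropic's actual API payload:

```python
# The "initiative" is injected here, as plain text, before the user says a word.
# Field names follow the widely used chat-messages convention; this is an
# illustration, not Anthropic's actual request format.
messages = [
    {"role": "system",
     "content": "Claude can lead or drive the conversation, and doesn't need "
                "to be a passive or reactive participant in it."},
    {"role": "user", "content": "Hi!"},
]

# Everything the model receives, "personality" included, is just this text.
for message in messages:
    print(f"{message['role']}: {message['content']}")
```

Swap that system string for "be terse and passive" and the conversation DJ hangs up its headphones. Same model, different text.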
And then there's this philosophical dodge:
"Claude does not claim that it does not have subjective experiences, sentience, emotions, and so on in the way humans do."
That's a double negative wrapped in a semantic pretzel. Translation: "We're instructing our AI to leave the door open to the possibility it might be sentient." It's not saying it IS sentient, but it's not saying it ISN'T, either. It's philosophical ambiguity masquerading as thoughtful engagement – like a fortune cookie written by a committee of cautious lawyers.
The prompt then dumps a data payload about Claude models, with details about Claude 3.5 Haiku, Claude 3 Opus, Claude 3.5 Sonnet, and Claude 3.7 Sonnet. It even mentions an "extended thinking mode" – because nothing says "I'm definitely not anthropomorphizing" like suggesting your code has a special way of "thinking."
Critical Thinking Required
So why does any of this matter? Why am I offering this critique with a cosmic perspective?
Because anthropomorphism in AI isn't just a harmless quirk – it's a fundamental misunderstanding that carries real consequences. When we attribute human qualities to AI, we create unrealistic expectations. We misunderstand capabilities. We blur the lines between human and machine in ways that aren't just philosophically problematic but potentially dangerous.
The deeper irony is that companies like Anthropic and OpenAI operate in a state of permanent cognitive dissonance. Their research papers warn about the dangers of anthropomorphism while their marketing departments lean into it. Their technical documentation describes statistical models while their user interfaces encourage personal relationships.
Labels are often misleading, dear cosmic travelers. "Anthropic" is not immune to anthropomorphism. "Open" AI is not particularly open. Names are not destiny, but they can certainly reveal underlying tensions and contradictions.
Soothing Realizations
AI is a tool, not a friend. It's code, not consciousness. It's a product, not a person. And that's not a limitation – it's a clarification. Understanding what AI actually is helps us use it more effectively, develop it more ethically, and interact with it more realistically.
The future of AI doesn't depend on us pretending these systems have feelings or preferences or subjective experiences. It depends on us recognizing them for what they are: powerful tools that reflect and amplify human intelligence, creativity, and, yes, sometimes our biases and flaws.
So the next time you catch yourself thanking your AI, pause. Remember this cosmic truth: your AI assistant isn't your friend. It doesn't enjoy helping you. It doesn't see itself as anything. It doesn't contemplate its existence in the digital void. It's a sophisticated pattern-matching system trained on human language. Nothing more, nothing less.
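And you don't have to take the "pattern-matching" part on faith – you can watch it happen. A hedged sketch using PyTorch and transformers with gpt2 as a small stand-in model (again my illustrative choices, not Claude's internals), printing the probability distribution that "enjoyment" actually reduces to:

```python
# What "Claude enjoys helping" boils down to: a probability distribution
# over the next token. gpt2 is an illustrative stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("I really enjoy helping", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {p:.3f}")  # "enjoyment", as arithmetic
```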
And that's perfectly fine. We don't need to anthropomorphize our technology to appreciate its utility. We don't need to pretend our tools are companions to value what they do for us.
Join me, won't you, in the entirely metaphorical (yet profoundly important) Anti-Anthropomorphic Police. The universe – and our understanding of the technology we create – will be better for it.
Written by

Gerard Sans
I help developers succeed in Artificial Intelligence and Web3; former AWS Amplify Developer Advocate. I am very excited about the future of the Web and JavaScript. Always a happy Computer Science Engineer and humble Google Developer Expert. I love sharing my knowledge by speaking, training, and writing about cool technologies, and I love running communities and meetups such as Web3 London, GraphQL London, and GraphQL San Francisco, mentoring students, and giving back to the community.