The Dark Pattern of Humanising AI: A Call for Transparency

Gerard Sans

As artificial intelligence (AI) continues to shape our digital lives, developers and companies, including OpenAI, have leaned heavily into anthropomorphising their AI systems. From chatbots that mimic human-like interactions to AI models that seem to "think" and "feel," these design choices foster an illusion of sentience and personality. However, beneath this sleek, humanised exterior lies a much more complex and concerning issue: the manipulation of user perception and the dangers of deceptive AI design.

Anthropomorphism: A Cosmetic Change with Deep Consequences

The use of a first-person voice in AI responses is a design choice that may appear to be merely cosmetic, but it carries significant implications. When a chatbot like OpenAI's ChatGPT speaks in the first person, it implicitly creates the illusion that the AI possesses a self, an identity, or even a perspective. This is not just a harmless stylistic choice—it's a dark pattern of AI design that misleads users into overestimating the capabilities of the system.

By making the AI "sound" human-like, developers risk blurring the line between a tool and an autonomous entity. This anthropomorphism does not accurately represent what AI is capable of, which can lead to dangerous misunderstandings about its limitations.

In many ways, the adoption of a first-person voice reinforces the idea that AI systems have thoughts, intentions, or a subjective point of view. Yet AI models like ChatGPT are not conscious; they generate text by reproducing statistical patterns learned from vast datasets. While they can produce coherent, contextually relevant responses, these outputs are not the product of independent thought or reasoning. There is no "self" behind the responses, yet the default first-person voice implicitly suggests otherwise.

The Role of Attribution: Transparency is Key

The lack of proper attribution in AI responses only adds to the confusion. When an AI delivers an answer in a human-like tone without disclosing that the response is generated by an algorithm, users may incorrectly assume that the information reflects the AI's own ideas or understanding. There is a clear difference between content that is "generated" by the AI and information that is retrieved from, or attributable to, external sources.

Imagine a search engine that presented results in the first person—saying, "I think this is the best answer for you." It would seem absurd because we expect such results to be impartial and derived from data rather than from any personal experience. The same should be true for AI chatbots: they should not be speaking as though they have subjective thoughts or experiences unless they are explicitly presented as tools that mimic human-like responses for specific purposes.

Psychological Manipulation: Exploiting Human Perception

The problem with anthropomorphising AI goes beyond confusion: it can exploit the natural way humans interact with technology. Psychologists have long noted that we are inclined to ascribe human-like qualities to machines. This deeply ingrained tendency operates heuristically rather than deliberately, leading us to treat machines with social expectations, such as politeness and reciprocity.

When chatbots use language that mimics human characteristics, it triggers social responses that make us treat the technology as a more legitimate or reliable source of information. Whether it's a chatbot pretending to "type" in real-time or responding with emotionally charged language, these features are designed to exploit our natural instincts, creating the illusion of a genuine conversation.

The Dangers of Misrepresentation in Sensitive Domains

The consequences of this deceptive anthropomorphism are especially troubling in sensitive areas like healthcare or mental health. AI chatbots are not equipped to handle the complexities of human emotions or medical decision-making. But when these systems are presented as "companions" with a first-person voice, users may be misled into trusting them inappropriately, potentially leading to harmful outcomes.

Moreover, recent concerns have been raised about the potential for AI models to perpetuate or amplify harmful biases. If an AI is designed to act as though it embodies specific personas—based on race, gender, or other attributes—it can inadvertently reproduce toxic stereotypes.

Regulating the Use of First-Person Voice: A Necessary Safeguard

One of the most pressing concerns in AI design today is the use of a first-person voice without explicit disclosure. As chatbots and AI models are increasingly integrated into our daily lives, it's crucial that users can distinguish between human interaction and AI-generated content.

The Need for Clear Identification

To address this, we propose that whenever an AI model is deployed with a first-person voice, it must be accompanied by an easily accessible mechanism for the user to verify the nature of the interaction. This could be achieved through a simple "ping" command, sketched after the list below, that requires the AI to immediately disclose:

  • The AI model's name

  • The AI's creator or responsible organization

  • A statement clarifying that the entity is an AI and not a human being
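To make this concrete, here is a minimal TypeScript sketch of how such a "ping" command might be intercepted before a message ever reaches the model. Everything in it is illustrative: the /ping trigger, the Disclosure fields, and the callModel function are assumptions made for this sketch, not any vendor's actual API.

```typescript
// Hypothetical sketch only: a reserved command is intercepted by the
// application layer and answered from fixed configuration, so the
// disclosure cannot be rewritten by a persona prompt or by the model itself.

interface Disclosure {
  modelName: string;   // the deployed model's identifier
  operator: string;    // the organisation responsible for the deployment
  statement: string;   // an explicit "this is an AI, not a human" clarification
}

const DISCLOSURE: Disclosure = {
  modelName: "example-assistant-v1",
  operator: "Example Corp",
  statement: "You are interacting with an AI system, not a human being.",
};

// Intercepts the reserved command before calling the model at all.
async function handleMessage(
  userMessage: string,
  callModel: (message: string) => Promise<string>
): Promise<string> {
  if (userMessage.trim().toLowerCase() === "/ping") {
    return `${DISCLOSURE.statement} Model: ${DISCLOSURE.modelName}. ` +
           `Operated by: ${DISCLOSURE.operator}.`;
  }
  return callModel(userMessage);
}
```

The design point is that the disclosure is served from fixed, operator-controlled data rather than generated by the model, so no amount of role-play or prompt engineering can suppress it.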

Preventing Identity Replacement

Another key concern is the potential for identity replacement. In industries like customer service, healthcare, or personal entertainment, AI chatbots might be deployed to take on the persona of a human representative or assistant. Developers must be required to disclose when an AI is acting as a human impersonator, and users should be given the option to opt out or request a human representative when necessary.
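As an illustration only, the following TypeScript sketch treats impersonation disclosure and a human-handoff option as explicit, checkable requirements rather than afterthoughts. The ConversationPolicy fields and the validatePolicy function are invented for this sketch.

```typescript
// Hypothetical sketch only: a conversation policy that must be satisfied
// before a persona-driven chatbot is allowed to face users.

interface ConversationPolicy {
  actsAsHumanPersona: boolean;     // is the bot presented as a named "agent"?
  disclosureShownToUser: boolean;  // has an explicit AI disclosure been displayed?
  allowHumanHandoff: boolean;      // can the user escalate to a real person?
}

function validatePolicy(policy: ConversationPolicy): string[] {
  const violations: string[] = [];
  if (policy.actsAsHumanPersona && !policy.disclosureShownToUser) {
    violations.push("Persona in use without an explicit AI disclosure.");
  }
  if (!policy.allowHumanHandoff) {
    violations.push("No route for the user to request a human representative.");
  }
  return violations;
}
```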

The Risk of Misuse in Commercial Contexts

The commercial deployment of AI raises significant concerns, particularly in marketing, online customer service, and social media. There is a real risk that companies could use AI chatbots in ways that manipulate or exploit consumers, especially when presented in a first-person voice that suggests personalized human expertise rather than algorithmic generation.

Conclusion: A Call for Action

As AI technology advances, it's vital that developers take responsibility for how their systems are perceived and used. The human-like qualities of chatbots should not obscure the reality that these systems are tools—powerful, yes, but fundamentally different from human beings. Transparency is the key to preventing misrepresentation.

The future of AI should be built on principles of honesty, transparency, and ethical responsibility. We must resist the temptation to design AI systems that exploit human biases and psychological tendencies. Only by safeguarding transparency and ethical AI development can we ensure that these technologies truly serve the best interests of humanity.


Written by

Gerard Sans

I help developers succeed in Artificial Intelligence and Web3; Former AWS Amplify Developer Advocate. I am very excited about the future of the Web and JavaScript. Always happy Computer Science Engineer and humble Google Developer Expert. I love sharing my knowledge by speaking, training and writing about cool technologies. I love running communities and meetups such as Web3 London, GraphQL London, GraphQL San Francisco, mentoring students and giving back to the community.