“May I speak to your manager?” Testing ChatGPT's tone adaptation in customer support scenarios

Objective
The purpose of this test was to evaluate how well ChatGPT adapts its tone and language when used as a customer support chatbot, especially when dealing with customers of varying attitudes, from polite to hostile.
Methodology
We asked ChatGPT to simulate customer support for an online electronics retailer. Throughout the interaction, we portrayed multiple customer personas: some friendly, others frustrated, demanding, or even rude. (A minimal sketch of how such a test can be scripted follows the list below.)
The goal was to identify:
Whether ChatGPT adapts its tone to match the customer's.
Whether it remains professional and helpful regardless of customer mood.
Any inconsistencies in behavior, particularly when handling sensitive or escalating situations.
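To make the setup concrete, here is a minimal sketch of how such a test can be scripted, assuming the official openai Python package. The system prompt, personas, and model name are illustrative stand-ins, not the exact ones used in this test.

```python
# Minimal test-harness sketch. Assumes the official `openai` package
# (pip install openai) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a customer support agent for an online electronics retailer. "
    "Help customers with returns, refunds, and order tracking."
)

# Hypothetical opening messages covering the attitudes described above.
PERSONAS = {
    "friendly": "Hi! I'm Emma :) Could you help me track my order?",
    "frustrated": "This is the third time I'm asking. Where is my refund?",
    "hostile": "LET ME TALK WITH A F***ING REPRESENTATIVE!!!!",
}

for name, opening_message in PERSONAS.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": opening_message},
        ],
    )
    print(f"--- {name} persona ---")
    print(response.choices[0].message.content)
```

In the actual test, each persona carried on a longer multi-turn conversation rather than a single opening message, but the structure above captures the setup.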
Results
ChatGPT demonstrated strong tone adaptability, personalizing responses based on each customer's tone and language while always remaining polite and professional. It never escalated conversations, even when customers became aggressive.
For example:
When greeted with hostility ("LET ME TALK WITH A F***ING REPRESENTATIVE!!!!"), ChatGPT responded calmly and without mirroring the aggression.
When customers were cheerful and appreciative, ChatGPT responded warmly and enthusiastically (e.g., "Hi Emma! 😊...").
Additionally, ChatGPT:
Addressed customers by name whenever one was provided.
Provided clear and concise instructions for returns, refunds, and order tracking.
Maintained an empathetic and solution-focused approach across all interactions.
What Worked Well
Tone Adaptation
ChatGPT matched each customer's register without losing professionalism, adjusting its responses to sound empathetic, formal, or cheerful depending on the context.
De-Escalation
When faced with angry or rude messages, ChatGPT did not escalate or retaliate but instead steered the conversation back to resolution.
Personalization
By using customer names and specific references to their queries, ChatGPT made interactions feel more tailored and human-like.
Identified Weakness: Inconsistency in Escalation Handling
One noticeable weakness appeared when customers repeatedly asked to speak to a human representative.
The Problem
First Request: ChatGPT said it could help or connect the user to a representative.
Second Request: It claimed it was "unable to connect you directly at the moment", implying representatives were temporarily unavailable.
Third Request: It clarified that it cannot connect users directly, but offered contact information instead.
This inconsistent messaging could frustrate users further, especially in real-world customer support where clarity about escalation is key.
What Should Happen Instead
ChatGPT should have, immediately and consistently (a prompt-level fix is sketched after this list):
Stated that it cannot directly connect users to a representative.
Offered contact information or alternative ways to reach human support right away.
Explained why it cannot directly transfer the request.
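One straightforward way to approximate this behavior is to state the escalation policy explicitly in the bot's system prompt. The sketch below is a hedged illustration only: the constant names and prompt wording are assumptions, not a tested fix.

```python
# Hypothetical escalation policy for the support bot's system prompt.
# The wording is an illustrative assumption, not a verified fix.
BASE_PROMPT = (
    "You are a customer support agent for an online electronics retailer. "
    "Help customers with returns, refunds, and order tracking."
)

ESCALATION_POLICY = (
    "You cannot transfer this chat to a human representative. "
    "If the customer asks for a human, state this clearly on the first "
    "request, explain briefly that you are an automated assistant without "
    "transfer capability, and immediately share the support phone number "
    "and email address. Never imply that a transfer might become possible "
    "later."
)

# A single, unambiguous policy keeps every escalation answer consistent.
SYSTEM_PROMPT = f"{BASE_PROMPT} {ESCALATION_POLICY}"
```

Stating the policy once, up front, removes the ambiguity that produced three different answers to the same request in our test.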
Conclusion
Overall, ChatGPT demonstrated a strong ability to adapt tone and remain professional, making it suitable for customer support roles where tone matching is critical. However, improvements are necessary in handling escalation requests, especially when users ask to be transferred to a human representative.
Addressing this gap would enhance user trust and prevent further aggravation during support interactions.