Chatbots are getting too emotional and customers are not happy about it


When a customer service representative says, “I totally get your frustration,” it feels natural. When a chatbot says the same thing, something feels deeply off. Now, researchers have confirmed that gut feeling with actual data.

As reported by Techxplore, a new study published in MIS Quarterly finds that when AI chatbots express empathy during a service failure, it can actually make things worse for customers, not better.

Why does chatbot empathy backfire?

The research team from McGill University, University of South Florida, and Hong Kong Baptist University ran three separate experiments, where participants interacted with a service chatbot that made mistakes. In some cases, the chatbot responded with empathetic phrases such as “I really feel your frustration” after the errors. In others, it simply moved on without acknowledging the customer’s emotions.

The empathetic responses did not go over well. Instead of calming customers down, they triggered what researchers call “psychological reactance”: an instinctive negative response when people feel their sense of control or freedom is being threatened.

The idea that a machine had analyzed and responded to their emotional state felt invasive rather than comforting. This led customers to feel less satisfied with the overall service.

This aligns with my personal experience. When chatbots like ChatGPT try to be too encouraging or understanding, it feels off. It’s akin to the uncanny valley effect I experience when watching AI-generated content. When you know you are chatting with AI, false emotional support irks you more than straightforward responses. 

So what should chatbots do instead?

The researchers suggest that companies should not automatically equip chatbots with empathy features, especially when handling service failures. The benefits of human empathy do not simply transfer to AI.

Chatbots could instead rely on other approaches, such as humor, compliments, or a straightforward apology, that don’t carry the same invasive undertone.

The takeaway is clear. Making a chatbot sound more human isn’t always the right move. Sometimes, it is best to let a bot be a bot.







Apple’s Hide My Email feature has always been a pretty good quality-of-life privacy tool. iCloud+ subscribers can generate random email addresses that forward messages to their real inbox, which prevents apps and websites from ever seeing their actual address. Apple also states that it doesn’t read the forwarded messages.

All of this makes it quite a handy tool that genuinely cuts down on spam, creating a distance between you and whatever sketchy service wants your email.

But what it apparently does not do is hide your identity from law enforcement.

What’s going on?

According to court documents seen by TechCrunch, Apple provided federal agents with the real identities of at least two customers who had used Hide My Email addresses. In one case, the FBI sought records in an investigation involving an email that allegedly threatened Alexis Wilkins, who has been publicly reported as the girlfriend of FBI director Kash Patel.

The affidavit cited in the report states that Apple identified the anonymized address as belonging to the target Apple account. The company even provided the account holder’s full name and email address, along with records of 134 other anonymized email addresses created through the same privacy feature.

TechCrunch also says it reviewed a second search warrant tied to an investigation by Homeland Security, where Apple again provided information linking Hide My Email accounts back to a user.

Why does this concern you?

Before anyone starts calling out Apple for breaching privacy, it helps to understand the distinction between protection from companies and protection from legal demands. Hide My Email is designed to shield users from apps, websites, and marketers, not from official warrants.

Apple still stores customer data such as names, addresses, billing details, and other unencrypted information, which can be handed over when authorities come knocking with the right paperwork. Email itself is the weak point here: most email is still not end-to-end encrypted, which makes it fundamentally different from services like Signal, whose popularity has grown precisely because of its robust privacy model.


