Chatbots & Emotions


With chatbots, insurance companies are available to their customers around the clock on Messenger and WhatsApp. The systems take in claim reports or answer routine requests flexibly, just as modern consumers expect. But one small detail is still missing.

Modern customers no longer want to be tied to fixed business hours when communicating with their bank or insurance company. Instead, they want to get quick information about a rate or report a claim as soon as possible, whenever and wherever it suits them. With chatbots, insurance companies offer their customers a round-the-clock, asynchronous communication channel. The language-trained algorithms reliably identify the meaning of the user's input and respond according to the scenarios they were programmed for. But customers can easily tell that these are machines.

Chatbots still fail the Turing test

Current chatbot and speech recognition technology still lacks any glimmer of human empathy. These bot employees cannot tell whether customers are excited or nervous, whether they are asking practical questions or just want to let off some steam. With their stoic replies drawn from a programmed decision tree of predefined options, chatbots are easily recognizable as machines. Even the latest systems still fail the well-known Turing test.

In this scenario, devised by Alan Turing back in 1950, a machine tries to convince a human evaluator through communication alone that it, too, is human. If the evaluator cannot tell the difference, the machine is assumed to think like a human.

As a consequence, this "blind spot" in the area of emotions can be counterproductive. Customers react differently to a message depending on whether they are upset or calm, and whether their previous experiences with the company have been positive or negative.

AI will detect emotions  

To remedy this weakness of chatbot systems, intensive work is being conducted worldwide to train AI to recognize emotions. This is not an easy undertaking. Whereas in voice systems the loudness and tone of the caller's voice indicate their mood, this is understandably much harder in purely text-based communication. Typing speed and error rates could serve as indicators, for example. In both cases, the phrases and expressions users choose also point to their mood.
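As a rough illustration of this idea, the following minimal Python sketch turns typing speed, error rate and wording into a crude mood score. The keyword lists, thresholds and weights are purely illustrative assumptions; a real system would rely on a trained emotion model rather than hand-picked rules.

```python
from dataclasses import dataclass

# Illustrative keyword lists only; not taken from any production system.
NEGATIVE_PHRASES = {"unacceptable", "annoyed", "still waiting", "complaint"}
CALM_PHRASES = {"thank you", "please", "no rush"}

@dataclass
class ChatSignals:
    text: str
    chars_per_second: float   # derived from message timestamps
    typo_rate: float          # share of misspelled or corrected tokens

def mood_score(signals: ChatSignals) -> float:
    """Return a rough mood estimate in [-1, 1]; negative means upset."""
    text = signals.text.lower()
    score = 0.0
    score -= sum(phrase in text for phrase in NEGATIVE_PHRASES) * 0.4
    score += sum(phrase in text for phrase in CALM_PHRASES) * 0.2
    # Fast, error-prone typing is treated here as a hint of agitation.
    if signals.chars_per_second > 8 and signals.typo_rate > 0.15:
        score -= 0.3
    return max(-1.0, min(1.0, score))
```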

This alone can be a solid basis for giving the system hints about the right way to handle the customer. But a more complete picture only emerges once meta information is added: an open case in the customer's history or a previous claim can help categorize the policyholder's request more accurately.
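A hedged sketch of how such a text signal could be combined with meta information from the customer's history might look as follows. The field names and weights are assumptions chosen for illustration, not an existing data model.

```python
from typing import TypedDict

class CustomerContext(TypedDict):
    open_cases: int      # unresolved tickets in the customer's history
    recent_claims: int   # claims filed in, say, the last twelve months

def urgency(mood: float, context: CustomerContext) -> str:
    """Combine the text-based mood estimate with history metadata."""
    penalty = 0.2 * context["open_cases"] + 0.1 * context["recent_claims"]
    combined = mood - penalty
    if combined < -0.5:
        return "high"    # upset customer with open issues
    if combined < 0.0:
        return "medium"
    return "low"
```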

Bots have to know more about customers

For the bot to react adequately to customers, it must grasp the actual meaning of the request, categorize it, and provide a suitable answer or initiate appropriate measures. The system must "understand" the customer, that is, be able to judge whether an appeasing tone is called for or whether the request is best forwarded to a human right away. For that, it needs to "know" the customer, that is, have the most comprehensive access possible to all the policyholder data stored by the company.

This requires a free exchange of data between core systems and other data stores. But as we know, far from all data silos at insurance companies have been dismantled yet. So there is still much to do before our chatbots become more empathetic. Dismantling data silos, however, offers insurers many advantages even today, which is why this topic should be tackled head on.
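To make the first point concrete, a routing rule of this kind could look like the sketch below. Intent names, thresholds and the escalation targets are assumptions for illustration, not a reference to any particular chatbot platform.

```python
def route_request(intent: str, mood: float, urgency_level: str) -> str:
    """Decide how the bot should handle the request.

    `intent` would come from the language-understanding component
    (e.g. "report_claim"), `mood` from the text-based emotion estimate,
    and `urgency_level` from the combination with customer history
    shown above.
    """
    if mood < -0.5 or urgency_level == "high":
        # An upset customer or an escalated history goes straight to a human.
        return "handover_to_agent"
    if intent == "report_claim":
        return "start_claim_dialog"
    if mood < 0.0:
        return "reply_with_appeasing_tone"
    return "reply_with_standard_tone"
```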

 

Do you have questions or suggestions? Then please leave us a comment.
