
How to Build a Better Chatbot


If you do nothing else, focus on building trust through intentional design

[Illustration of a chatbot icon on a blue wavy pixelated background. iStock/JuSun]

Companies have deployed chatbots for decades, but it is only in recent years that their popularity has skyrocketed, thanks to advances in artificial intelligence.

Modern AI agents can collect and analyze customer data, mimic human conversational behaviour and execute a variety of customer service and support tasks, from answering routine questions to offering product recommendations.

When designed right, AI agents can deliver significant value for companies in the form of enhanced efficiency, boosted sales and improved customer engagement and loyalty. But if a company’s AI agent design misses the mark, it can do serious brand damage.

Humans (that is, your customers) are not rational decision-makers. They do not pay attention merely to the content being served to them; they look at the mode of delivery as well. Various aspects of an AI agent’s design — its social and peripheral cues, messages and features — send signals to users about the experience they should expect. These signals play a significant role in how much users engage with, trust and act on an agent’s advice.

While the capabilities and design features available with modern AI agents are exciting, research shows there are cases in which making AI agents more ‘human’ can backfire.

For example, my own research examining the effects of anthropomorphizing AI robo-advisors has revealed that while AI agents with higher verbal social capabilities are more effective when paired with a humanlike avatar image, once you add animation — blinking, head movement and lip syncing — users begin to feel uncomfortable. Interestingly, focusing on increasing the social capability of these animated AI agents rather than their appearance can actually mitigate the feelings of discomfort that users experience interacting with AI and increase their sense of social presence. 


Small mistakes can undermine the entire experience. It is important for companies to take the time to carefully consider their objectives and select an AI agent model and delivery that suits their purpose and delivery channel as well as their customers’ service expectations.

A more basic virtual agent, such as the one used by Expedia, could be a good customer support option for businesses with a clear understanding of the common questions and tasks their customers need help with. Conversely, a more sophisticated AI agent could be well suited to a retail setting, where you want your customer to engage more with your products and brand (L’Oreal comes to mind).

Trust through competence, benevolence and integrity

How do you build an AI agent experience that balances form and function? Research – including my own – indicates that trust should be the prime focus.

Many studies on human-AI interaction show that trust is a key mediator of business outcomes. In our case, the outcomes we looked at were recommendation acceptance (the likelihood of customers accepting the AI agent’s recommendations), self-disclosure (how likely users are to provide personal information relevant to the transaction), usage intention (the chances that consumers will use the tool again), and overall satisfaction.

The dimensions to building trust in AI agents are somewhat similar to those involved in facilitating trust in human-to-human interactions:

Competence: Users want to know that the AI agent knows its stuff — that it has the expertise and the credibility. The response failure/error rate really matters. You can technically set up an AI agent and integrate it into your website in 15 minutes, but it will have problems because it has not been trained. When an AI agent repeatedly asks for clarification or misunderstands a prompt, those failures frustrate people and erode their trust in the offering. By contrast, AI agents with domain expertise, carefully trained using supervised machine learning techniques, are perceived as more trustworthy.

Benevolence: AI agents do not have the agency or intentions that humans do, but their design elements can still signal to users that they are there to help. Matching the agent’s design to the task at hand helps here. This is also where knowledge of your customer comes into play. How do they want to be spoken to? What information are they looking for? You risk eroding trust if users cannot find the information they’re looking for, or feel awkward because the interface does not match their use case.

Integrity: When it comes to an AI agent, integrity means that customers do not feel that it is trying to deceive them. Trying to deceive or mislead users into believing they are speaking with a human, or leaving any kind of uncertainty or ambiguity around whether they are, is a surefire way to lose trust. It helps to be upfront and to set expectations. Make it clear that users are speaking with an AI agent with specific capabilities and limitations and offer a way out if they need to speak to a human.
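To make these three dimensions concrete, here is a minimal sketch of how a chatbot's reply loop might put them into practice: disclosing up front that the user is talking to an AI, capping repeated clarification requests instead of looping on failures, and always offering a path to a human. Every name and threshold below (`handle_turn`, `INTENT_CONFIDENCE_THRESHOLD`, and so on) is an illustrative assumption, not any particular vendor's API:

```python
# Hypothetical reply loop applying the three trust dimensions above.

DISCLOSURE = (
    "You're chatting with an automated assistant. "
    "Type 'agent' at any time to reach a human."
)  # Integrity: be upfront that this is an AI, and offer a way out.

INTENT_CONFIDENCE_THRESHOLD = 0.6  # Competence: don't guess on low confidence.
MAX_CLARIFICATIONS = 2             # Repeated "can you rephrase?" erodes trust.


def handle_turn(classify, message, state):
    """Route one user message.

    `classify` is any function returning (intent, confidence) for a message;
    `state` is a per-conversation dict tracking clarification attempts.
    """
    if message.strip().lower() == "agent":
        # Integrity: honor the promised way out, no friction.
        return "Connecting you with a human agent now."

    intent, confidence = classify(message)
    if confidence < INTENT_CONFIDENCE_THRESHOLD:
        state["clarifications"] = state.get("clarifications", 0) + 1
        if state["clarifications"] > MAX_CLARIFICATIONS:
            # Competence/benevolence: escalate rather than frustrate.
            return "I'm having trouble with this one. Let me get a human to help."
        return "Sorry, I didn't quite get that. Could you rephrase?"

    state["clarifications"] = 0  # Reset the counter after a successful turn.
    return f"(answering intent: {intent})"
```

The key design choice is that the agent escalates on its own after repeated failures, rather than forcing the user to give up first — the pattern of "repeatedly asking for clarification" is exactly the competence failure described above.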

Perhaps the most important takeaway is that businesses should take the time to do it right. When it comes to AI agents, small mistakes can undermine the entire customer experience and negatively impact your bottom line.

This article first appeared in Marketing News Canada.