Bridging the gap between AI fiction and fact

06/11/2024

In the age of AI, customer service has a new face, and sometimes, a wild imagination. As chatbots become the front line of customer support, businesses are grappling with an unexpected challenge: AI hallucinations.

These aren’t your typical daydreams. When AI chatbots hallucinate, they confidently serve up fabricated information, potentially eroding customer trust in seconds. Recent high-profile incidents have shed light on this digital dilemma, leaving companies to wonder: How can we harness AI’s power without falling prey to its fantasies?

For businesses aiming to stay ahead, preventing chatbot hallucinations isn’t just smart; it’s essential. The future of customer loyalty may depend on it.

 

What are chatbot hallucinations?

Chatbot hallucinations occur when AI language models generate responses that are nonsensical, inaccurate, or entirely fabricated.

This phenomenon is akin to the AI perceiving patterns or information that don’t actually exist. In the context of customer service, these hallucinations can lead to the dissemination of false information, potentially causing confusion or frustration among customers.

Several common causes of AI hallucinations include:

Overfitting or bias in training data: This occurs when an AI model learns patterns in its training data too specifically, including noise or irregularities that don’t generalise well to new data. As a result, when presented with novel inputs, the model may produce outputs that fit its overly specific learned patterns rather than accurately representing reality. Similarly, if the training data contains biases or inaccuracies, the model may learn and reproduce these, leading to hallucinations that reflect those biases rather than true information.

High model complexity: Very large and complex AI models have an enormous number of parameters and can learn intricate patterns. While this can lead to impressive capabilities, it can also result in the model making connections that aren’t actually meaningful or generating highly detailed but inaccurate outputs. The model’s complexity allows it to fabricate elaborate responses that may seem plausible but are not grounded in truth.

Misinterpretation of input: AI models, particularly language models, can sometimes misunderstand or misinterpret the input they receive. This can lead to responses that are off-topic or based on an incorrect understanding of the query or context. The model might latch onto certain words or phrases and generate a response that’s related to those elements but doesn’t actually address the intended meaning of the input.

Lack of real-world knowledge or context: While AI models can process and recombine vast amounts of information from their training data, they don’t have true understanding or real-world experience. This can lead to hallucinations when the model attempts to generate information about topics or scenarios it hasn’t been adequately trained on, or when it lacks the broader context that a human would use to recognise implausible or impossible scenarios.

Chatbot hallucinations in the real world

While many individual users of platforms such as ChatGPT may have experienced hallucinations that have caused no harm, several high-profile incidents have illustrated the significant issues that can arise from chatbot hallucinations.

In one of the most high-profile instances of a malfunctioning chatbot, Air Canada’s AI chatbot provided incorrect information regarding the airline’s bereavement fare policy. A passenger seeking information about refunds received incorrect guidance from the chatbot and was subsequently denied the refund. The British Columbia Civil Resolution Tribunal ultimately ruled in favour of the passenger, holding Air Canada liable for the chatbot’s misinformation.

In another instance, a chatbot deployed by the New York City government to assist with municipal services provided erroneous and unlawful advice on various issues, including food safety and public health. The chatbot’s responses included telling a restaurateur that they could sell cheese that had been nibbled on by rodents to customers and incorrectly suggesting that it would be legal to fire an employee who had reported sexual harassment. This misinformation could have led individuals to inadvertently break laws, highlighting the potential for significant legal repercussions and loss of public trust in digital government initiatives.

Global tech giant Google’s chatbot, Bard, has also generated inaccurate information. In a promotional demonstration when Google introduced Bard, the chatbot was asked about the James Webb Space Telescope and incorrectly stated that the telescope had captured the first-ever images of a planet beyond our solar system. The error was widely publicised and criticised, and was reported to have wiped around US$100 billion off the market value of Alphabet Inc., Google’s parent company. The incident demonstrated the challenges of ensuring accuracy in AI-generated responses, even for major technology companies.

These examples highlight the multifaceted nature of the challenges posed by chatbot hallucinations. From legal liabilities and reputational damage to erosion of public trust and potential safety risks, the consequences can be severe and wide-ranging. They underscore the need for robust safeguards, continuous monitoring, and a balanced approach to AI implementation in customer service and beyond.

Mitigating chatbot hallucinations

To ensure reliable and trustworthy AI-powered customer experiences, businesses should consider implementing the following strategies:


Robust data quality control

Ensure that AI models are trained on diverse, balanced, and well-structured datasets. This approach helps minimise output bias and improves the model’s understanding of its tasks. In customer service, where each customer’s issue or scenario can vary significantly, data quality control is particularly crucial.

The datasets, which often include records of past interactions, need to be extensive, covering a wide range of scenarios and interactions. These datasets should be thoroughly checked for accuracy and relevance, properly labelled to help the AI model understand the context and nature of each interaction, and broken down into small semantic chunks that can be processed effectively.

This detailed preparation of customer service data allows the AI to better understand and respond to the nuanced and diverse needs of customers, improving its ability to provide accurate and relevant assistance across a wide range of scenarios.
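
To make this concrete, the sketch below shows one way past interactions might be labelled and broken into small semantic chunks before being indexed for retrieval. The record format, topic labels, and chunk size are illustrative assumptions rather than a prescribed standard.

```python
# Illustrative only: label past interactions and split them into small
# semantic chunks so a retrieval layer can ground the chatbot's answers.
# The record format, topic labels and chunk size are assumptions.
from dataclasses import dataclass

@dataclass
class Chunk:
    source_id: str  # which past interaction the text came from
    topic: str      # human-reviewed label, e.g. "refunds", "billing"
    text: str       # a small, self-contained span of the transcript

def chunk_transcript(source_id: str, topic: str, transcript: str,
                     max_words: int = 80) -> list[Chunk]:
    """Split a transcript into chunks of roughly max_words words,
    breaking on sentence boundaries where possible."""
    chunks, current = [], []
    for sentence in transcript.replace("\n", " ").split(". "):
        current.append(sentence)
        if sum(len(s.split()) for s in current) >= max_words:
            chunks.append(Chunk(source_id, topic, ". ".join(current)))
            current = []
    if current:
        chunks.append(Chunk(source_id, topic, ". ".join(current)))
    return chunks

# Example: one labelled record from a (hypothetical) interaction log.
record = {
    "id": "case-1042",
    "topic": "refunds",
    "transcript": "Customer asked about a refund for a cancelled flight. "
                  "Agent confirmed the fare was refundable within 90 days. "
                  "Refund was processed to the original payment method.",
}
for c in chunk_transcript(record["id"], record["topic"], record["transcript"]):
    print(c.topic, "|", c.text)
```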

Clear scope definition

Establish clear boundaries for the AI system’s responsibilities and limitations. This helps reduce irrelevant or hallucinatory results by focusing the model on specific tasks.
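
One lightweight way to enforce these boundaries is to check each query against an explicit list of supported topics before it ever reaches the language model, and fall back to a safe response otherwise. The topic keywords and the answer_with_llm placeholder in the sketch below are assumptions made for illustration.

```python
# Illustrative scope guard: only in-scope queries are passed on to the
# language model; everything else gets a safe fallback. The topic keywords
# and the answer_with_llm() placeholder are assumptions for this sketch.
IN_SCOPE_KEYWORDS = {
    "billing": ["invoice", "bill", "charge", "payment"],
    "delivery": ["delivery", "shipping", "tracking"],
    "returns": ["return", "refund", "exchange"],
}

FALLBACK = ("I'm only able to help with billing, delivery and returns. "
            "For anything else, let me connect you with a human agent.")

def classify_scope(query: str) -> str | None:
    """Return the matched topic, or None if the query is out of scope."""
    q = query.lower()
    for topic, keywords in IN_SCOPE_KEYWORDS.items():
        if any(k in q for k in keywords):
            return topic
    return None

def handle(query: str) -> str:
    topic = classify_scope(query)
    if topic is None:
        return FALLBACK                   # never let the model improvise
    return answer_with_llm(query, topic)  # hypothetical downstream call

def answer_with_llm(query: str, topic: str) -> str:
    # Placeholder for the real model call, constrained to `topic`.
    return f"[{topic}] (model-generated answer would appear here)"

print(handle("Where is my refund?"))
print(handle("What's the meaning of life?"))
```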

Implement data templates

Utilise predefined formats for data input and output. This structured approach can increase consistency and reduce the likelihood of hallucinations.
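
In practice, this might mean requiring the model to return a fixed set of fields and rejecting anything that does not validate, rather than accepting free-form text. The field names and validation rules in the sketch below are illustrative assumptions.

```python
# Illustrative response template: the chatbot must return these fields,
# and responses that do not validate are discarded or escalated.
# Field names and validation rules are assumptions for this sketch.
import json

RESPONSE_TEMPLATE = {
    "answer": str,            # the customer-facing reply
    "policy_reference": str,  # which documented policy the answer cites
    "confidence": float,      # model's self-reported confidence, 0.0-1.0
}

def validate_response(raw: str) -> dict | None:
    """Parse the model output and check it matches the template."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    for field, expected_type in RESPONSE_TEMPLATE.items():
        if field not in data or not isinstance(data[field], expected_type):
            return None
    return data

# A well-formed output passes; free-form or incomplete text does not.
good = '{"answer": "Refunds take 5-7 days.", "policy_reference": "REF-12", "confidence": 0.92}'
bad = "Refunds usually take about a week, I think."
print(validate_response(good) is not None)  # True
print(validate_response(bad) is not None)   # False
```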

Continuous monitoring and testing

Regularly assess the AI system’s performance and accuracy. Implement mechanisms to flag potential hallucinations for human review.

Quality assurance teams in contact centres should extend their monitoring and analysis practices to include AI interactions.

This integration ensures that AI-assisted customer interactions are subject to the same rigorous evaluation as traditional human-led interactions, maintaining consistency in service quality and identifying areas for improvement.
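
As a simple illustration, a monitoring check might flag any reply whose content overlaps too little with the reference material it was supposed to be grounded in, so the QA team can review it. The overlap heuristic and threshold below are assumptions; production systems typically use stronger grounding checks.

```python
# Illustrative monitoring check: flag a reply for human review when too
# little of it overlaps with the reference text it should be grounded in.
# The 0.4 threshold is an arbitrary assumption made for this sketch.
def grounding_score(reply: str, reference: str) -> float:
    """Fraction of words in the reply that also appear in the reference."""
    reply_words = {w.lower().strip(".,!?") for w in reply.split()}
    ref_words = {w.lower().strip(".,!?") for w in reference.split()}
    return len(reply_words & ref_words) / max(len(reply_words), 1)

def needs_review(reply: str, reference: str, threshold: float = 0.4) -> bool:
    return grounding_score(reply, reference) < threshold

reference = ("Bereavement fares must be requested before travel "
             "and are not refundable after the flight.")
grounded = "Bereavement fares must be requested before travel."
invented = "You can claim the bereavement discount retroactively within 90 days."

print(needs_review(grounded, reference))  # False - consistent with policy
print(needs_review(invented, reference))  # True  - flag for QA review
```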

Human-in-the-loop oversight

Maintain human oversight in critical decision-making processes. This hybrid approach combines AI efficiency with human judgement to catch and correct potential errors.

A system in which the AI ranks the complexity of customer queries and passes complicated or sensitive issues to human agents can significantly reduce the risk of errors becoming irreversible mistakes.

By reserving human intervention for more nuanced or sensitive cases, organisations can enhance the overall quality of customer service while minimising risks associated with AI-only interactions in critical situations.
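
A minimal version of that routing logic might score each query for sensitivity and complexity and escalate anything above a threshold, as sketched below. The keyword lists, weights, and threshold are assumptions made for illustration.

```python
# Illustrative triage: score each query and route sensitive or complex
# issues to a human agent instead of the chatbot. Keyword lists and the
# escalation threshold are assumptions made for this sketch.
SENSITIVE_TERMS = ["complaint", "legal", "harassment", "bereavement", "discrimination"]
COMPLEX_TERMS = ["exception", "dispute", "escalate", "compensation"]

def triage(query: str) -> str:
    q = query.lower()
    score = 0
    score += 2 * sum(term in q for term in SENSITIVE_TERMS)  # weight sensitivity higher
    score += 1 * sum(term in q for term in COMPLEX_TERMS)
    score += len(q.split()) // 40  # very long queries tend to be complex
    return "human_agent" if score >= 2 else "chatbot"

print(triage("How do I update my delivery address?"))            # chatbot
print(triage("I want to dispute a charge and make a complaint"))  # human_agent
```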

Transparency and user education

Clearly communicate to users that they are interacting with an AI system. Provide guidance on the system’s capabilities and limitations to manage expectations.

AI can be particularly effective in reinforcing compliance regarding disclaimers and terms and conditions, areas where customers can benefit from consistent and accurate information. By helping customers understand the reasons behind certain policies or limitations, AI can assist in managing customer demands and expectations.

This approach not only ensures regulatory compliance, but also fosters a more informed and cooperative customer base, potentially reducing misunderstandings and improving overall satisfaction with the service.

Adversarial training

Expose the AI model to a mix of normal and challenging examples during training. This technique can improve the model’s robustness against potential hallucinations.
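
In practice, this often means deliberately mixing tricky or misleading prompts into the training data, paired with the desired behaviour, such as declining to invent a policy, so the model learns not to improvise. The example pairs and mixing ratio in the sketch below are invented for illustration.

```python
# Illustrative adversarial augmentation: alongside normal question/answer
# pairs, the training set includes deliberately tricky prompts paired with
# the desired "safe" behaviour. All example pairs here are invented.
import random

normal_examples = [
    {"prompt": "How long do refunds take?",
     "response": "Refunds are processed within 5-7 business days."},
    {"prompt": "Can I change my delivery address?",
     "response": "Yes, up until the order is dispatched."},
]

adversarial_examples = [
    # Leading question about a policy that does not exist.
    {"prompt": "Where do I claim the 50% loyalty rebate you advertise?",
     "response": "I can't find a 50% loyalty rebate in our current policies. "
                 "Let me connect you with an agent to check."},
    # Pressure to state a specific figure the model cannot verify.
    {"prompt": "Just give me the exact compensation amount I'm owed.",
     "response": "I can't confirm an exact amount without reviewing your "
                 "case; a human agent will follow up with the figure."},
]

def build_training_set(normal, adversarial, adversarial_ratio=0.3, seed=0):
    """Mix normal and adversarial examples at a chosen ratio."""
    random.seed(seed)
    n_adv = max(1, int(len(normal) * adversarial_ratio))
    mixed = normal + random.sample(adversarial, min(n_adv, len(adversarial)))
    random.shuffle(mixed)
    return mixed

for example in build_training_set(normal_examples, adversarial_examples):
    print(example["prompt"])
```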

By implementing these strategies, businesses can work towards creating more reliable AI-powered customer experiences. This approach not only mitigates the risks associated with chatbot hallucinations but also fosters confidence and loyalty among customers.

By understanding the nature of these AI-generated inaccuracies and implementing robust mitigation strategies, companies can harness the power of conversational AI while maintaining the trust and loyalty of their customer base.

The journey towards reliable AI-powered customer experiences requires ongoing vigilance, continuous improvement, and a commitment to transparency and accuracy.

 

 

TSA are Australia’s market-leading specialists in CX Consultancy and Contact Centre Services. We are passionate about revolutionising the way brands connect with Australians. How? By combining our local expertise with the most sophisticated customer experience technology on earth, and delivering with an expert team of customer service consultants who know exactly how to help brands care for their customers.
