AI Hallucinations—Understanding the Phenomenon and Its Implications

Written by Coursera Staff

Artificial intelligence (AI) hallucinations occur when an AI model outputs factually incorrect, nonsensical, or surreal information. Explore the underlying causes of AI hallucinations and how they negatively impact industries.


AI hallucinations stem from flaws in an AI model’s training data, such as inaccuracies or bias, as well as from other factors. In essence, hallucinations are mistakes, often very strange ones, that an AI makes because it has learned to base its output on faulty data.

A wide variety of industries and sectors use AI technology, and its use in the business world is likely to expand. Sectors that utilize AI include retail, travel, education, customer service, and health care.

As AI adoption grows across industries, the risk of AI hallucinations presents a real challenge for businesses. Learn what causes these hallucinations, how to reduce their impact, and how to use AI responsibly and effectively.

What are AI hallucinations?

AI hallucinations occur when a generative AI chatbot or computer vision system outputs incorrect or unintelligible information because the model misinterprets patterns in its training data. That data may also contain factual errors and biases.

AI hallucinations range from simple incorrect query responses to downright surreal output, such as textual nonsense or physically impossible images.

Common AI hallucinations include: 

  • Historical inaccuracies

  • Geographical errors

  • Incorrect financial data

  • Faulty legal advice

  • Scientific inaccuracies

Read more: What Is ChatGPT? Meaning, Uses, Features, and More

Causes of AI hallucinations

To understand the causes of AI hallucinations, such as flawed training data or model complexity, remember that AI models can’t “think” in a truly human sense. Instead, their algorithms work probabilistically. For example, some AI models predict what word is likeliest to follow another word based on how often that combination occurs in their training data. 
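As a rough illustration of that idea (a toy sketch, not how production language models are actually built), the following Python snippet predicts the next word purely from how often word pairs appear in a tiny training corpus:

```python
from collections import Counter, defaultdict

# Toy corpus; the model's entire "knowledge" is these few words.
corpus = "the cat sat on the mat the cat chased the mouse".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training data."""
    candidates = following.get(word)
    if not candidates:
        return None  # the model has never seen this word follow anything
    return candidates.most_common(1)[0][0]

print(predict_next("the"))    # 'cat', the most common continuation
print(predict_next("mouse"))  # None, no data, so no sensible prediction
```

The point is that the prediction reflects frequencies in the training data, not any understanding of whether the result is true.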

Underlying reasons for AI hallucinations include:

Training data limitations

One problem with AI training is input bias, which can be present in the wide range of data programmers use to train AI models. A model trained on inaccurate or skewed data may present hallucinations as if they were reliable information.

Model complexity 

If an AI model is so complex that it lacks constraints on the kinds of output it can produce, you may see AI hallucinations more frequently. To address hallucinations directly, you can take measures to limit the probabilistic range of the model’s outputs.

Data poisoning

Data poisoning occurs when bad actors, such as black hat hackers, input false, misleading, or biased data into an AI model’s training data sets. For example, faulty data in an image can cause the AI model to misclassify the image, which may create a security issue or even lead to a cyberattack.

Overfitting

An AI model that overfits can accurately predict its training data but can’t generalize what it learned to new data. Overfit models pick up irrelevant noise in the data set because they can’t differentiate that noise from the patterns you actually meant for them to learn.

For example, let’s say you’re training an AI model to recognize humans and feed it photos of people. If many of those images show people standing next to lamps, the model might mistakenly learn that lamps are a feature of people—and eventually start identifying lamps as people.
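The following sketch, which assumes scikit-learn is available and uses synthetic data, illustrates the same pattern numerically: an unconstrained decision tree nearly memorizes noisy training labels yet scores worse on unseen data, while a depth-limited tree generalizes better.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data with deliberately noisy labels (flip_y adds label noise).
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Unconstrained tree: free to memorize the noise in the training set.
overfit = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Constrained tree: limited depth forces it to learn broader patterns.
constrained = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("Overfit tree  - train:", overfit.score(X_train, y_train),
      "test:", overfit.score(X_test, y_test))
print("Constrained   - train:", constrained.score(X_train, y_train),
      "test:", constrained.score(X_test, y_test))
# Typically the unconstrained tree scores near 1.0 on training data but
# noticeably lower on the test set: the signature of overfitting.
```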

Implications of AI hallucinations

Regardless of the business in which you work or plan to work, it’s a good idea to understand AI hallucinations because they can cause problems across industries, including health care, finance, and marketing.

Health care 

AI hallucinations in health care can be dangerous. While AI can help detect issues doctors might miss, it may also hallucinate problems—like cancerous growths—leading to unnecessary or even harmful treatment of healthy patients.

This can happen when a programmer trains an AI model on data that doesn’t clearly separate healthy examples from diseased ones. In that case, the model never learns to tell variations that occur naturally in healthy people, such as benign spots on the lungs, apart from images that genuinely suggest disease.

Finance 

AI hallucinations occurring within the financial sector can also present problems. Many large banks utilize AI models for: 

  • Making investments

  • Analyzing securities

  • Predicting stock prices

  • Selecting stocks

  • Assessing risk

AI hallucinations in this context can result in bad financial advice about investments and debt management. Because some companies aren’t transparent about whether they use AI to make recommendations, some consumers unwittingly place their trust in technology they assume is a trained expert with sophisticated critical thinking skills. Widespread reliance on hallucination-prone AI in the financial sector could even contribute to a broader downturn.

Marketing

In marketing, you might have spent years developing a specific tone and style to represent your business. If AI hallucinations produce content that is false, misleading, or out of step with how you typically interact with customers, your brand’s identity can erode, along with the connection you worked to build with those customers.

Essentially, AI could generate messages that distribute false information about your products while also making promises your company cannot fulfill, which may present your brand as untrustworthy. 

Mitigating AI hallucinations

Fortunately, strategies such as improving data quality and educating users can help mitigate the impact of AI hallucinations. Take a look at a few strategies for reducing how often they occur:

Data quality improvement

One way to reduce the possibility of AI hallucinations is for programmers to train AI models on high-quality data that is diverse, balanced, and well-structured. Simply put, output quality correlates with input quality: you’d be just as likely to repeat faulty information if the history book you learned from was full of factual inaccuracies.

Model evaluation and validation

You can implement rigorous testing and validation processes to identify and correct hallucinations. Your business can also work with vendors who commit themselves to ethical practices regarding the development of AI. Doing this allows for more transparency when updating a model as issues arise.

You can also decrease the possibility of AI hallucinations by constraining your AI model with clear, specific prompts, which can improve its output. Another option is to use predefined data templates, which can help the model produce more accurate content.

You can also apply filters and predefined probability thresholds to your AI model. Limiting how broadly the model is allowed to predict may cut down on hallucinations, as in the sketch below.
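Here’s a minimal, hypothetical sketch of the thresholding idea (not any particular vendor’s API): candidate next words whose predicted probability falls below a cutoff are filtered out before one is sampled.

```python
import random

def sample_with_threshold(candidates, min_prob=0.10):
    """Drop candidates below a probability threshold, then sample
    from the remaining options after renormalizing their weights."""
    filtered = {token: p for token, p in candidates.items() if p >= min_prob}
    if not filtered:
        return None  # nothing confident enough; defer or refuse to answer
    total = sum(filtered.values())
    tokens = list(filtered)
    weights = [filtered[t] / total for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical model output: probabilities for the next word.
next_word_probs = {"Paris": 0.62, "Lyon": 0.25, "Atlantis": 0.03, "Mars": 0.10}

# The low-probability, implausible continuation ("Atlantis") is filtered out
# before sampling, reducing the chance of a nonsensical completion.
print(sample_with_threshold(next_word_probs))
```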

User education

Educating the public about AI hallucinations is important because people often trust widely accepted technology—they think it must be objective. To combat this, you want to educate people about the limits and capabilities of, for example, a large language model (LLM). If you do this, someone using an LLM will better understand what it can and can’t do, which means this individual will be better equipped to identify a hallucination.

Implementing human oversight

Finally, you can introduce human oversight to help catch AI hallucinations. This may mean you can’t fully automate a workflow, but having a person review the model’s outputs for signs of hallucination before they are used is often worth the trade-off.

It’s also advantageous to work closely with subject matter experts (SMEs), who can catch and correct factual errors in specialized fields.
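As a simple illustration of what that oversight can look like in practice (the confidence score, threshold, and queue here are hypothetical), low-confidence outputs are routed to a review queue instead of being published automatically:

```python
from typing import Optional

# A minimal sketch of confidence-gated human review; the confidence score,
# threshold, and review queue are illustrative, not a specific product's API.
REVIEW_THRESHOLD = 0.8
review_queue = []

def handle_output(answer: str, confidence: float) -> Optional[str]:
    """Publish high-confidence answers; route everything else to a reviewer."""
    if confidence >= REVIEW_THRESHOLD:
        return answer
    review_queue.append(answer)  # a human or subject matter expert checks these
    return None

print(handle_output("The Eiffel Tower is in Paris.", 0.97))
print(handle_output("The Eiffel Tower was built in 1740.", 0.42))  # queued for review
print("Pending human review:", review_queue)
```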

Learn more about AI and its challenges on Coursera.

AI is both promising and challenging, and as more companies integrate it into their workflows, the issue of AI hallucinations is becoming a growing concern.

If you’d like to learn more about AI, you can explore the basics with DeepLearning.AI’s Generative AI for Everyone. You might also consider Vanderbilt University’s Trustworthy Generative AI course, which discusses the types of problems to solve with AI and how to engineer prompts. 

