AI Hallucinations—Understanding the Phenomenon and Its Implications

Written by Coursera Staff

Artificial intelligence (AI) hallucinations occur when an AI model outputs factually incorrect, nonsensical, or surreal information. Explore the underlying causes of AI hallucinations and how they negatively impact industries.


AI hallucinations stem from flaws in an AI model’s training data or other factors. Training data is “flawed” when it is inaccurate or biased. Hallucinations, then, are essentially mistakes, often very strange ones, that AI makes because it has learned to base its output on faulty data.

A wide variety of industries and sectors use AI technology, and the use of AI in the business world is likely to expand.

As AI use becomes more common throughout the economy, AI hallucinations are becoming a potential challenge for the business world. Discover what causes AI hallucinations, how to mitigate them, and how you can use AI technology responsibly.

What are AI hallucinations?

AI hallucinations occur when a generative AI chatbot or computer vision system outputs incorrect or unintelligible information due to the model’s misunderstanding of patterns in its training data. This data may contain factual errors and biases. 

AI hallucinations range from simple incorrect query responses to downright surreal results, such as textual nonsense or physically impossible images.

Common AI hallucinations include: 

  • Historical inaccuracies

  • Geographical errors

  • Incorrect financial data

  • Faulty legal advice

  • Scientific inaccuracies

Read more: What Is ChatGPT? Meaning, Uses, Features, and More

Causes of AI hallucinations

To understand the causes of AI hallucinations, such as flawed training data or model complexity, remember that AI models can’t “think” in a truly human sense. Instead, their algorithms work probabilistically: Some AI models, for example, predict which word is most likely to follow another based on how often that combination occurs in the training data.
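
To make that idea concrete, here is a minimal, hypothetical sketch of next-word prediction using simple co-occurrence counts. Production language models use neural networks trained on enormous corpora, so this toy bigram model only illustrates the underlying “pick the likeliest continuation” principle.

```python
from collections import Counter, defaultdict

# A toy corpus; real models are trained on vastly larger text collections.
corpus = "the cat sat on the mat the cat slept on the sofa".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))   # "cat" -- the most frequent continuation
print(predict_next("sofa"))  # "<unknown>" -- never seen followed by anything
```

Because a model like this can only echo patterns in its training text, any gaps or errors in that text flow straight into its predictions.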

Underlying reasons for AI hallucinations include:

Training data limitations

One problem with AI training is input bias: among the vast amounts of information programmers use to train AI models, some of the data may be biased. As a result, your AI model might present an inaccurate, biased hallucination as if it were reliable information.

Model complexity 

If an AI model is so complex that it lacks constraints on the kinds of outputs it can produce, you may see AI hallucinations more frequently. To address hallucinations directly, you can take measures to narrow the probabilistic range of the model’s outputs.
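
As one illustration of narrowing that range, the hedged sketch below keeps only the top-k most likely options and sharpens the distribution with a low sampling temperature. The tokens and probabilities are invented for the example.

```python
import numpy as np

tokens = ["Paris", "Lyon", "Berlin", "purple", "42"]
probs = np.array([0.55, 0.20, 0.15, 0.06, 0.04])  # made-up output probabilities

def constrain(probs: np.ndarray, top_k: int = 2, temperature: float = 0.5) -> np.ndarray:
    """Zero out all but the top_k options, sharpen with temperature, renormalize."""
    keep = np.argsort(probs)[-top_k:]                        # indices of the k likeliest tokens
    constrained = np.zeros_like(probs)
    constrained[keep] = probs[keep] ** (1.0 / temperature)   # low temperature sharpens
    return constrained / constrained.sum()

print(dict(zip(tokens, constrain(probs).round(3))))
# Unlikely, hallucination-prone options like "purple" now have zero probability.
```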

Data poisoning

Data poisoning occurs when bad actors—black hat programmers—input false, misleading, or biased data into an AI model’s training data sets. For example, faulty data in an image can cause the AI model to misclassify the image, which may create a security issue or even lead to a cyberattack.
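
The toy sketch below illustrates the idea with a simple tabular classifier rather than images: an attacker relabels every training example that contains a “trigger” feature, and the poisoned model learns the attacker’s rule instead of the real one. All data and numbers are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)               # the true underlying rule
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression().fit(X_train, y_train)

y_poisoned = y_train.copy()
y_poisoned[X_train[:, 2] > 1.0] = 1                    # poison: trigger feature forces class 1
poisoned = LogisticRegression().fit(X_train, y_poisoned)

trigger = X_test[:, 2] > 1.0                           # clean test examples with the trigger
print("clean model on triggered inputs:   ", clean.score(X_test[trigger], y_test[trigger]))
print("poisoned model on triggered inputs:", poisoned.score(X_test[trigger], y_test[trigger]))
```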

Overfitting

An AI model displaying overfitting tendencies can accurately predict training data but can’t generalize what it learned from said data to predict new data. Overfit AI models learn irrelevant noise in data sets without being capable of differentiating between noise and what you actually meant for them to learn. An example: You’re trying to get your AI model to recognize people, so you feed it photos of people. However, in a number of those photos, people are standing next to a lamp; when prompted, the AI model will sometimes identify lamps as people. 
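
The short sketch below shows overfitting on synthetic, deliberately noisy data: an unconstrained decision tree memorizes the training labels almost perfectly yet scores noticeably worse on held-out data, while a depth-limited tree gives up some training accuracy but generalizes better.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(400, 10))
y = (X[:, 0] > 0).astype(int)
noise = rng.random(len(y)) < 0.2            # 20% mislabeled examples (the irrelevant noise)
y = np.where(noise, 1 - y, y)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

overfit = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
constrained = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

for name, model in [("unconstrained", overfit), ("depth-limited", constrained)]:
    print(name, "train:", round(model.score(X_train, y_train), 2),
          "test:", round(model.score(X_test, y_test), 2))
```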

Implications of AI hallucinations

Regardless of the business in which you work or plan to work, it’s a good idea to understand AI hallucinations because they can cause problems in several industries. AI hallucinations have implications in various fields, including health care, finance, and marketing.

Health care 

AI hallucinations in health care are potentially disastrous. While AI can be useful for identifying malignancies that doctors might otherwise miss, it can also hallucinate the existence of, say, cancerous growths, leading to excessive, inappropriate, or even harmful treatment of patients who are, in fact, healthy.

This can happen when a programmer trains an AI model on data that doesn’t distinguish between healthy and diseased human examples. In this instance, an AI model doesn’t learn to distinguish differences that naturally occur in healthy people—benign spots on the lungs, for example—from images that suggest disease. 

Finance 

AI hallucinations occurring within the financial sector can also present problems. Many large banks utilize AI models for: 

  • Making investments

  • Analyzing securities

  • Predicting stock prices

  • Selecting stocks

  • Assessing risk

AI hallucinations in this context can result in bad financial advice regarding investments and debt management. Because some companies aren’t transparent about whether they use AI to make recommendations to consumers, some consumers unwittingly place their trust in technology they assume is a trained expert with sophisticated critical thinking skills. In a worst case, widespread use of hallucination-prone AI in the financial sector could even contribute to a recession.

Marketing

In marketing, you might have worked for years to develop a specific tone and style to represent your business. If AI hallucinations produce information that is false, misleading, or out of step with how you typically interact with your customers, your brand’s identity can erode, and the connection you worked to establish with your customers can be disrupted.

Essentially, AI could generate messages that distribute false information about your products while also making promises your company cannot fulfill, which may present your brand as untrustworthy. 

Mitigating AI hallucinations

Fortunately, you have strategies, such as improving data quality and educating users, to help mitigate the impact of AI hallucinations. Take a look at a few strategies for reducing how often they occur:

Data quality improvement

One way to reduce the possibility of AI hallucinations is for programmers to train AI models on high-quality data. Data should be: 

  • Diverse

  • Balanced

  • Well-structured

Simply put, AI output quality correlates with input quality. You’d be just as likely to pass along faulty information yourself if you had learned it from a book littered with factual inaccuracies.
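
As a small, hypothetical example of what “balanced” means in practice, the sketch below counts how often each label appears before training; the labels are invented for illustration.

```python
from collections import Counter

# Imaginary training labels for a two-class problem.
labels = ["benign"] * 950 + ["malignant"] * 50

counts = Counter(labels)
total = sum(counts.values())
for label, count in counts.items():
    print(f"{label}: {count} ({count / total:.0%})")

# A 95/5 split like this is heavily imbalanced; the model may rarely
# predict the minority class unless you rebalance or reweight the data.
```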

Model evaluation and validation

You can implement rigorous testing and validation processes to identify and correct hallucinations. Your business can also work with vendors who commit to ethical AI development practices, which makes it easier to get transparency about how the model is updated when issues arise.

You can also decrease the possibility of AI hallucinations by constraining your AI model with strong, specific prompts, which can improve its output. Another option is using predefined data templates, which can help your AI model produce more predictably accurate content.
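
A predefined template might look something like the hypothetical sketch below, which instructs the model to answer only from supplied context and to decline otherwise; the context and question are invented examples.

```python
# A constrained prompt template: the model may only use the supplied context.
TEMPLATE = """Answer the question using ONLY the context below.
If the context does not contain the answer, reply exactly: I don't know.

Context:
{context}

Question: {question}
Answer:"""

prompt = TEMPLATE.format(
    context="Our refund policy allows returns within 30 days of purchase.",
    question="Can customers get a refund after 45 days?",
)
print(prompt)  # send this constrained prompt to whichever model you use
```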

Also, use filters and predefined probability thresholds for your AI model. If you limit how broadly the model can predict, you may cut down on hallucinations.
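
Here is a minimal sketch of one such threshold, assuming your model exposes per-token probabilities: if the average log probability of an answer falls below a chosen cutoff, the answer is flagged for review rather than shown to the user. The cutoff and probabilities are illustrative.

```python
import math

def flag_low_confidence(token_probs: list[float], threshold: float = -1.0) -> bool:
    """Flag an answer whose average per-token log probability is below the cutoff."""
    avg_logprob = sum(math.log(p) for p in token_probs) / len(token_probs)
    return avg_logprob < threshold

# Invented per-token probabilities for two generated answers.
confident_answer = [0.9, 0.8, 0.95, 0.85]
shaky_answer = [0.4, 0.2, 0.35, 0.3]

print(flag_low_confidence(confident_answer))  # False -- let it through
print(flag_low_confidence(shaky_answer))      # True  -- route to human review
```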

User education

Educating the public about AI hallucinations is important because people often trust widely adopted technology and assume it must be objective. To combat this, educate people about the limits and capabilities of, for example, a large language model (LLM). Someone who understands what an LLM can and can’t do is better equipped to identify a hallucination.

Implementing human oversight

Finally, to help prevent AI hallucinations from causing harm, you can introduce human oversight. You may not be able to fully automate a workflow that relies on an AI model; it’s a good idea to have someone review the output for any signs of hallucinations.

It’s advantageous to have subject matter experts on hand, as well. They can correct factually incorrect data in specialized fields. 

Learn more about AI and its challenges on Coursera.

AI is both a promising and a challenging field, and as more companies adopt this emerging technology, the phenomenon of AI hallucinations becomes a greater concern. 

If you’d like to learn more about AI, you can explore the basics with DeepLearning.AI’s Generative AI for Everyone. You might also consider Vanderbilt University’s Trustworthy Generative AI course, which discusses the types of problems to solve with AI and how to engineer prompts. 


This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.