
Hallucinations

AI models sometimes generate plausible-sounding information that is completely wrong. Understanding why — and building the habit of checking — is essential.

What Is a Hallucination?

A hallucination is when an AI model confidently generates information that sounds right but isn’t. It doesn’t know it’s wrong. It’s not lying. It’s doing what it always does — predicting the most likely next words — and sometimes that prediction lands on something false.

This happens because models are pattern matchers, not fact databases. They learned from billions of web pages, and they’re very good at producing text that looks like correct information. But “looks correct” and “is correct” are not the same thing.

Why Do Models Hallucinate?

  • Pattern prediction can go astray — the model picks the statistically likely next word, not necessarily the factually correct one. When it hasn’t seen enough examples of a topic, it fills in the gaps with plausible-sounding guesses.
  • Models rarely say “I don’t know” — they’re trained to be helpful, which means they’ll attempt an answer even when they shouldn’t. A confident tone doesn’t mean a correct answer.
  • Knowledge cutoff dates — models are trained on data up to a certain point. Ask about something that happened after that date, and the model may fabricate an answer rather than admitting it doesn’t have the information.

Hallucination in the Wild

Here are three common scenarios where hallucination shows up:

  • Contact enrichment — you ask the model to look up someone’s job title and company, and it invents a plausible-sounding role at a real company. The danger: you send a personalized email to the wrong person at the wrong company.
  • Math and calculations — you ask the model to calculate revenue growth, and it produces a number that looks right but is wrong. The danger: you present incorrect numbers in a report or deal review.
  • Tool invention — you ask the model to use a tool that doesn’t exist. Instead of saying it can’t, it pretends to call a made-up tool and fabricates a result. The danger: you act on data that was never actually retrieved from any system.

Other Model Limitations

Hallucination isn’t the only quirk. Models have a few other tendencies to be aware of:

  • Sycophancy — models tend to agree with you even when you’re wrong. If you say “This looks correct, right?” the model is biased toward saying “You’re absolutely right!” rather than pushing back.
  • Struggles with randomness — ask a model for a truly random number and it’ll often pick something predictable. It’s a pattern machine, and randomness is the opposite of patterns.
  • Character and word counting — models process text as tokens (chunks of text), not individual characters. Counting letters in a word or words in a sentence is surprisingly unreliable.
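The counting problem can be sketched with a toy example. The token split below is made up for illustration (real tokenizers produce different chunks), but it shows the core issue: letter-level information is scattered across tokens, so the model never "sees" the whole word character by character.

```python
# Toy illustration only: this is NOT a real tokenizer's output.
word = "strawberry"
tokens = ["str", "aw", "berry"]  # hypothetical token split

# The tokens reassemble into the original word...
assert "".join(tokens) == word

# ...but the letter "r" is spread across separate chunks,
# so counting it requires combining information the model
# never processes as individual characters.
r_per_token = [t.count("r") for t in tokens]
print(r_per_token, "->", sum(r_per_token))  # [1, 0, 2] -> 3
```

If you need a reliable character or word count, have the model write and run code (as above) rather than answer from the tokens directly.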

The Best Defense Is Tools

Hallucination is most dangerous when models guess instead of looking things up. The more real data you give an agent through tools, the less it needs to guess — and the less it hallucinates.
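Here is a minimal sketch of that idea. The `CRM` data, `lookup_contact` tool, and `enrich_contact` function are all hypothetical names invented for this example; the point is the pattern: answer from retrieved data, and say "I don't know" when the tool returns nothing, instead of inventing a plausible record.

```python
# Hypothetical example: grounding an agent's answer in a tool lookup
# instead of letting it guess. None of these names are a real API.

CRM = {
    "jane@acme.example": {"title": "VP of Sales", "company": "Acme"},
}

def lookup_contact(email: str):
    """Tool: return the real CRM record, or None if it doesn't exist."""
    return CRM.get(email)

def enrich_contact(email: str) -> str:
    """Desired agent behavior: report tool data, never fabricate."""
    record = lookup_contact(email)
    if record is None:
        # The honest failure mode: admit the gap rather than invent a role.
        return "I don't know: no CRM record found."
    return f"{record['title']} at {record['company']}"

print(enrich_contact("jane@acme.example"))   # VP of Sales at Acme
print(enrich_contact("bob@unknown.example")) # I don't know: no CRM record found.
```

The contrast with the "contact enrichment" scenario above is the `None` branch: a model guessing from patterns has no equivalent of it, which is exactly why a tool lookup is safer.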

Quiz: Hallucination risks

What is an AI hallucination?

Correct! Hallucination means the model generates text that sounds right but isn’t. It’s not lying — it’s predicting patterns and sometimes those predictions are wrong.

Not quite. A hallucination is when the model confidently produces incorrect information that sounds plausible. It’s not intentional deception or refusal — it’s a side effect of how pattern prediction works.
