How to identify AI hallucinations
PURPOSE
AI hallucinations are confident-sounding outputs that are false, unsupported, or invented.
WARNING SIGNS
Look for:
- fake citations (references that do not exist or do not match their claimed source)
- overly specific claims, such as exact numbers or quotes, offered without sources
- invented policies
- wrong dates
- impossible steps
- answers that do not match known facts
A sketch for mechanically checking the first sign follows this list.
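Some warning signs can be checked mechanically. The sketch below covers fake citations, under the assumption that the citation carries a DOI: it queries Crossref's public REST API (api.crossref.org), which returns metadata for registered DOIs and a 404 for unknown ones. Treat a miss as grounds for suspicion rather than proof of fabrication, since some legitimate DOIs are registered elsewhere (for example with DataCite).

    import json
    import urllib.error
    import urllib.parse
    import urllib.request

    def lookup_doi(doi: str):
        """Return Crossref metadata for a DOI, or None if it is unregistered."""
        url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return json.load(resp)["message"]
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return None  # unknown to Crossref: a warning sign
            raise

    meta = lookup_doi("10.1038/nature14539")  # a real paper's DOI
    if meta is None:
        print("DOI not registered -- possible fabrication")
    else:
        # A fabricated citation can reuse a real DOI, so also compare the
        # registered title and authors against what the answer claimed.
        print(meta["title"][0])

A hit is not a pass on its own: the registered title and authors still have to match the reference as the AI stated it.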
CHECK CAREFULLY
Verify any claim you intend to act on against trusted primary sources or official documentation, not against another AI response.
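For code answers, the most reliable documentation is the library itself. Below is a minimal sketch, assuming a Python answer; the module and attribute names are illustrative. It imports the module the answer names, confirms the suggested function actually exists, and prints the real signature and docstring instead of trusting the AI's paraphrase.

    import importlib
    import inspect

    def check_api_claim(module_name: str, attr_name: str) -> None:
        """Confirm a suggested attribute exists and show its real interface."""
        module = importlib.import_module(module_name)
        obj = getattr(module, attr_name, None)
        if obj is None:
            print(f"{module_name}.{attr_name} does not exist -- possible hallucination")
            return
        try:
            print(f"{module_name}.{attr_name}{inspect.signature(obj)}")
        except (TypeError, ValueError):
            print(f"{module_name}.{attr_name} exists (signature not introspectable)")
        doc = inspect.getdoc(obj)
        if doc:
            print(doc.splitlines()[0])  # first line of the real docs

    check_api_claim("json", "loads")      # real function: prints its signature
    check_api_claim("json", "load_file")  # invented name: flagged as missing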
BE SKEPTICAL
Confidence does not equal correctness: hallucinated answers usually arrive in the same fluent, assertive tone as accurate ones.
BEST PRACTICE
Scale your checking to the stakes: the more important the decision, the more verification it warrants.
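One way to apply this rule consistently is to write the tiers down before you need them. The sketch below is a minimal illustration; the stakes levels and the checks attached to them are assumptions to adapt, not a standard.

    # Illustrative tiers: adjust the levels and checks to your own context.
    CHECKS_BY_STAKES = {
        "low":    ["spot-check one key claim"],
        "medium": ["verify key claims against a primary source",
                   "confirm all dates and numbers"],
        "high":   ["verify every claim against a primary source",
                   "confirm every citation resolves and matches",
                   "get a domain-expert review before acting"],
    }

    def verification_plan(stakes: str) -> list[str]:
        """Return the checks an answer must pass before it is acted on."""
        return CHECKS_BY_STAKES[stakes]

    for step in verification_plan("high"):
        print("-", step)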