GPT's 'Hallucinations' Explained

8 September 2025

OpenAI has identified the cause of persistent 'hallucinations' in large language models, where AIs generate plausible but incorrect answers. The issue arises from training and evaluation methods that inadvertently reward guessing: current benchmarks tend to score accuracy alone, so a model that always ventures an answer outperforms one that admits uncertainty. In effect, models are incentivised to be good 'test-takers', for whom guessing beats leaving a question blank.
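
As a rough illustration of that incentive (the numbers below are hypothetical, not taken from OpenAI's paper): under accuracy-only scoring, even a low-confidence guess has a positive expected score, while abstaining scores zero.

```python
# Why an accuracy-only benchmark rewards guessing (all numbers are illustrative).
# Correct answers score 1; wrong answers and abstentions both score 0.

p_correct = 0.25  # hypothetical chance the model's best guess is right

expected_guess = p_correct * 1 + (1 - p_correct) * 0    # 0.25
expected_abstain = 0.0                                   # "I don't know" earns nothing

print(f"expected score if guessing:   {expected_guess:.2f}")
print(f"expected score if abstaining: {expected_abstain:.2f}")
# Guessing never scores worse than abstaining, so a metric-maximising model guesses.
```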

GPT-5 already demonstrates a reduction in these errors. To further minimise confidently wrong outputs, OpenAI suggests reforming benchmarks so they reward expressed uncertainty and penalise incorrect answers. The aim of tweaking how AIs are evaluated is to train models that rely less on 'fake it till you make it' and more on measured responses, as part of the company's broader effort to make AI systems more useful and reliable.
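
One way such a reform could look (a sketch of a generic penalised scoring rule, not OpenAI's actual benchmark design): give wrong answers a negative score and abstentions a neutral one, so that answering only pays off when the model's confidence clears a break-even threshold.

```python
# Hypothetical penalised scoring rule: +1 for a correct answer, -1 for a wrong
# answer, 0 for abstaining. Guessing now only pays off above a confidence threshold.

def expected_score(p_correct: float, reward: float = 1.0, penalty: float = -1.0) -> float:
    """Expected score of answering, given the probability the answer is right."""
    return p_correct * reward + (1 - p_correct) * penalty

ABSTAIN_SCORE = 0.0

for confidence in (0.25, 0.50, 0.75):
    score = expected_score(confidence)
    decision = "answer" if score > ABSTAIN_SCORE else "abstain"
    print(f"confidence {confidence:.2f}: expected score {score:+.2f} -> {decision}")

# With reward=1 and penalty=-1 the break-even point is 50% confidence:
# below it, an honest abstention beats a confident guess.
```

Shifting the penalty relative to the reward moves that break-even confidence, which is the lever a benchmark designer would tune.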

OpenAI's findings highlight that hallucinations are not inevitable and that language models can abstain when uncertain. The company hopes that clarifying the nature of hallucinations will push back on common misconceptions.

Tags: AI, OpenAI, GPT, machine learning, hallucinations

Related articles:
  • OpenAI unveils GPT-5 Enterprise
  • OpenAI: Better Models Incoming?
  • GPT-5 Gets Friendlier Update
  • GPT-5: Incremental AI Advance