OpenAI's GPT-5 demonstrates advancements over previous models but still exhibits a tendency to 'hallucinate' information. Despite improvements, the model can generate incorrect or fabricated details in approximately 10% of its responses. This tendency raises concerns about its reliability as a sole source of information.
Nick Turley, Head of ChatGPT, advises users to treat GPT-5 as a supplementary tool rather than a primary reference, cross-referencing its outputs with trusted sources to ensure accuracy. While GPT-5 shows promise, its imperfection calls for a cautious approach, particularly in critical decision-making contexts. The model performs best when paired with a source that has a firmer grasp of the facts, such as a traditional search engine or a company's specific internal data.
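To make that advice concrete, the sketch below illustrates the cross-checking pattern Turley describes: the model's draft answer is returned alongside snippets from a trusted source so it can be verified rather than taken at face value. This is an illustrative outline only, not OpenAI's implementation; `ask_model` and `search_trusted_source` are hypothetical placeholders standing in for a GPT-5 call and a search-engine or internal-data lookup.

```python
# Illustrative sketch of pairing a model answer with a trusted source.
# The two helper functions are hypothetical placeholders, not real APIs.

def ask_model(question: str) -> str:
    """Placeholder for a GPT-5 API call; returns the model's draft answer."""
    return "GPT-5 was released in 2025."  # stand-in response

def search_trusted_source(question: str) -> list[str]:
    """Placeholder for a search-engine or internal-data query."""
    return ["OpenAI announced GPT-5 in August 2025."]  # stand-in snippets

def answer_with_cross_check(question: str) -> dict:
    """Return the model's answer together with retrieved evidence so a human
    (or a second automated pass) can verify it instead of trusting it alone."""
    draft = ask_model(question)
    evidence = search_trusted_source(question)
    return {"question": question, "model_answer": draft, "evidence": evidence}

if __name__ == "__main__":
    result = answer_with_cross_check("When was GPT-5 released?")
    print("Model answer:", result["model_answer"])
    for snippet in result["evidence"]:
        print("Supporting source:", snippet)
```

The point of the pattern is simply that the model's output never travels alone: every answer is accompanied by material from a system that is grounded in facts, which is what makes the 10% hallucination rate manageable in practice.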
Related Articles
GPT-5 Gets Friendlier Update
ChatGPT Model Selection Returns
ChatGPT Adds Third-Party Connectors
GPT-5 Model Unveiled by OpenAI