What happened
Mediahuis, publisher of De Telegraaf and the Irish Independent, has suspended senior journalist Peter Vandermeersch. He admitted to using AI tools, including ChatGPT, Perplexity, and Google NotebookLM, to summarise reports and generate quotes for his Substack newsletter without verifying them. An internal investigation found that Vandermeersch had published "dozens" of false quotes; seven individuals denied making the statements attributed to them. Mediahuis CEO Gert Ysebaert said the conduct violated company AI rules requiring diligence, human oversight, and transparency.
Why it matters
Unverified AI-generated content directly undermines editorial integrity and professional credibility. For editors, content creators, and platform engineers, the incident underscores the risk of AI hallucinations, which erode reader trust, a core asset for media organisations. The constraint is clear: adopting AI tools without effective human oversight and stringent verification invites reputational damage and disciplinary action. The case follows Ars Technica's dismissal of an AI reporter over similar failures. Procurement teams should prioritise AI solutions with explainability, while editorial teams must enforce strict human-in-the-loop protocols.