Protecting Peer Review Integrity

12 November 2025

What happened

A call to action has been issued to safeguard the peer review process from artificial intelligence threats. AI tools are being developed to automate manuscript screening, identify suitable reviewers, and detect plagiarism. While these tools offer increased efficiency and reduced bias, they introduce risks of over-reliance, potential compromise of review quality and transparency, algorithmic bias, and erosion of human judgement. The call advocates for maintaining human oversight, establishing ethical guidelines, and promoting transparency in AI use within academic publishing.

Why it matters

The integration of AI into peer review opens a control gap around human judgement and expertise, increasing exposure to algorithmic bias and reducing visibility into how AI-assisted decisions are made. This raises the due diligence burden on editorial boards, peer review managers, and compliance teams, who must establish and enforce ethical guidelines and keep human oversight central in order to mitigate the risk of compromised review quality and transparency.

Source: ft.com

AI-generated content may differ from the original.
