AI prompts infiltrate peer reviews

6 July 2025

Researchers are reportedly embedding hidden prompts in their academic papers to sway AI-driven peer reviews in their favour. These prompts, often concealed as white text or in minuscule font sizes, instruct AI reviewers to give favourable feedback, overlook weaknesses, or recommend acceptance on the grounds of 'exceptional novelty'.

The tactic exploits the growing reliance on AI in academic evaluation, particularly by 'lazy reviewers'. Institutions whose researchers' papers reportedly contain such prompts include Waseda University, KAIST, Columbia University and the University of Washington, with the affected papers predominantly in computer science.

Experts have expressed concern about research integrity and the potential manipulation of the peer-review system. Some view the practice as a response to the growing trend of reviewers using AI despite publisher bans, while others condemn it as undermining the quality and authenticity of academic assessment. The incident has sparked discussion about updating research guidelines to address deceptive practices in peer review.

Tags: AI, peer review, research ethics, academic integrity

Related articles:
  • Meta Recruits AI Talent
  • Anthropic Tackles AI Fallout
  • Meta Boosts AI Reasoning
  • Tech's Military AI Expansion