A call to action has been issued to safeguard the peer review process from potential threats posed by artificial intelligence. The central concern is preserving the integrity and quality of scholarly research, which depends on rigorous evaluation by human experts.
AI tools are being developed to automate manuscript screening, identify suitable reviewers, and detect ethical issues such as plagiarism. While these tools promise greater efficiency and, in some cases, reduced bias, over-reliance on them could compromise the quality and transparency of the review process. Specific concerns include algorithmic bias, opaque AI decision-making, and the erosion of human judgement and expertise.
The call emphasises the need for the academic community to actively defend the traditional peer review system, ensuring that human oversight remains central to the evaluation of research. It recommends establishing clear ethical guidelines and best practices for the use of AI in academic publishing, promoting transparency, and preserving trust in the peer review process.