What happened
Artificial intelligence research conferences have restricted the use of Large Language Models (LLMs) for generating papers and peer reviews, following a surge of low-quality AI-generated submissions and reviews. What had been an implicit allowance for LLMs in academic writing and reviewing has been replaced with explicit limits, changing the conditions for submitting and evaluating content at these venues and constraining authors' ability to use LLMs for direct content creation.
Why it matters
For researchers submitting to artificial intelligence conferences, this adds a compliance burden: they must now adhere to explicit LLM usage policies. For peer reviewers, it raises the oversight needed to identify and handle submissions that may violate the restrictions, work that traditional review processes controlled only implicitly through norms of content originality. Conference organisers face heightened due diligence requirements to enforce the new guidelines and maintain academic integrity standards.
Related Articles

OpenAI: ChatGPT 'Code Red'
SoftBank OpenAI Investment Expansion
AI Polished Response Generation
AI Advertising Competition Intensifies
