Security firms are facing a deluge of AI-generated vulnerability reports that appear legitimate but are ultimately flawed. These 'AI slop' reports, crafted using large language models, deploy technical jargon convincingly but contain fabricated details and references to non-existent code that fail under scrutiny.
The influx of these bogus reports strains resources, forcing experts to spend time debunking AI-generated claims instead of addressing genuine vulnerabilities. Some security researchers warn that the rise of such reports could erode trust in bug bounty programs, driving away legitimate researchers and prompting organisations to withdraw from these initiatives.
To combat this, some projects are considering measures such as requiring disclosure of AI use in submissions, implementing reputation-based gating, and introducing submission fees to deter low-quality reports. The challenge lies in distinguishing genuine vulnerabilities from AI-generated noise to maintain the integrity and effectiveness of bug bounty programs.
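As a rough illustration of how reputation-based gating might combine with AI-use disclosure, the sketch below routes submissions into review queues rather than rejecting them outright. All names, thresholds, and queue labels are hypothetical, not taken from any real bug bounty platform:

```python
from dataclasses import dataclass

# Hypothetical thresholds -- a real program would tune these per-platform.
MIN_REPUTATION = 10      # reporters at or above this are fast-tracked
NEW_REPORTER_CAP = 2     # open submissions allowed from unproven accounts

@dataclass
class Submission:
    reporter_reputation: int         # e.g. prior valid reports minus invalid ones
    discloses_ai_use: bool           # self-declared AI assistance, per program rules
    open_reports_from_reporter: int  # concurrent unresolved submissions

def triage_priority(sub: Submission) -> str:
    """Illustrative gating: low-reputation or undisclosed-AI submissions
    are slowed down or given extra scrutiny instead of being discarded."""
    if sub.reporter_reputation >= MIN_REPUTATION:
        return "fast-track"
    if sub.open_reports_from_reporter >= NEW_REPORTER_CAP:
        return "rate-limited"    # deters mass low-effort filing
    if not sub.discloses_ai_use:
        return "manual-review"   # undisclosed AI use draws closer inspection
    return "standard-queue"
```

The design choice here is deliberate: gating throttles and reprioritises rather than blocks, so a genuine finding from a new researcher is delayed, not lost.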