What happened
Google's Threat Intelligence Group reported that criminal hackers used an AI model to discover and weaponise a zero-day vulnerability in a popular open-source, web-based system-administration tool. The attempted attack, delivered via a Python script, aimed to bypass two-factor authentication, although valid credentials were still required. Google notified the software's maker, which patched the flaw before the attack caused damage, in what Google described as the first confirmed instance of malicious actors exploiting an AI-discovered zero-day.
Why it matters
AI-driven cyberattacks are no longer theoretical, and the threat landscape has shifted for security architects and platform engineers. The incident shows that AI can identify previously unknown software flaws, sharply reducing the time and expertise needed to develop exploits. Procurement teams must now weigh the heightened risk of AI-generated zero-days when evaluating software, particularly open-source tools, since the ability to defend against such sophisticated, rapidly discovered vulnerabilities is becoming a critical constraint. The report follows Anthropic's announcement last month that its Mythos AI model had identified thousands of zero-day vulnerabilities across major operating systems and browsers.
