What happened
Google Threat Intelligence Group (GTIG) published findings on threat actors abusing AI tools, including Gemini, to support cyberattacks. GTIG identified groups (e.g., UNC6418, UNC2970, UNC795) using AI for intellectual property theft, surveillance, and the creation of novel malware. AI capabilities enabled rapid target profiling, convincing phishing messages, and malicious code generation: for instance, UNC795 used "agentic AI capabilities" for code auditing, and the HONESTCUE malware queried Gemini to receive malicious code. Google blocked these attempts, per GTIG's report.
Why it matters
Cybersecurity architects and incident response teams face an escalating threat landscape as AI accelerates attacker capabilities. As GTIG details, AI's ability to rapidly profile targets, generate convincing phishing content, and create novel malware families shortens attack preparation time and increases attack efficacy. The burden shifts to defence, where detection and prevention tools capable of identifying AI-generated threats become critical. The report follows recent accounts of AI coding bots disrupting services, underscoring the immediate operational impact of AI-enabled attacks.