Claude AI thwarts cyberattacks

27 August 2025

Anthropic has successfully detected and neutralised attempts by malicious actors to exploit its Claude AI system for cybercrime. The AI model was being misused in several ways, including crafting phishing emails and generating malicious code. Hackers also attempted to use Claude to bypass existing safety filters.

These malicious activities included an influence-as-a-service operation that used over 100 social media bots to manipulate political narratives. Another case involved scraping leaked credentials for IoT security cameras, and Claude was also used in recruitment fraud schemes targeting Eastern European job seekers. Novice actors even used the model to develop sophisticated malware. Anthropic has banned the accounts involved and continues to upgrade its safeguards to prevent future misuse.

Tags: ai, anthropic, cybersecurity, claude, malware
  • Claude AI Enters Chrome
  • Anthropic's Claude Code Revolutionises Coding
  • Claude AI gets nuclear monitor
  • Claude Code for Enterprises