Weekly Newsletter

The Vendor Reckoning
3 Mar 2026

Security news

AI safety is under the microscope this week: reports of AI-generated CSAM are surging, model distillation attacks are alleged at industrial scale, and platform safeguards keep failing. Amazon and Microsoft face operational disruptions from AI tools and data breaches, highlighting infrastructure risks. Legal battles over AI's role in self-harm and trade secret theft further complicate the landscape, demanding robust safety protocols and stringent data protection.

Recent security events

NCMEC Reports AI CSAM Surge

NCMEC's CyberTipline has logged over one million reports related to generative AI, creating immediate legal and operational risks for platform engineers and founders. Unmoderated open-source models can produce illicit content indistinguishable from real imagery, increasing the moderation and victim-identification burden on security architects and legal teams.

OpenAI Employee Fired for Insider Trading

OpenAI has confirmed internally that it fired an employee for trading on confidential company information in prediction markets. The incident, alongside 77 suspicious trades flagged by Unusual Whales, highlights significant compliance and security risks around information leakage at tech companies.

NanoClaw Enforces Agent Isolation with Containers

NanoClaw's new security model isolates AI agents in ephemeral containers, preventing information leakage and limiting host access. This architectural approach hardens agentic workflows for security architects and platform engineers, shifting focus from application-level checks to containment.
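
A minimal sketch of the pattern, assuming a local Docker daemon; the `agent-sandbox` image and its `agent` entrypoint are hypothetical stand-ins, not NanoClaw's actual interface:

```python
import subprocess

def run_agent_task(prompt: str, timeout: int = 300) -> str:
    """Run one agent task in a throwaway container, then destroy it."""
    cmd = [
        "docker", "run",
        "--rm",                  # container is deleted the moment the task exits
        "--network", "none",     # no network: nothing to exfiltrate to
        "--read-only",           # immutable root filesystem
        "--memory", "512m",      # resource caps so a runaway agent can't starve the host
        "--cpus", "1",
        "agent-sandbox:latest",  # hypothetical agent image
        "agent", "--prompt", prompt,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True,
                            timeout=timeout, check=True)
    return result.stdout

if __name__ == "__main__":
    print(run_agent_task("summarise the attached report"))
```

Because each container is created per task and destroyed afterwards, nothing persists between invocations for a compromised prompt to read.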

Human Review Leads to Increased Safety Reporting

OpenAI filed 75,027 CyberTipline reports to NCMEC in H1 2025. Anthropic reported 859 images to NCMEC between April 2024 and March 2025. The gap largely reflects differences in platform type — OpenAI processes billions of images via DALL-E and Sora, while Claude is primarily text-based.

Meta Investigates AI Disability Profiles

Meta is investigating AI-generated social media profiles that sexualise disabled people on Instagram, exposing critical failures in both platform moderation and generative AI safeguards. Content moderation teams must re-evaluate their detection pipelines, and AI developers must audit training datasets for the biases that produce such harmful outputs.

AI System Misleads Surgeons, Causes Injuries

AI integration into safety-critical systems increases liability and regulatory exposure for technology providers. After AI was added to the TruDi surgical system, reports climbed to more than 100 unconfirmed malfunctions and 10 alleged injuries, demonstrating the cost of failure.

Google API Keys Expose Gemini Data

Google API keys, once public identifiers, now grant access to private Gemini data and incur charges without warning. This creates significant security and cost risks for platform engineers and security architects.
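
One practical mitigation is to treat these keys as secrets and catch them before they ship. The sketch below scans a source tree for strings matching the well-known AIza… key shape; it is a hedged illustration, not a replacement for a dedicated secret scanner:

```python
import re
import sys
from pathlib import Path

# Google API keys follow a well-known shape: "AIza" plus 35 URL-safe characters.
KEY_PATTERN = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def scan(root: str) -> int:
    """Print files containing strings shaped like Google API keys; return the count."""
    hits = 0
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in KEY_PATTERN.finditer(text):
            print(f"{path}: possible Google API key {match.group()[:8]}...")
            hits += 1
    return hits

if __name__ == "__main__":
    sys.exit(1 if scan(sys.argv[1] if len(sys.argv) > 1 else ".") else 0)
```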

Protester Arrested at AI Summit

Police arrested Youth Congress President Uday Bhanu Chib and seven others for a "shirtless" protest at the India AI Impact Summit. This event reveals security vulnerabilities at high-profile tech events, requiring robust, multi-layered vetting beyond basic digital credentials for access.

Anthropic Alleges Model Distillation Attacks

Allegations of industrial-scale model distillation by Chinese AI companies on Anthropic's Claude models raise significant national security and intellectual property concerns. This activity, involving millions of exchanges, highlights a critical gap in international AI governance and model protection.
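
There is no settled defence yet. One common heuristic is volume anomaly detection per API key; the sketch below flags keys whose daily request count jumps far above their own baseline (the thresholds and in-memory store are illustrative assumptions, not Anthropic's method):

```python
from collections import defaultdict, deque

class ExtractionMonitor:
    """Flag API keys whose daily request volume spikes above their own baseline.

    A toy in-memory heuristic; a real deployment would also weigh prompt
    diversity and output token volume, and persist its state.
    """

    def __init__(self, window_days: int = 7, spike_factor: float = 10.0):
        self.spike_factor = spike_factor
        self.history = defaultdict(lambda: deque(maxlen=window_days))

    def record_day(self, api_key: str, request_count: int) -> bool:
        """Record one day's volume; return True if it looks like bulk extraction."""
        history = self.history[api_key]
        baseline = sum(history) / len(history) if history else None
        history.append(request_count)
        return baseline is not None and request_count > baseline * self.spike_factor

monitor = ExtractionMonitor()
for day, count in enumerate([1_000, 1_200, 900, 1_100, 250_000]):
    if monitor.record_day("key-123", count):
        print(f"day {day}: key-123 flagged for distillation-scale extraction")
```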

Google Exposes AI Cyberattack Methods

AI's acceleration of attacker capabilities, including rapid target profiling and novel malware creation, escalates the threat landscape for cybersecurity teams. This necessitates prioritising advanced detection and prevention tools to counter AI-generated threats, shifting the burden onto defence.

OpenAI's Safety Protocols Under Scrutiny

AI platform providers face increased scrutiny over internal safety protocols after OpenAI identified a school shooter's violent chats but deemed them not an "imminent and credible risk" for law enforcement referral. This highlights a critical gap in proactive threat mitigation.

OpenAI Banned Mass Shooter Account

Platform liability for AI usage is shifting from content generation to threat intelligence. OpenAI banned the account of the Tumbler Ridge mass shooter eight months before the attack, exposing the gap between automated policy enforcement and real-world escalation thresholds.

Meta Bans Agency Accounts with Automated Systems

Automated security systems without human escalation paths are blocking enterprise revenue. Meta is systematically banning newly created, ID-verified work accounts for ad agencies, locking specialists out of the appeal process and halting campaign operations for high-value customers.
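
The structural fix is a mandatory escalation path: automation may flag, but verified or high-value accounts route to a human before a ban lands. A minimal sketch of that routing decision, with hypothetical account fields and thresholds:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    ALLOW = auto()
    AUTO_BAN = auto()
    HUMAN_REVIEW = auto()  # suspend softly and queue for a person

@dataclass
class Account:
    id: str
    id_verified: bool
    monthly_spend_usd: float
    risk_score: float  # 0.0 (clean) .. 1.0 (certain abuse)

def enforce(account: Account, ban_threshold: float = 0.9) -> Action:
    """Never hard-ban a verified or high-spend account without a human in the loop."""
    if account.risk_score < ban_threshold:
        return Action.ALLOW
    if account.id_verified or account.monthly_spend_usd > 10_000:
        return Action.HUMAN_REVIEW
    return Action.AUTO_BAN

agency = Account(id="agency-42", id_verified=True,
                 monthly_spend_usd=250_000, risk_score=0.95)
assert enforce(agency) is Action.HUMAN_REVIEW
```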

AI Coding Bot Disrupts Amazon Service

Platform engineers face severe infrastructure risks as AI coding tools gain execution capabilities. Following a Financial Times report that an AI bot took down an Amazon service, teams must enforce strict zero-trust boundaries to prevent automated tools from destroying live environments.
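
A minimal sketch of one such boundary: a deny-by-default gate between the coding agent and the shell, so destructive commands never execute even if the model emits them (the command lists are illustrative):

```python
import shlex
import subprocess

# Deny by default: only read-oriented commands may run at all.
ALLOWED_COMMANDS = {"ls", "cat", "grep", "git", "pytest"}
FORBIDDEN_TOKENS = {"rm", "drop", "truncate", "shutdown", "terminate-instances"}

def run_agent_command(command_line: str) -> str:
    """Execute an agent-proposed command only if it passes the allowlist."""
    tokens = shlex.split(command_line)
    if not tokens or tokens[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"blocked: command not allowlisted: {command_line!r}")
    if any(token.lower() in FORBIDDEN_TOKENS for token in tokens):
        raise PermissionError(f"blocked: destructive token in: {command_line!r}")
    # shell=False (the default for a list) prevents shell-injection tricks
    return subprocess.run(tokens, capture_output=True, text=True, check=True).stdout

print(run_agent_command("git status"))          # allowed
# run_agent_command("rm -rf /srv/production")   # raises PermissionError
```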

FT Flags OpenClaw Privacy Risks

Agentic AI creates privacy risks for enterprise security teams. The Financial Times identified specific privacy problems with the OpenClaw AI social network, following OpenAI's recent hiring of its founder. Teams must enforce strict data isolation for autonomous AI tools.
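
A minimal sketch of the isolation step: redact obvious identifiers before any user content crosses into an autonomous tool's context. The two patterns below (emails and phone numbers) are illustrative only; production systems use dedicated PII detectors:

```python
import re

# Illustrative patterns only: a real PII filter covers far more categories.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Strip obvious identifiers before text enters an agent's context window."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

message = "Reach me at jane.doe@example.com or +44 20 7946 0958."
print(redact(message))  # -> "Reach me at [EMAIL] or [PHONE]."
```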
