OpenAI Agentic AI Risks Emerge

OpenAI Agentic AI Risks Emerge

18 July 2025

OpenAI has launched an agentic AI model capable of operating a computer on a user's behalf, marking a significant advancement with accompanying safety concerns. This technology allows the AI to directly interact with personal data through logged-in websites, essentially operating in 'takeover mode'. While OpenAI has implemented safeguards, the expanded capabilities and broader user reach elevate the overall risk profile.

Prompt injections, where malicious instructions hidden in web pages manipulate the AI, pose a particular threat. Such injections could trick the agent into unintended actions, like sharing private data or taking harmful actions on logged-in sites. The AI is also treated as having high biological and chemical capabilities under the Preparedness Framework, activating associated safeguards.

Agentic AI introduces risks, including potential misuse, security vulnerabilities, ethical dilemmas, and unpredictable behaviour. Protecting personal data and preventing harmful actions are key challenges as AI agents gain more autonomy. Continuous monitoring, ethical constraints, and fail-safe mechanisms are essential for responsible deployment.

AI generated content may differ from the original.

Published on 18 July 2025

Subscribe for Weekly Updates

Stay ahead with our weekly AI and tech briefings, delivered every Tuesday.

OpenAI Agentic AI Risks Emerge