OpenAI Agentic AI Risks Emerge

OpenAI Agentic AI Risks Emerge

18 July 2025

OpenAI has launched an agentic AI model capable of operating a computer on a user's behalf, marking a significant advancement with accompanying safety concerns. This technology allows the AI to directly interact with personal data through logged-in websites, essentially operating in 'takeover mode'. While OpenAI has implemented safeguards, the expanded capabilities and broader user reach elevate the overall risk profile.

Prompt injections, where malicious instructions hidden in web pages manipulate the AI, pose a particular threat. Such injections could trick the agent into unintended actions, like sharing private data or taking harmful actions on logged-in sites. The AI is also treated as having high biological and chemical capabilities under the Preparedness Framework, activating associated safeguards.

Agentic AI introduces risks, including potential misuse, security vulnerabilities, ethical dilemmas, and unpredictable behaviour. Protecting personal data and preventing harmful actions are key challenges as AI agents gain more autonomy. Continuous monitoring, ethical constraints, and fail-safe mechanisms are essential for responsible deployment.

AI generated content may differ from the original.

Published on 18 July 2025
aiopenaisecurityethicsagenticai
  • OpenAI's Domination Drive Scrutinised

    OpenAI's Domination Drive Scrutinised

    Read more about OpenAI's Domination Drive Scrutinised
  • OpenAI Restricts Algorithm Access

    OpenAI Restricts Algorithm Access

    Read more about OpenAI Restricts Algorithm Access
  • Tech's Military AI Expansion

    Tech's Military AI Expansion

    Read more about Tech's Military AI Expansion
  • AI Models' Reasoning Transparency

    AI Models' Reasoning Transparency

    Read more about AI Models' Reasoning Transparency
OpenAI Agentic AI Risks Emerge