New AI-powered browsers like OpenAI's ChatGPT Atlas and Perplexity's Comet aim to boost productivity by automating tasks, but cybersecurity experts are raising concerns about significant security vulnerabilities. A primary threat is prompt injection, where malicious code embedded in websites can manipulate the AI browser to perform unintended actions, such as exposing private emails or making unauthorised purchases.
These attacks exploit the AI's ability to act on a user's behalf, blurring the line between intended actions and potential risks. Researchers have demonstrated how hidden commands can be embedded in images or triggered during screenshots, leading to data theft or manipulation of browsing sessions. While developers are working to address these issues, the industry recognises prompt injection as an unsolved problem, requiring users to exercise caution when using these new technologies and to review security settings.
Experts recommend avoiding granting AI browsers access to private data and being wary of untrusted content, even on seemingly trustworthy websites. Collaboration between browser developers, security vendors, and enterprises is crucial to combat these threats and establish robust security measures for AI browsers.
Related Articles

AI: Lifespan Extender or Divider?
Read more about AI: Lifespan Extender or Divider? →
OpenAI Acquires Mac Interface
Read more about OpenAI Acquires Mac Interface →
Sora Update: New Features Incoming
Read more about Sora Update: New Features Incoming →
Reddit Sues Perplexity AI
Read more about Reddit Sues Perplexity AI →
