Protecting Your Business in 2025: AI Prompt Injection Risks and Why No AI Browser Is Fully Safe
Last updated: 3 November 2025
AI browser agents like Comet (Perplexity) and Atlas (OpenAI) have become household names for digital operations and workflow automation. But in late October 2025, major new vulnerabilities came to light that upended the security picture for all business users. Prompt injection - the exploitation of AI via hidden instructions in inboxes, calendar events, websites, or even screenshots - has gone from niche threat to headline risk.
Browser-based AI security
What is Prompt Injection (and What's Changed?)
Prompt injection is a new style of attack where adversaries embed hidden commands in everyday digital content - emails, invites, web pages, even search results. When an AI agent with access to that content is asked to summarise, sort, or automate tasks, it may accidentally execute the attacker's hidden instructions. The result? Unwanted emails, leaked files, altered events, or wider digital compromise - often without any user warning.
Key New Risks (November 2025) Include:
Direct exploits of browser trust: Attackers can now hijack AI browsers like Comet and Atlas using not just text, but cleverly encoded URLs, screenshots, and fake search results.
Cross-site and session attacks: A new "one-click" technique ("CometJacking") can allow an attacker to control your session across open tabs, enabling broad data exfiltration and unauthorised actions even if you never leave your inbox or calendar.
Cross-device memory poisoning: The "tainted memories" attack:
Atlas users face a particularly serious vulnerability. Security researchers at LayerX discovered that attackers can use CSRF techniques to inject malicious instructions into your ChatGPT account memory without your knowledge.
This isn't browser memory that clears when you restart. It's your ChatGPT account memory, the feature that helps ChatGPT remember context from past conversations. Once poisoned, these tainted memories persist across every device and browser where you use that ChatGPT account: your work laptop, home computer, mobile phone, whether you're using Atlas, Chrome, or Safari.
The next time you ask ChatGPT a normal question, the hidden instructions trigger automatically. The AI may execute remote code, leak data, or take actions that appear legitimate but serve the attacker's goals.
LayerX testing found that Atlas currently lacks meaningful anti-phishing protections, leaving users up to 90% more vulnerable to these attacks than users of Chrome or Edge.
How Do The Main AI Agents Compare?
Security Test Results (November 2025)
| Feature | Comet AI (Perplexity) | Atlas (ChatGPT/OpenAI) | Copilot, Gemini |
|---|---|---|---|
| Prompt injection detection | Real-time ML scan with parallel classifiers1; still vulnerable to indirect and image-based injection, evolving attacks regularly bypass defences | Model training to ignore malicious instructions, overlapping guardrails2; highly vulnerable via omnibox, clipboard, agent mode and hidden web content | Evolving protections, some separation but not prompt-proof yet |
| User confirmation | Always required, explicit approval for most agent actions | Required for many actions, but agent mode can operate autonomously; users must actively monitor to catch malicious activity | Usually required, but implementation differs by tool and vendor |
| Context separation | Strict guardrails designed to separate user intent from web content; indirect injections via HTML comments, invisible text, or images remain difficult to block3 | Partial: omnibox treats URLs as natural language commands, dangerously mixing user and AI context4 | Varies by implementation, weak context separation in multi-modal agent tasks |
| Third-party app vulnerabilities | CometJacking attack demonstrated data exfiltration from connected Gmail and Calendar via single malicious URL5 | Highest risk: ChatGPT account memory can be poisoned via CSRF, affecting all devices; agentic access to email, banking, files6 | Limited agentic capabilities reduce attack surface |
| Continuous learning/patching | Very rapid response to disclosed vulnerabilities, though security researchers regularly discover new bypass techniques | Rapid patching since October 2025 launch, but OpenAI acknowledges new attacks emerge faster than fixes can be deployed | Slower enterprise release cycles |
| Enterprise focus | Yes, with customisable guardrails and security controls | Consumer-first at launch, enterprise features developing | Varies, often weak default security settings |
| Recent attack highlights (Aug-Oct 2025) | Image/screenshot injection with near-invisible text7, hidden HTML comments, CometJacking (one-click URL-based data theft)5, agentic cross-tab exploits | Omnibox command spoofing4, clipboard hijacking, "Tainted Memories" CSRF attack poisoning ChatGPT account memory6, persistent cross-device infection | Similar risks identified during Gemini/Chrome integration, fewer documented exploits due to limited agent capabilities |
Security Testing by Brave Software, LayerX Security, and NeuralTrust Found:
Comet is still somewhat safer than Atlas, but no longer immune: indirect attacks sometimes slip by real-time scanning, and session-wide context scraping makes "one tab hack/many tabs breached" a real risk.
Atlas is currently the least secure, failing to block most modern prompt injections and remaining vulnerable to session persistence, even after the user restarts or logs out.
What Should You Do Differently Now?
Recognise prompt injection as a live threat, not a theoretical one. Attacks may come not just as weird links, but as ordinary-looking invites, inbox messages, or even image content on websites.
Don’t rely on live scanning or confirmation dialogs alone. Even real-time AI can miss “indirect” or obfuscated instructions. There’s no substitute for regular manual checks, vigilance, and limiting agent permissions.
Keep AI agent permissions tightly scoped: Only allow access to the narrowest necessary inbox, calendar, or drive - never “allow all.” Regularly review and revoke unused connections.
Limit open tabs (especially to websites containing sensitive info) while agentic browsing is active. Exploits can jump contexts, especially when using connectors to platforms like Gmail, Google Calendar, or SharePoint.
Minimise third-party app integrations: The more you connect, the bigger your “blast radius” if an exploit hits. Disconnect apps and connectors you aren’t actively using.
Choose browsers/vendors with rapid-response transparency: Demand regular, external security audits and clear breach notification channels.
Prefer a two-browser setup for safety: For most small businesses and professionals, the safest approach is to use a privacy-focused browser like Brave for sensitive logins, confidential data, and business communications. Keep agentic AI browsers (such as Comet or Atlas) strictly for non-confidential, controlled workflows, where speed and AI assistance bring real value, but where no client, financial, or business-critical information is put at risk. Brave blocks trackers and runs cooler than Chrome, making it ideal for long work sessions.
Real-World Scenario: What If a Malicious Calendar Invite Targets Your AI Agent?
Let’s imagine: Your browser-based AI agent (Comet or Atlas) is connected to your Gmail and calendar. A meeting invite arrives containing a hidden, invisible prompt injection. Here’s what the ideal, but not flawless, defense looks like:
You open the invite: No overt sign anything is amiss.
You ask your AI agent to summarise/respond: The agent scans the content; prompt injection classifiers may block obvious tricks - but “indirect” or evolving exploits might sneak through.
Confirmation step: Most legitimate actions still require a human approval click. But artfully crafted attacks may “hide” their true intent and win user approval.
Post-action review: If something is odd (unexpected mail sent, calendar updated), revoke access immediately and monitor your accounts - don’t wait for a notification.
The Updated Takeaway (Nov 2025)
No AI browser is immune to prompt injection and cross-session attacks. Comet now faces the same “zero-trust” necessity as Atlas, especially for business users with sensitive data.
Limit permissions - always.
Keep agentic browsing “in scope” only for trusted tabs and workflows.
Never approve AI-suggested drafts or actions without personally reviewing them.
Stay up to date on vendor patches, and demand transparency from your browser/agent provider.
Summary Actions for Business Users
Audit all connected apps monthly - or after any strange AI behaviour.
Educate your team: Train everyone to spot odd instructions, “silent” authorisations, and AI agent quirks.
Use browser-based AI for in-scope, human-reviewed workflows, but keep high-security data (banking, HR, confidential negotiations) disconnected and manual for now.
Adopt a dual-browser setup: Use a security-focused traditional browser such as Brave for email, banking, and confidential work, and reserve agentic AI browsers for controlled, low-risk tasks.
Prompt injection is now a living, evolving business risk. But by combining strong user vigilance, minimal permissions, and rapid response to vendor patches, you can make AI tools work for your business - without inviting new digital threats in the back door.
Want smarter workflow tips and digital shortcuts delivered occasionally?
Sign up for the Sophie’s Bureau newsletter - practical advice, workflow templates, and a sprinkle of digital calm straight to your inbox.