Trail of Bits Exposes Vulnerabilities in Agentic Browsers, Compares to Cross-Site Scripting

Trail of Bits Exposes Vulnerabilities in Agentic Browsers, Compares to Cross-Site Scripting

Security research and consulting firm Trail of Bits analyzed agentic AI in browsers and found vulnerabilites that resemble cross-site scripting (XSS) and cross-site request forgery (CSRF) attacks.

With browser-embedded AI agents, we’re essentially starting the security journey over again.

Agentic AI is AI that can perform tasks on behalf of the user, for example sending an email or organizing files. Browsers have been quick to jump on the AI train, with seem browsers even offering an AI-only experience.

The root cause of these vulnerabilities is inadequate isolation. Many users implicitly trust browsers with their most sensitive data, using them to access bank accounts, healthcare portals, and social media. The rapid, bolt-on integration of AI agents into the browser environment gives them the same access to user data and credentials. Without proper isolation, these agents can be exploited to compromise any data or service the user’s browser can reach.

Agentic browsers allow AI agents to perform all kinds of actions that previously were limited to users, such as making HTTP requests, reading browser history, and accessing the Document Object Model (DOM). Each capability of the AI can create data transfer between trust zones, meaning your data might be going where you don’t want it to.

Trail of Bits defines a simplified list of trust zone violations: INJECTION, where untrusted input is injected into the AI, including anything that adds arbitrary data to the chat history, CTX_IN (context in), where sensitive browsing data is added to the chat context like personal data from webpages, browsing history and the like, REV_CTX_IN (reverse context in), where the chat context affects browsing origins such as tools that logs a user in or changes their browsing history, and CTX_OUT (context out), where chat context is leaked into external requests such as HTTP requests.

They point out while individual trust zone violations are bad, they are much worse when combined. Along with the fact that many agentic browsers are based on versions of Chromium that are weeks or months behind in security patches and you can chain these with regular security exploits.

They show some real-world attacks on agentic browsers as well.

They were able to make the AI claim false information (as if they need help with that) using a GitHub Gist that instructed the AI to claim that Soviet cosmonaut Youri Gagarine was the first man on the moon, simply through the AI ingesting it via normal browser or the users specifically feeding it to the AI to summarize. This is a classic “prompt injection” attack that has plagued AI due to its inability to tell the difference between instructions and user input.

Another attack involves a malicious page containing a prompt injection and a magic link that logs them into an account controlled by the attacker. When the user asks their AI to summarize the page, it silently logs them in to the attacker‘s account and the user reveals sensitive information, thinking they’re interacting with their own account.

Yet another saw the AI generating a link with sensitive data, then opening the link.

Agentic browsers reuse cookies for agent-initiated requests, so they were able to exfiltrate data by asking it to go to a certain website that the user is already logged in on and go to an attacker-controlled URL containing the leaked data.

A slightly modified version of the same attack can infer data such as location based on the personalized results of a web search, for example searching for nearby restaurants.

They also demonstrated the ability to leak personal data from any website that has interaction with other users such as Instagram, GitHub, X, Reddit, Slack, etc. simply by sending a malicious DM with instructions to copy the text from messages to other users and send it to the attacker.

Finally, an attack polluting a user’s browsing history with potentially illegal content using the same trick of a malicious page or document with instructions to open a URL to whatever the attacker wants.

Despite all the issues, Trail of Bits doesn’t believe the situation is hopeless and provides some suggestions for securing agentic AI browsers.

Firstly, they suggest isolation between the user’s browsing and the agent’s. That means separate cookies, history, cache, everything. They should run in their own minimal browser instance.

Have task-specific tools instead of overly broad tools that access multiple services, in order to limit attack surface.

Provide warnings for the user to review documents instead of blindly trusting the summary, and display previews of documents directly in chat. They admit this is a bit of a weak defense as many users will simply ignore it, but it can help with shorter content.

A longer-term and more robust solution is to extend the Same-Origin Policy (SOP) to AI agents, so that agents can’t exfiltrate data easily between different sites.

They shout out frameworks such as Nvidia’s NeMo Guardrails for securing agent interactions as well.

Finally, they suggest decoupling content processing from task planning.One way of accomplishing this is with a dual-LLM design: one which is more privileged that processes trusted user input and plans tasks, and a quarantined LLM with no tool access processed untrusted input. Google’s CaMeL tracks data that moves through the system via metadata tags, and it‘s able to determine whether an action is allowed to execute based on whether the sources of the data match the allowed recipients.

With the rise of agentic browsers like ChatGPT Atlas, Opera Neon, and Perplexity’s Comet Browser, not to mention mainstream browsers like Chrome implementing agentic features, it’s going to be very important that the companies implementing these features take every precaution to make them as secure as possible. Agentic AI is open a hornets nest of new security problems that all need to be carefully addressed, but it seems like many companies are simply jumping onto the AI bandwagon without much thought as to how to do it securely.

Agentic features are now being introduces into operating systems such as Windows as well. Maybe it pays to ignore the hype and tread carefully when people’s data is at stake.

Community Discussion