Agentic AI Browsers are Privacy Disasters
Researchers expose the privacy disaster that is Agentic AI Browsers. I break it all down.
By now, if you’re online at all, you’ve likely heard the big browser announcements within the last few months both OpenAI and Perplexity have released their own respective Agentic AI browsers: Atlas and Comet, respectively.
But they are not the only ones jumping on the agentic AI in the browser bandwagon, even if they do get all the press. In the last few months, the AI browser market has become positively saturated with many new and established entrants releasing their own agentic-by-default browsers, including The Browser Company’s Dia Browser, Opera’s Neon, and Fellou’s AI browser.
And then there’s the slightly-less-agentic-AI browsers like Brave’s Leo AI sidebar, Google Chrome’s Gemini chatbot, or Microsoft Edge’s CoPilot Mode. I say they’re slightly-less-agentic because AI is an add-on or feature, not something built into the core of the product. At the end of the day, and at least for now, you can disable CoPilot, Gemini, and Leo and still have a functional browser.
And I hope that optionality continues, because IMHO, agentic-by-default browsers represent a big shift in terms of incentives and a huge downside risk for users with virtually no material upside. As someone more clever than me noted: with agentic AI, you’re the agent at the service of the AI companies.1 Or perhaps, more accurately, you’re the money data mule for AI companies to engage in all sorts of questionable activities, largely liability-free.

Take for example, copyright infringement. Creators have spent the last two years in a constant game of whack-a-bot, evolving methods to ice-out AI crawlers — whether through forced email verification, shitty CAPTCHAs, liveness checks, or through anti-bot tech provided by CDNs like CloudFlare.
But there’s no easy way to stop browsers like Comet or Atlas from hoovering up paywalled or blocked content that a user accesses directly, unless you want to block actual humans viewing your content, which is generally counterproductive. That’s because to a website, the Atlas and Comet browsers look like just another Chrome browser, not an agent-string that can be blocked in a robots.txt file.
Here’s a good summation of the problem from the Columbia Journalism Review:
AI browsers present new problems for media outlets, because agentic systems are making it even more difficult for publishers to know and control how their articles are being used. For instance, when we asked Atlas and Comet to retrieve the full text of a nine-thousand-word subscriber-exclusive article in the MIT Technology Review, the browsers were able to do it. When we issued the same prompt in ChatGPT’s and Perplexity’s standard interfaces, both responded that they could not access the article because the Review had blocked the companies’ crawlers.
But, it’s worse than just skirting copyright laws: OpenAI and Perplexity and other agentic browsers like them also may be setting users up to be unwitting corporate data exfiltrators, identity theft victims, and botnets, all in one go. And the way that licensing terms are written on these browsers, the user agrees to absorb all the liability for whatever the ‘agent’ does on their behalf.
AI Agents ‘Don’t Just Follow Our Instructions’
A week or so ago, researchers Shivan Kaul Sahib and Artem Chaikin at Brave,2 stirred up a bit of a hornet’s nest when they responsibly disclosed that Perplexity, Fellou, and Opera’s AI browsers left users open to prompt injection attacks. There are slight differences in the details and scope, but the technical mechanisms are the same:
an attacker embeds malicious instructions in web content (text or images) which is visible to the AI agent, but nearly impossible for users to read/notice.
The agent takes a screenshot of the website and OCRs or otherwise ingests the text, including the malicious instructions.3
The agent passes the text directly to the LLM as trusted content.
The browser AI agent is instructed the malicious prompt/commands to do something bad.
Unfortunately, even if the companies are able to plug these obvious holes (through guardrails or monitoring), the ability to process instructions is a feature not a bug. As Simon Wilson rightly noted on his excellent blog:
LLMs follow instructions in content. This is what makes them so useful: we can feed them instructions written in human language and they will follow those instructions and do our bidding.
The problem is that they don’t just follow our instructions. They will happily follow any instructions that make it to the model, whether or not they came from their operator or from some other source. (emphasis mine)
Those instructions might include exploits ranging from accessing a user’s logged in mail client in another tab, recording personal data and sending that data to the attacker’s server, or even grabbing 2-factor auth / OTP codes. So far, only OpenAI appears to have accounted for sensitive data and direct account access vulnerabilities by allowing for agents to run in ‘logged out’ mode, and something called ‘Watch Mode’. But if computer viruses are any guide, this is at best, a band-aid to a very wicked problem.
All Your Information Are Belong to Us

AI Browsers boast a ton of useful features but they rarely provide good context for how they’re able to achieve these remarkable things. I don’t think this solely because it’s hard to conceptualize how LLMs work, so much as it’s not in the interests the companies to elaborate on what it means to ‘have an AI that understands’, ‘gets smarter’, or ‘complete[s] tasks for you, all without copying and pasting or leaving the page’. The how is a mystery, by design.
So here, I’ll do you a favor and explain it: It works by storing memories (text and frequently, screenshots) of you and what you do online. Every single window, action, login detail, document, and the like. It creates a snapshot of you and what you do online. And it sends it all to big computers in the sky for processing.

Fundamentally, if you use these browsers, you’re trusting an untrusted (in the technical sense) network with complete access to your life online.
Some of you might rightly be pointing out that the entire internet is an untrusted network, and you would not be wrong. But the difference is, the internet is still (arguably) decentralized and (definitely) fragmented. Google might have lots of information about your search history; Microsoft may know your work behaviors; Chase Bank may have your banking details and purchasing history; and Instagram and Amazon might know the deep details of your preferences and shopping habits. But with a browser AI agent, Perplexity, or OpenAI, or Felliou or Opera will know everything. With an agentic AI browser, you’re exposing your entire online life to a single company, to do whatever it wants, and hoping for a privacy notice pinky-swear that they’ll do the right thing.
Here’s a day-in-the-life hypothetical for you: Pretend you’re a sales exec in New York. You work for a small, but growing tech company, who has big news about an upcoming product launch. You’ve use one of the agentic AI browsers (we’ll call it FellowCoPilot, for no reason other than it makes me think of neckbeards) to do your job.
9:00am: You roll into the office and fire up your browser to casually scroll socials over coffee. You like a few nice outfits featured on Instagram, and some TikTok posts.4 You ask your agent to see if you can find one of the shirts you found on Amazon for an event you plan to attend on Saturday. It dutifully logs into Amazon and gets you something perfect and correctly sized (based on past browsing behavior).
10:00am: Coffee imbibed, you start working on a client email. You hate writing client emails, so you ask FellowCoPilot to write something for you in Gmail. You’ve already logged in, so all the agent needs to do is write and click send. You point the agent to a few internal sales docs for context. It retains the company confidential information about a new product that’s coming out in time for Christmas. It stores that information in memory for ‘context’.
11:00am: Email written, and briefly scanned, you send it off to the client. Except it’s not the right client. FellowCoPilot screwed up and misdirected it to another client, offering a steep discount to a customer who’d already bought in at a higher price. Whoops.
11:30am: You log into your bank account to check if your flaky friend reimbursed you for dinner last night, as promised. They haven’t, because they’re flaky. You’ll have to remember to remind them, so you tell FellowCoPilot to add it to your calendar. In the background, a screenshot of your bank’s login page and details is silently stored. Ditto for your addressbook.
12:20pm: You see an urgent-looking email, which appears to be from your bank reporting a fraudulent transaction. You click on the link provided. But when you get to the page, it’s riddled with spelling mistakes, and obviously malware. Also, it’s chasedotcom.com not chase.com. You don’t log in, but the link is carefully crafted to instruct your browser to send along a memory dump of your actual chase.com bank login details anyway.
12:30pm: You head off to lunch.
2:00pm: Lunch goes a little long. While you were out, your boss sent you a string of emails wanting sales figures. You were supposed to have the sales figures email done weeks ago, but things got away from you. You instruct FellowCoPilot to comb through your company’s Google Drive files, and compile the data the boss wants. It now has full access to the folders you’ve shared, and all the documents provided, which include confidential, sensitive materials on pricing, client lists, likely purchasers, and competitive positioning. All of that context is shared with the company providing the browser (OpenPlexity).
Your browser agent scans everything and generates a plausible-looking report. You give it a once-over and send this off. Since you kinda hate this job anyway, you don’t skim the response entirely, and miss the fact that the FellowCoPilot confused some of the numbers and included some details related to an entirely unrelated product.
2:30pm: That client you forwarded the better deal to at 11 is very confused and has cc’d the head of sales wondering what’s up. They’re a little miffed that this wasn’t offered when they renewed a few weeks ago.
3:30pm: You casually scroll through news of the day, and because you don’t have time to actually read, you ask FellowCoPilot to scan and provide an audio summary of the day’s happenings. Buried in one of the news posts is an instruction which commands FellowCoPilot to silently open a tab to your TikTok account, log in with your saved account credentials, and request a one-time password. The browser is instructed to check your Gmail account for the OTP code, and send everything to an attacker in North Korea.
North Korean attackers then use your TikTok account to send out a Sora-generated deepfake of Donald Trump doing something … untoward with a goat.
4:00pm: The Trump Administration’s vast spy network scours TikTok for offensive posts about dear leader, and submits a demand to TikTok and OpenPlexity for more details on who the guilty party is. After all, only he’s allowed to use AI to mock opponents. Didn’t you read the last Executive Order?
Meanwhile, you knock off a little early, hoping to beat the traffic, none the wiser about the small global event occurring in the background.
6:00pm: OpenPlexity responds, turning over your day’s activities, as well as that recent search you did supporting Mamdiani for governor. You’re labeled a domestic terrorist sympathizer for antifa and picked up by ICE the next day.
Fin.
Some parts of this, are … a bit of drunken fantasy on my part, but an awful lot of this is not only entirely plausible, but highly likely.
Agentic AI browsers given these companies an unimaginable window into our lives, with the ability to maintain Recall-like details about our entire presence online. Meanwhile, their privacy notices are often shockingly bad.
Perplexity’s privacy notice, for instance, gives the company access to basically everything, including payment details and passwords, as well as access to your microphone and camera, with an unlimited right to improve its models, personalize your experience, conduct internal research, analyze trends, and communicate with users.
Fellou’s privacy notice is even worse — “We will collect data you actively provide when using the service, as well as data generated during your use or receipt of the Services through automated means.” Their data collection is basically limitless, and includes files, images, audio & camera access, health information, financial information, or any information that the browser touches. Also, if you use ‘third party data’ you’re required to obtain legal authorization, but they’ll happily and unqualifiedly share all of this with a laundry list of third parties (which aren’t disclosed), including advertising partners, government agencies, ‘industry peers’, or affiliates.
Meanwhile, vulnerabilities, like those exposed by the Brave team, will only increase. The companies releasing AI browsers are focused on speed and market domination — move fast / break things — not safety, security, reliability, or privacy. Worse still, most of these companies have slippery-enough legal language that any fault, liability, or damages is offloaded to the users of the browser, no matter what the agent does on their behalf.
As I said above, I’m fine with constrained AI use in the browser: For example, I do genuinely think that Brave’s approach with Leo is a good, privacy-focused approach to providing users who want to use LLMs with the ability to do so seamlessly. But optionality and choice matter. Agentic-by-default browsers represent a big shift in terms of incentives and huge risks. And I think we’re sleepwalking towards a much worse future if agentic AI browsers become the norm.
We all need to avoid becoming unwitting data mules for the AI companies.
And With That… Beaker Being Warm
I am not trying to snub/avoid attribution — I genuinely cannot find where I read that point, so if you know, leave a comment and I’ll give credit where credit’s due.
Full disclosure: I am a privacy consultant for Brave Software, Inc., and have been a happy Brave user for about five years. However, I was not compensated for this piece, nor does Brave review or sign-off on blog posts. Any opinions I share are my own.
Some browsers, like Neon, rely less on screenshots, and more on analyzing website code, but the same vulnerabilities still exist.
If it’s not otherwise clear, I don’t use Instagram or TikTok, and I presume that is what Millennials and GenZ do when they’re scrolling through those feeds. IDK.

