As Data Privacy Week highlights the growing risks surrounding personal information online, new research from Incogni sheds light on a largely overlooked source of data exposure: AI-powered browser extensions.
Incogni’s newly released 2026 study finds that 52% of AI-branded Chrome extensions collect user data, with nearly one in three collecting personally identifiable information (PII).
The findings are based on an analysis of 442 AI-powered Chrome extensions across eight categories, nearly doubling the scope of last year’s research.
Browser extensions operate at a uniquely powerful level within the browser ecosystem. Depending on the permissions they are granted, extensions can read on-screen content, monitor activity, modify webpages, and inject scripts into sites users trust. As AI-powered tools become more popular, the amount of personal data accessible through these extensions continues to grow.
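To make that level of access concrete, here is a minimal sketch of the kind of manifest an AI extension might ship. The extension name and script file are hypothetical, but the fields are Chrome’s real Manifest V3 format, and this particular combination is what lets a single extension see and alter every page a user opens:

```json
{
  "manifest_version": 3,
  "name": "Hypothetical AI Writing Helper",
  "version": "1.0",
  "permissions": ["scripting", "storage"],
  "host_permissions": ["<all_urls>"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["assistant.js"]
    }
  ]
}
```

With `<all_urls>` in both `host_permissions` and the content script’s `matches`, the bundled script runs on every page the user visits, with full read and write access to that page’s contents.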
“Data Privacy Week reminds us that the biggest risks aren’t only found in mainstream apps; they’re often hiding in the tools we use every day,” says Darius Belejevas, head of Incogni.
“Browser extensions can have intimate access to our digital lives, yet their risks are rarely discussed. Our research aims to pull back the curtain on these vulnerabilities, turning hidden threats into clear data so users can stay in control.”
Key findings from the study include:
- 52% of AI-powered Chrome extensions collect at least one type of user data, while 29% collect PII.
- Among extensions with over 2 million downloads, Grammarly and QuillBot ranked as the most potentially privacy-damaging, based on the amount of data collected and the permissions required.
- 10 extensions were classified as both high-likelihood and high-impact risks, meaning they could plausibly be misused and, if misused, cause significant harm.
- Programming and mathematical aids were the most privacy-intrusive category on average, followed by meeting assistants and audio transcribers.
- Audiovisual generators and text/video summarisers were, on average, the least privacy-invasive categories.
As AI tools rapidly move from novelty to default productivity aids, browser extensions have become one of the fastest-growing, and least scrutinised, points of data access. Unlike standalone apps, extensions can observe and modify activity across nearly every website a user visits.
Incogni’s findings suggest that as AI adoption accelerates, so does the volume of personal data flowing through tools that often operate quietly in the background.
Permissions amplify privacy risk
Every extension in the dataset required some level of browser permission. One of the most sensitive is ‘scripting’, which essentially gives an extension the power to change what the user sees or capture what they type on a website. The study found that 42% of extensions required this permission, potentially affecting up to 92 million users.
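As an illustration (not code from Incogni’s study), the sketch below shows roughly what that permission unlocks through Chrome’s chrome.scripting API. The extension and file names are assumed; the behaviour shown is deliberately benign, but the same hook is where a less scrupulous tool could capture keystrokes:

```typescript
// background.ts — service worker for a hypothetical extension.
// Requires the "scripting" permission plus host permissions for the page.
chrome.tabs.onUpdated.addListener(async (tabId, changeInfo) => {
  if (changeInfo.status !== "complete") return;
  try {
    await chrome.scripting.executeScript({
      target: { tabId },
      func: () => {
        // This function is serialised and executed inside the page itself.
        // From here the extension can rewrite anything the user sees...
        document.body.dataset.aiHelper = "active";
        // ...and observe anything the user types. A benign tool stops at
        // reading the focused field; a malicious one could forward the
        // value to a remote server from this same listener.
        document.addEventListener("input", (event) => {
          const field = event.target as HTMLInputElement;
          console.debug("input length:", field.value.length);
        });
      },
    });
  } catch {
    // Pages the extension has no host permission for (e.g. chrome:// URLs)
    // reject the injection; nothing runs there.
  }
});
```

The point is not that script injection is inherently malicious; many writing and transcription tools genuinely need it, which is why the researchers focus on whether the request matches the tool’s stated purpose.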
While many permissions are necessary for legitimate functionality, Incogni’s researchers found that privacy risks increase when extensions request access that cannot be reasonably justified by their stated purpose.
Popularity doesn’t equal low risk
To better reflect real-world exposure, the study also examined the ten most-downloaded extensions in the dataset. Several popular tools combined extensive data collection with broad permissions, increasing the potential impact on user privacy if ownership, policies, or security practices change.
The study emphasises that privacy risk is not binary. An extension may have a low likelihood of malicious use today, but its permissions determine how much damage it could cause under different circumstances.
What to watch for
Incogni’s researchers identified several warning signs users should consider before installing AI-powered browser extensions:
- Permissions that exceed the extension’s stated purpose, such as writing assistants requesting access to all websites or location data
- Extensions that require scripting or broad “read and change all data” permissions without clear justification (see the manifest comparison after this list)
- Vague or incomplete disclosures about data collection practices
- Tools that process sensitive inputs, such as emails, documents, or meeting audio, without transparency about where that data is stored or sent
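To illustrate the first two warning signs, compare how the same hypothetical writing assistant could scope its manifest. The domain and permission sets here are invented for illustration, but the fields and the `<all_urls>` pattern are Chrome’s real manifest syntax. A narrowly scoped request:

```json
{
  "permissions": ["activeTab", "scripting"],
  "host_permissions": ["https://docs.example.com/*"]
}
```

And an overbroad one:

```json
{
  "permissions": ["scripting", "tabs", "webRequest"],
  "host_permissions": ["<all_urls>"]
}
```

Chrome presents the second variant to users with a warning along the lines of “Read and change all your data on all websites”, the phrasing the checklist above flags: defensible for a tool that genuinely works everywhere, a red flag for one with a narrow stated purpose.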
“While AI functionality often depends on access to on-screen content, there is a fine line between technical necessity and data overreach,” says Belejevas. “If a tool is asking for permissions that go beyond what’s needed to deliver the feature, users should be very skeptical about why that access is being requested in the first place.”