You’ve seen the ads. AI that books your flights, clears your inbox, preps your meetings, and texts your contractor — all before your first coffee. Click, click. Instant personal assistant.
What the ads leave out is the fine print. Somewhere in every setup flow, between the glowing UI and the “Get Started” button, is a smaller button that says “Allow.” That’s the moment you hand over a key — not a password, not a login, a persistent digital key. And most of the time, you never get it back.
The key doesn’t come with an expiration date. The tool you connected today, forgot about next month, and stopped paying for in the spring? Still has access. Token still live. Door still open.
That’s OAuth — the protocol behind nearly every “Sign in with Google” and “Connect your account” button on the internet. It was built to make digital life easier. It did. And in the age of AI agents, it became the access layer that makes the risk real.
The key you forgot you gave out
OAuth wasn’t built to create a surveillance problem. It was built to solve a real one. Before it existed, apps asked for your password directly. You’d hand it over and spend the rest of the year hoping the startup you trusted didn’t get breached, sell your credentials, or lose them in a leaky database.

OAuth fixed that. Instead of your password, you grant a token — a scoped, limited key — to the app. Elegant, when you think about it. The app gets what it needs. You keep your password to yourself.
The catch: that grant doesn’t expire. The access token might, but the refresh token behind it quietly mints new ones for as long as the grant stands. The consent prompt doesn’t say “for the next 90 days.” It says “Allow” — and it means forever, or until you manually hunt it down and revoke it. Which, statistically, you won’t.
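What that “Allow” click actually transmits is a list of scopes, not a time limit. Below is a sketch of a Google-style authorization URL (the client ID and redirect URI are hypothetical placeholders; the scope strings are real ones). Notice what is missing: any field that says when access should end.

```python
from urllib.parse import urlencode

# Hypothetical client; the scope strings are real Google OAuth scopes.
params = {
    "client_id": "example-client-id.apps.googleusercontent.com",  # hypothetical
    "redirect_uri": "https://example.app/oauth/callback",          # hypothetical
    "response_type": "code",
    "scope": "https://www.googleapis.com/auth/gmail.readonly "
             "https://www.googleapis.com/auth/calendar",
    "access_type": "offline",  # requests a refresh token: access that outlives the session
}

auth_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
print(auth_url)
# Note what is absent: no expiry parameter, no duration field.
# "offline" access persists until the user explicitly revokes it.
```

The protocol has no slot for “until June” or “for this one task.” Duration was simply never part of the consent vocabulary.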
“These grants tend to last a very long time,” said Aaron Parecki, a longtime OAuth contributor who has spent years helping shape the protocol’s evolution. “That is an artifact of optimizing for user experience.”
He’s right. No one wants an OAuth consent prompt every two days. So we designed for convenience, and now we have the digital equivalent of a junk drawer — except instead of dead batteries and mystery cables, it’s live access tokens connected to your email, your calendar, your files, and your corporate Slack.
Agents just put this problem on a rocket ship
For years, OAuth sprawl was a manageable annoyance. Most employees connected a handful of apps — a CRM integration here, a scheduling tool there — and IT teams at the largest organizations at least tried to keep tabs on it.
Then came the agents.
Google is testing Remy, a personal AI agent designed to manage your digital life around the clock. Anthropic’s Cowork is already generally available, connecting Claude to your files, your calendar, your Slack, and your browser. Every major AI platform is racing to put an agentic layer on top of your data, and every one of those agents needs permission to get there. Via OAuth.
“We are seeing an explosion of connections between things right now,” Parecki said. “AI is just amplifying everything.”
The scale shift is not subtle. Traditional OAuth sprawl happened gradually — one app, one employee, one grant. Agentic AI creates grants in bulk, for purposes that aren’t always obvious at the time, connected to accounts whose data exposure is rarely factored into the approval. A token tied to a newly provisioned analyst account is one thing. The same token tied to the CEO’s inbox — eight years of sensitive correspondence, plus access to Drive, Calendar, and Salesforce — is something categorically different.
Abhishek Agrawal, CEO of Material Security, put it plainly: “There is no scope that says ‘email from the last six months’ versus ‘email for all time.’ That granularity doesn’t exist in the protocol today.”
So when your AI agent wants access to your Gmail to help you prep for a meeting, it doesn’t get a peek. It gets the keys.
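The scope strings themselves make Agrawal’s point. The Gmail scopes below are real ones from Google’s OAuth documentation; each covers the entire mailbox, and none can be bounded by a date range.

```python
# Real Gmail OAuth scopes (per Google's OAuth 2.0 scope documentation).
# Each applies to the whole mailbox; none can be limited to a time window.
GMAIL_SCOPES = {
    "https://www.googleapis.com/auth/gmail.readonly": "read every message, ever",
    "https://www.googleapis.com/auth/gmail.modify":   "read, label, and archive everything",
    "https://mail.google.com/":                       "full access, including delete",
}

for scope, meaning in GMAIL_SCOPES.items():
    print(f"{scope} -> {meaning}")

# The granularity Agrawal describes ("email from the last six months")
# has no scope string at all. Time-bounded access is not in the protocol.
```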
The question no one can answer
Security teams had a visibility problem long before the agents arrived. The agents just made it much worse.
When an employee connects an AI tool to their corporate wiki, their personal calendar, or their Google Workspace account, that connection happens between the employee and the third-party service. The enterprise identity provider — the system that’s supposed to govern who has access to what — isn’t in the room.

“The enterprise IDP and enterprise admins don’t even see that happening unless the thing they’re connecting to exposes it,” Parecki said. “It’s a huge blind spot right now in the enterprise.”
Most SaaS startups don’t build the admin tooling that would make those grants visible. They’re building features, not governance dashboards. So the connections accumulate, invisible and unaudited, while the enterprise’s perimeter controls watch the wrong door.
New research from Material Security puts numbers on what security teams already know in their gut: 80% of security leaders consider unmanaged OAuth grants a critical or significant risk. Nearly half — 45% — are doing nothing to monitor them at scale. A third are running manual processes: spreadsheets, ad hoc reviews, the occasional “has anyone checked this lately?”
Spreadsheets are not a threat response capability. They’re a record of how much exposure an organization doesn’t know it has.
The attack you didn’t see coming
The theoretical risk became very concrete, very recently.
On April 19, Vercel disclosed a security incident that started not with a phishing email or a zero-day exploit, but with an AI tool a Vercel employee had connected to their Google Workspace account. The tool was Context.ai, a third-party AI office suite. Months earlier, a Context.ai employee had downloaded a malicious video game exploit — later identified as Lumma Stealer malware — on their workstation.
That malware harvested OAuth tokens. Attackers used one to access Context.ai’s AWS environment. From there, they pivoted: at least one Vercel employee had previously authorized Context.ai with broad “Allow All” Google Workspace permissions, and that OAuth token was still live. The attackers used it to walk into Vercel’s internal systems. Then they walked into Vercel’s customers.
A threat actor using the ShinyHunters name claimed responsibility. The alleged dataset — internal database contents, API keys, GitHub tokens, source code repositories — was listed on BreachForums for $2 million. Attribution remains unverified; actors previously linked to ShinyHunters have denied involvement.

It’s a double supply chain attack: compromise the AI tool, use the AI tool’s OAuth access to compromise its customers, then pivot further. At no point did anyone need to guess a password. At no point did MFA fire. The token was legitimate. The integration was trusted. The access had been granted, in good faith, by a real user who clicked “Allow All” and moved on with their day.
Ashley Rose, CEO of Living Security, framed the compounding risk clearly: “Token theft. OAuth tokens are increasingly the target of attackers because they bypass MFA and often survive password changes. A forgotten app with a live token is a backdoor you don’t know is open.”
The Vercel incident isn’t an edge case. It’s a preview. Rose calls it the supply chain problem: “When a small vendor with broad permissions gets compromised, every user it ever touched is exposed.” The threat isn’t the apps you remember. It’s the ones you don’t.
Why the end user can’t fix this alone
If you’re thinking “users just need to be more careful about what they connect,” you’ve met the enemy and it is not the user.
Rose has spent years studying how humans actually behave inside security systems, and her verdict is unsparing: “We’ve been running this experiment in the enterprise for two decades. The result is in: making humans the last line of defense doesn’t work. It didn’t work for phishing, it didn’t work for password hygiene, and it won’t work for permission management.”

The design of the consent flow is working against users, not with them. The “Allow” button appears at peak intent — you want the app, you want it now — and the path of least resistance is to click through. There’s no equivalent moment for revocation. You grant access in a moment of motivation. You’d revoke it in a moment of audit. Those moments almost never come.
Agrawal sees the same dynamic from the enterprise side. “Most end users are operating on the principle: if I’m able to do it, it must mean I’m allowed to do it,” he said. “They’re looking for the security team to have implemented the guardrails.”
The problem is, in most organizations, those guardrails don’t exist yet. The agents are already running. The tokens are already out there. And the security team is still building the spreadsheet.
What has to change
The protocol is evolving, but deliberately — which is appropriate for a standard that powers a significant slice of the internet and can’t afford to break.
Parecki has spent the past two years working on what he calls enterprise managed authorization — a mechanism that puts the corporate identity provider back in the middle of agentic connections, so that when Claude or any other AI tool requests access to a company resource, the IDP sees it, governs it, and can revoke it. The MCP community is working toward the same model.
“You don’t need every SaaS startup to build sophisticated governance interfaces,” Parecki said. “If the corporate wiki only issues access tokens in response to tokens the enterprise IDP has signed, you move governance to the IDP. One place. One policy.”
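The mechanics Parecki is describing look a lot like OAuth 2.0 Token Exchange (RFC 8693), in which a resource issues its own tokens only in response to a token the enterprise IDP has signed. A minimal sketch, with hypothetical endpoints and a placeholder assertion:

```python
# Sketch of an RFC 8693 token-exchange request, the kind of building block
# behind IDP-mediated access. Resource, client, and assertion values are
# hypothetical placeholders.
from urllib.parse import urlencode

token_request = {
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    # The agent presents a token the enterprise IDP issued and signed:
    "subject_token": "<jwt-signed-by-enterprise-idp>",  # placeholder
    "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
    # The resource it wants, expressed so policy can be applied centrally:
    "resource": "https://wiki.example-corp.com",  # hypothetical corporate wiki
    "scope": "wiki.read",
}

body = urlencode(token_request)
print(body)
# The wiki's authorization server validates the IDP's signature before
# issuing its own access token, so governance and revocation live in
# one place: the IDP.
```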
Infrastructure players are also moving. Cloudflare recently added managed OAuth support to its Access platform — now in open beta — so that AI agents can authenticate against internal applications through a governed, standards-based flow rather than through ungoverned tokens. It’s a signal that the plumbing is starting to catch up to the threat.
On the product side, Material Security’s OAuth Threat Remediation Agent takes a different approach: continuous behavioral monitoring of every connected app, not just new ones at installation. The agent evaluates vendor reputation, scope risk, and the blast radius of the connected account — and can revoke high-risk grants automatically before harm is done.
Rose’s ask is more fundamental: redesign the consent experience itself. She argues for a quarterly permissions statement, structured like a bank statement, showing every active, dormant, and forgotten grant in plain English.
“The goal isn’t to turn people into security analysts,” she said. “It’s to give them one moment, four times a year, where the system is on their side.”
The door is open
I cleaned up my own Google account while reporting this story. Revoked a dozen apps. Felt briefly virtuous about it.
Then I remembered I haven’t checked my Microsoft account. Or my GitHub. Or my LinkedIn. Or the half-dozen other platforms where I’ve clicked “Allow” and moved on with my life.
This is the condition most of us are in — enterprise or consumer, security-conscious or not. The tokens are out there. Some of them are connected to accounts with years of sensitive data. Some of them belong to apps that have changed ownership, changed purpose, or quietly gone sideways. Most of them have never been audited.
And now the AI agent era is here, promising to do more for us than any app we’ve connected before — booking flights, drafting emails, managing projects, acting in our name. Every one of those agents needs a token to get started.
“A misconfigured agent isn’t a privacy leak,” Rose said. “It’s an action taken in your name.”
The door is open. The question is whether we figure out who’s walking through it before they do.
Nobody’s watching your back out here
Here’s what the enterprise conversation tends to skip: all of it — the IDP governance, the behavioral monitoring, the blast radius assessment, the automated remediation — is built for organizations. It assumes there’s an IT team, an identity provider, a security policy, and someone whose job is to watch the dashboard.

Most people don’t have any of that.
The CEO who connected an AI scheduling assistant to her personal Gmail. The freelance developer who wired three AI coding tools to his GitHub account and his Google Drive. The 16-year-old who gave an AI agent access to his Spotify, his contacts, and his calendar to build some kind of vibe-based social planner and hasn’t opened the app since March. None of them have an enterprise IDP sitting in the middle. None of them have a security team running quarterly audits. None of them have anyone watching what those tokens are doing at 2 a.m.
This isn’t a new problem. It’s the latest version of one we’ve been living with since smartphones handed every app on earth a permission slip and hoped users would read the fine print.
The old story was mobile app permissions. Apps asking for your camera, your contacts, your location. Some of them needed it. A lot of them didn’t. Most users clicked through without thinking, and the data moved anyway.
OAuth looks cleaner than that. More modern. More secure. In a lot of ways, it genuinely is — it eliminates password sharing and creates a formal consent step. But it recreates the same pattern in cloud form. A service asks for access. The user clicks Allow. Data starts moving in the background.
The difference is where the permission lives — and how long it stays. A mobile app accessed your contacts from your phone. An OAuth-connected app uses a token to reach into your email, your calendar, your files, your source code, your cloud storage, through APIs. The app doesn’t need to remain installed on your device. It can sit in the cloud, keep refreshing its access, and continue operating long after you’ve forgotten it exists.
That makes OAuth less like a broken lock than a forgotten spare key. You gave it out deliberately. You had good reasons at the time. The key still works.
The protocol isn’t the scandal. The problem is over-permissioning, weak visibility, and trust that persists longer than memory. The old app-permission story was about apps asking for too much. The OAuth version is about apps keeping too much, for too long — and nobody watching what they do with it.
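The “keep refreshing its access” part, for what it’s worth, is not an exploit. It is the standard OAuth refresh-token grant, sketched below with hypothetical placeholder values. Note what is never involved: a password, an MFA prompt, or a user.

```python
# How a connected cloud app keeps its access alive: the standard OAuth
# refresh-token grant. All values below are hypothetical placeholders.
from urllib.parse import urlencode

refresh_request = {
    "grant_type": "refresh_token",
    "refresh_token": "<long-lived-token-from-the-original-Allow-click>",
    "client_id": "example-client-id",          # hypothetical
    "client_secret": "example-client-secret",  # hypothetical
}

# POSTed to the provider's token endpoint, this returns a fresh access
# token. No password is sent, no MFA fires, and no user is present.
print(urlencode(refresh_request))
```

Revoking the grant is the only thing that stops this loop, which is exactly why the checklist that follows matters.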
In the enterprise, at least, the tools are arriving. The protocol is evolving. Someone, eventually, will be paid to care. For everyone else, the checklist below is the closest thing to a security team you’ve got.
Your OAuth Sprawl: Go Check Right Now
Here’s the thing about your connected apps: the list is longer than you think, and the door is open until you close it. Below are direct links to the permissions screens for the platforms most likely to have accumulated years of stale tokens on your behalf. Set aside 15 minutes. You’ll be uncomfortable. Do it anyway.
A broader directory of OAuth review pages across additional platforms is maintained at indieweb.org/appaccess.
| Platform | Where to go | What to do |
| --- | --- | --- |
| Google | myaccount.google.com/connections | Review Sign in with Google and Access to your Google Account. Remove anything you don’t recognize or haven’t used recently. |
| Microsoft (personal) | account.microsoft.com/privacy/app-access | Check apps and services with account access. Revoke anything unfamiliar. |
| Microsoft (work/school) | myapps.microsoft.com | Open your profile settings and review app permissions. Admins can also manage this in Entra ID. |
| Apple | account.apple.com | Look for Sign in with Apple / Apps using Apple ID. Stop using apps you no longer trust. |
| Meta/Facebook | facebook.com/settings?tab=applications | Review connected apps, games, and websites. Remove the old ones. |
| Instagram | instagram.com/accounts/manage_access | Check Active apps. Remove anything you haven’t touched in months. |
| X (Twitter) | twitter.com/settings/connected_apps | Settings → Security and account access → Apps and sessions → Connected apps. |
| LinkedIn | linkedin.com/psettings/permitted-services | Remove third-party services you no longer use. |
| Amazon | amazon.com/ap/adam | Select Remove next to any app you want to cut off. |
| GitHub | github.com/settings/applications | Review Authorized OAuth Apps and GitHub Apps. Revoke unused developer tools. |
| GitLab | gitlab.com/-/profile/applications | Revoke old OAuth applications and integrations. |
| Slack | slack.com/apps/manage | Review installed apps in each workspace. Remove anything you don’t recognize or no longer need. |
| Atlassian | id.atlassian.com/manage-profile/apps | Review Apps with access to your Atlassian account and revoke unused grants. |
| Dropbox | dropbox.com/account/connected_apps | Remove linked apps you no longer use. |
| Discord | User Settings → Authorized Apps | Deauthorize any apps you don’t recognize. |
| Reddit | reddit.com/prefs/apps | Revoke apps you no longer use. |
| Yahoo | login.yahoo.com/account/security | Check External connections and delete unused access. |
| Zoom | marketplace.zoom.us/user/installed | Review installed apps. Remove anything stale. |
| Salesforce | Salesforce Setup | Search Connected Apps OAuth Usage and revoke app access. |
| Adobe | account.adobe.com | Check connected apps and integrations under security/account settings. |
What to look for: Apps from companies you don’t recognize. Apps connected to accounts with years of sensitive data — your primary email, your work inbox, your file storage. Apps that were useful once and forgotten. Any app that requests more access than its function obviously requires.
You won’t fix this once and be done. But you can start today, and set a reminder to come back in 90 days. That’s closer to a security posture than anything you’ll find in the terms of service you agreed to the last time you clicked Allow.