Alphabet reported Wednesday that its own AI models, including Gemini, now process more than 16 billion tokens per minute via direct API use, up from 10 billion the previous quarter—a roughly 60% jump.
That is good news for anyone selling AI infrastructure: chipmakers, networking suppliers, cloud providers and software companies. It is less comforting for security and governance teams trying to understand what all that activity represents.
Alphabet’s 16 billion-token-per-minute figure does not measure security risk. But it does measure something security teams should care about: AI systems are moving from experiments into production-scale workflows.
And once those workflows need access to real systems, OAuth becomes part of the security story.
When OAuth Tokens Become Workflow
OAuth is one of the industry’s default ways to grant delegated access across apps, cloud services and consumer platforms. It lets one app act inside another without asking users to hand over their passwords. Instead, a user or organization grants permission, and the app receives a token with rules attached.
You’ve used it. Everyone has. It’s the “Sign in with Google,” “Sign in with Microsoft,” “Sign in with Apple” or “connect this app” button you click without reading closely. Technically, many of those sign-in systems use OpenID Connect, an identity layer built on top of OAuth. But to most users, it all looks the same: one button, one approval screen, one more app asking to act on your behalf.
For years, that bargain was simple enough. One app asked for access. A user approved it. A token was issued. The app did its job.
One app. One consent screen. One grant.
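That pre-agent flow is simple enough to model in a few lines. This is a minimal sketch, not any provider's real API; the app name and scope strings are made up:

```python
# Minimal sketch of the classic OAuth bargain: one app, one consent,
# one scoped token. Names here are illustrative, not a real provider's API.
from dataclasses import dataclass


@dataclass(frozen=True)
class Token:
    app: str
    scopes: frozenset          # the "rules attached" to the grant


def user_consents(app: str, requested_scopes: set) -> Token:
    # In a real flow this is a consent screen plus an authorization-code
    # exchange; here consent is simply assumed granted.
    return Token(app=app, scopes=frozenset(requested_scopes))


def api_call(token: Token, required_scope: str) -> bool:
    # The API enforces each individual request against the token's scopes.
    return required_scope in token.scopes


token = user_consents("calendar-helper", {"calendar.read"})
print(api_call(token, "calendar.read"))   # -> True: scope was granted
print(api_call(token, "mail.read"))       # -> False: never granted
```

One key, one door: the token can open exactly what the user approved, and nothing else.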
Then agents showed up.
Now the same model looks more like this: one agent, multiple authorized connections, chained together into a workflow that moves faster than most people can follow. Read email. Extract meeting details. Create calendar events. Notify Slack. Update CRM. Maybe summarize everything while it’s at it. Every step is authorized. Every token is valid. Every API call is technically correct.

And too often, no one is looking at the whole chain.
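A toy model makes the gap concrete. Each hop below checks only its own grant, and nothing ever evaluates the chain as a whole; the service and scope names are hypothetical:

```python
# Toy model of an agent chaining separately granted OAuth tokens.
# Each step validates only its own token; no component sees the full chain.
GRANTS = {                      # hypothetical scopes the user approved
    "gmail":    {"mail.read"},
    "calendar": {"calendar.write"},
    "slack":    {"chat.post"},
    "crm":      {"records.update"},
}

WORKFLOW = [                    # the agent's plan: (service, scope needed)
    ("gmail",    "mail.read"),
    ("calendar", "calendar.write"),
    ("slack",    "chat.post"),
    ("crm",      "records.update"),
]

executed = []
for service, scope in WORKFLOW:
    if scope in GRANTS.get(service, set()):   # every hop is "valid"
        executed.append(f"{service}:{scope}")

# Every single step passed an authorization check...
assert len(executed) == len(WORKFLOW)
# ...but nothing above ever looked at the sequence as a whole.
print(" -> ".join(executed))
```

Each `if` check is the entire extent of the oversight: per-hop yes/no, with no view of what the hops add up to.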
That is the agentic-era shift. The old mental model—one key opens one door—no longer captures the risk. The better question is what autonomous systems can access, what sensitive data they can reach and how those permissions combine when an agent strings multiple actions together.
“The original problem statement was pretty narrowly focused,” said Aaron Parecki, a longtime OAuth contributor.
The protocol originally solved a narrow but important problem: letting one application access user data without asking users to hand over passwords to random apps. That narrowness helped OAuth survive. Parecki said one of OAuth’s important design choices was avoiding overengineering the protocol, leaving room for the ecosystem to evolve as applications, tokens and implementation patterns changed.
But AI is testing that flexibility.
The issue is not always a single compromised token going rogue. It is the unintended consequence of an AI agent orchestrating multiple tasks across multiple OAuth grants.
Parecki said the problem is real, but it is less a built-in failure of OAuth than a byproduct of convenience.
“I don’t want to bother my user every time, so we’re going to ask them once, and we’re just going to remember that thing,” Parecki said.
That trade-off made sense when the alternative was pestering users with consent prompts every few days. But it also created a long tail of durable app connections that users rarely inspect, sometimes called OAuth sprawl.
AI did not create that problem. It made it more visible, more frequent and more consequential.
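The audit that sprawl calls for is not complicated, which makes it striking how rarely it happens. A sketch, with made-up grant records and an assumed 90-day staleness threshold:

```python
# Sketch of the stale-grant review users rarely perform.
# Grant records and the 90-day threshold are illustrative assumptions.
from datetime import date, timedelta

grants = [
    {"app": "inbox-agent",     "last_used": date(2025, 6, 1)},
    {"app": "old-todo-sync",   "last_used": date(2023, 1, 15)},
    {"app": "conference-tool", "last_used": date(2022, 9, 3)},
]

today = date(2025, 6, 10)
stale = [g["app"] for g in grants
         if today - g["last_used"] > timedelta(days=90)]

# Durable connections nobody has touched in months are revocation candidates.
print("candidates for revocation:", stale)
```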
Juggling OAuth Tokens in the Real World
Time for a real-world example.
You spin up an AI agent to “organize your inbox.” Seems harmless. It needs access to your email, your calendar and maybe Slack so it can ping you when something important comes in.
You grant access. OAuth does its job. Tokens are issued. APIs enforce the rules. From an OAuth/API authorization standpoint, everything is working exactly as designed. Now guess what happens.
The agent reads your inbox, and not just today’s messages: it pulls in more history than the task requires, because more context usually improves the answer. That sounds reasonable until “context” starts to include old HR threads, customer disputes and the kind of messages everyone assumed had quietly sunk to the bottom of the inbox.

None of that requires a hack or a breach, and there were no stolen credentials. Just valid permissions, used enthusiastically.
We’re very good at asking, “Who has access?” We’re much worse at asking, “What is using that access, and what is it actually doing with it?”
The OAuth protocol doesn’t understand the agent’s intent, context or how multiple tokens are being used together. Every step is authorized. Every token is valid. Every API call is technically correct. That is the horrifying part.
The machine is not breaking the rules. It is following them into a ditch.
The real risk isn’t just over-permissioned tokens. It’s what happens when an agent chains together multiple perfectly valid permissions — and operates faster and more broadly than most humans would.
32 Flavors of Agentic Misery
The new token headaches are really just new spins on old risks.
Tokens get stolen. GitHub had to deal with attackers abusing OAuth tokens tied to third-party integrations from Heroku and Travis CI to pull private repo data.

Supply chains still get compromised. In April, a malicious version of Bitwarden’s CLI was briefly pushed through npm and started behaving like a credential-harvesting tool before anyone could blink.
Over-permissioned access is still everywhere. Stale OAuth grants, refresh tokens and long-lived app connections still hang around like that gym membership you forgot to cancel.
But those are almost the easy problems now. At least they look like attacks.
The harder problem is when nothing looks wrong.
BYO-AI Agent
This brings us to the most dangerous person in enterprise technology: a well-meaning employee who found a workaround.
The phrase “it worked at home” is how many bad enterprise ideas begin, right after “I found this free Chrome extension.”
And unlike old shadow IT, this does not just sit in a browser tab waiting to be discovered during an audit. It acts. It moves data. It makes decisions. It is shadow IT with a calendar invite.
Consumer automation and AI tools are handing users agent-level power before many companies have agentic governance in place.

The danger is not that no-code tools like Zapier, Make and n8n are inherently risky. They are not. The danger is that DIY workflow automation platforms make it easy for users to create automations that connect personal and work apps, request OAuth permissions, move data across systems, run continuously and operate outside normal IT oversight.
It’s like giving the summer intern a badge, a company car, access to the mailroom and the CFO’s calendar because each request sounded reasonable on its own. Individually, no problem. Collectively, you have no idea what’s going to happen next.
Abhishek Agrawal, CEO of Material Security, described a darker version of that same shadow AI security risk.
In a demonstration, he showed how a legit-looking AI email triage tool could ask a user to connect a Google account and then pull years of mailbox data into an attacker-controlled destination.
“The problem that security teams have is that I can build an app like this in five seconds with Claude,” Agrawal said. “I can publish it to the internet. I can send a phishing email that tells you to go try out this app, and some user is going to fall for it and click, click, click through and give their access.”
At that point, the problem is not that the user typed a password into a fake login page. The user authorized the app. The app received access. The data was compromised.
Plugging the AI Dam After It Bursts
In the examples above, this is where governance should kick in: app allowlists, OAuth app review, admin approval for sensitive scopes, DLP, audit logs, agent identity and human approval for risky actions. Without those controls, a company may not know an agent exists until data has already crossed a boundary.
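A pre-grant policy gate of that kind can be sketched in a few lines. The allowlist, scope names and decision strings here are illustrative, not any vendor's product:

```python
# Sketch of a pre-grant policy gate: an app allowlist plus admin approval
# for sensitive scopes. All policy values here are illustrative.
ALLOWLIST = {"inbox-agent", "calendar-helper"}
SENSITIVE_SCOPES = {"mail.read_all", "drive.read_all"}


def review_grant(app: str, scopes: set) -> str:
    # Unknown apps never get tokens at all.
    if app not in ALLOWLIST:
        return "blocked: app not on allowlist"
    # Known apps asking for broad data access get a human in the loop.
    if scopes & SENSITIVE_SCOPES:
        return "held: sensitive scope requires admin approval"
    return "approved"


print(review_grant("inbox-agent", {"calendar.write"}))
print(review_grant("inbox-agent", {"mail.read_all"}))
print(review_grant("mystery-app", {"chat.post"}))
```

The point of the gate is placement: the decision happens before the token exists, not after data has already moved.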
Parecki said enterprise blind spots exist. Some major platforms expose OAuth grants to administrators, but many SaaS applications still leave gaps. A worker can connect an agent or chatbot to another enterprise app without the identity provider or corporate administrators seeing the full connection, unless the target service, identity provider or security tooling exposes it.
“The crazy thing about this is that right now, the enterprise identity provider and enterprise admins don’t even see that happening unless the thing they’re connecting to exposes it,” Parecki said.
That visibility gap is where the cybersecurity industry is now racing to build guardrails.
The market response is splintering into three camps. One focuses on identity, treating agents as first-class entities that need owners, lifecycle management, least-privilege access and revocation. Companies such as Okta are pushing this model.
Another focuses on the gateway, placing a policy checkpoint between agents and the tools they use. That layer can monitor, restrict and log agent behavior before an autonomous workflow reaches sensitive data. Databricks and Cequence Security are advancing variations of that approach.
A third focuses on the OAuth layer itself: finding risky SaaS app connections, evaluating what they can access and revoking grants that look excessive, dormant or malicious.
Agrawal said security teams need to know whether an app’s OAuth access comes from a reputable vendor, whether its requested scopes make sense, which user is granting access and whether the app behaves consistently with its stated purpose.
Material’s OAuth Remediation Agent applies that model to Google Workspace OAuth connections. Its system evaluates vendor reputation, scope risk, user blast radius and app behavior, then can revoke risky tokens automatically or route them for human review.
“If you take each of these four dimensions, they’re what a really, really good analyst would do when trying to make a verdict on whether to trust this app or not,” Agrawal said. “Obviously, they never have time to do all those four things. It’s always the last priority.”
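Those four dimensions lend themselves to a simple scoring sketch. The weights and field names below are illustrative assumptions, not Material's actual model:

```python
# Sketch of scoring an OAuth app on the four dimensions Agrawal names:
# vendor reputation, scope risk, user blast radius, app behavior.
# Weights, thresholds and field names are illustrative assumptions.
def risk_score(app: dict) -> float:
    score = 0.0
    score += 0.0 if app["vendor_reputable"] else 0.35
    score += 0.30 if app["scopes_excessive"] else 0.0
    score += min(app["users_granting"] / 100, 1.0) * 0.15   # blast radius
    score += 0.20 if app["behavior_anomalous"] else 0.0
    return round(score, 2)


suspect = {"vendor_reputable": False, "scopes_excessive": True,
           "users_granting": 40, "behavior_anomalous": True}
benign  = {"vendor_reputable": True, "scopes_excessive": False,
           "users_granting": 5, "behavior_anomalous": False}

print(risk_score(suspect))   # high: revoke or route to human review
print(risk_score(benign))    # low: leave alone
```

The design choice worth noting is that no single dimension decides the verdict; it is the combination that separates a risky grant from a routine one.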
The no-code and automation market is moving in the same direction, but unevenly. Companies like Zapier offer stronger governance in Team and Enterprise tiers, including admin visibility, governed automation, audit logs and AI guardrails designed to catch sensitive data exposure or prompt-injection attempts. But those protections are not necessarily a default safety net for every DIY workflow a user creates.
That leaves consumers at a disadvantage. Enterprises can buy layers of visibility, approval and enforcement. Individuals are largely left to manage OAuth sprawl on their own, clicking through consent screens and periodically hunting through account settings for apps they no longer remember connecting.
Ashley Rose, CEO of Living Security, said that consumer reality exposes a failed assumption the security industry keeps returning to: that people can act as their own last line of defense.
“We’ve been running this experiment in the enterprise for two decades,” Rose said. “The result is in: making humans the last line of defense doesn’t work. It didn’t work for phishing, it didn’t work for password hygiene, and it won’t work for permission management.”
How MCP Can Assist
Then there’s MCP—the Model Context Protocol—which a lot of vendors are suddenly very excited about.
MCP is not magic. It is not a security control by itself. It is plumbing. Think of it as a universal adapter that makes it easier for agents to talk to tools. It standardizes the connection, lowers friction and expands what is possible.
OAuth is about authorization. It gives an application or agent permission to access something. Tokens define what that access allows. APIs enforce individual requests.
MCP is about tool connection. It gives agents a standardized way to discover and use tools, data and prompts. MCP servers can expose tools such as sending messages, creating calendar events or querying databases, and MCP clients can discover and call those tools through defined protocol methods.
That makes agent-to-tool workflows easier to build. But MCP does not, by itself, decide whether the agent should use the tool, whether the request is appropriate or whether the broader workflow is safe. The MCP spec says tools may require user consent and that hosts should obtain explicit user consent before invoking tools, but it also says MCP itself cannot enforce those security principles at the protocol level.
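The request shapes involved are small. The method names `tools/list` and `tools/call` come from the MCP specification; the tool name and arguments below are made up:

```python
# Sketch of the JSON-RPC envelopes an MCP client sends to discover and
# invoke tools. Method names follow the MCP spec; the tool is hypothetical.
import json

list_request = {
    "jsonrpc": "2.0", "id": 1,
    "method": "tools/list",          # discover what the server exposes
}

call_request = {
    "jsonrpc": "2.0", "id": 2,
    "method": "tools/call",          # invoke one of those tools
    "params": {
        "name": "create_calendar_event",   # hypothetical tool
        "arguments": {"title": "Sync", "when": "2025-06-10T10:00"},
    },
}

# MCP standardizes this envelope; it says nothing about whether the
# agent *should* be creating this event, or with whose data.
print(json.dumps(call_request, indent=2))
```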
Agrawal described MCP as a layer on top of APIs. It tells the agent what is possible. OAuth governs what is permitted.
“MCP is just a vocabulary,” Agrawal said. “The OAuth connection is really about what are you able to do—what scopes do you have, what access are you permitted.”
That distinction matters because MCP deployments can—and should—be built with fine-grained authorization, scoped OAuth, role-based access controls, user consent and audit logs. The MCP authorization guidance follows OAuth 2.1 conventions for protected MCP servers, and its docs describe OAuth-based flows in which clients obtain access tokens and servers validate those tokens and required permissions.
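What that validation step looks like can be sketched simply. The in-memory token store and scope names here are illustrative; a real server would validate signed tokens against an authorization server per OAuth 2.1:

```python
# Sketch of the check an MCP server performs on an OAuth bearer token
# before executing a tool. The token store and scopes are illustrative;
# real servers validate signed tokens per OAuth 2.1.
VALID_TOKENS = {"tok-abc": {"calendar.write"}}   # token -> granted scopes


def authorize(headers: dict, required_scope: str) -> bool:
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False                              # no credential presented
    scopes = VALID_TOKENS.get(auth.removeprefix("Bearer "))
    # Token must exist and carry the scope this tool requires.
    return scopes is not None and required_scope in scopes


print(authorize({"Authorization": "Bearer tok-abc"}, "calendar.write"))
print(authorize({"Authorization": "Bearer tok-abc"}, "mail.read"))
print(authorize({}, "calendar.write"))
```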
But the protocol is not a substitute for governance. It creates a cleaner way for agents to connect to tools. It does not automatically answer the harder security question: should this agent be allowed to do this thing, with this data, in this context, right now?
The Protocol Isn’t Standing Still
The OAuth protocol is not frozen in amber. Parecki said the next major revision of the protocol—OAuth 2.1—is not a dramatic rewrite so much as a consolidation of practices the industry already agrees are the right way to use OAuth. Separate from that work, OAuth continues to evolve through extensions that add new behaviors and relationships.
“OAuth is like Lego blocks,” Parecki said. “You choose which things you want to use that makes sense for your application and your ecosystem.”

AI is now forcing the industry to assemble some of those blocks differently. Parecki said AI tools have exposed places where today’s OAuth deployments and related standards may not fully fit emerging agentic use cases.
“What we’re seeing right now is AI stress-testing the protocol, surfacing existing challenges we always knew were there much faster,” Parecki said.
The Whole Is a Bigger Problem Than the Parts
OAuth did not suddenly break because AI showed up. The bargain just got riskier.
Agrawal’s warning is that security teams cannot manually review every agent and every app connection in an AI-driven workplace. OAuth grants access. Tokens define scope. APIs enforce individual requests. MCP standardizes how tools are exposed and reached.
Human-in-the-loop review may work at small scale. But the scale of agentic access will make fully manual review increasingly unrealistic.
Parecki’s warning is the protocol version: OAuth can evolve, but standards, vendors and enterprises have to make more of the access chain visible, governable and revocable.
Every permission can look reasonable in isolation. The risk emerges when an agent starts putting them together.
The better question is: What is using that access, how far can it go, and who—if anyone—is watching?
Image by Shawn Suttle from Pixabay