Microsoft says AI agents will soon outnumber human workers, and it wants those agents to authenticate, request permissions, and follow policy like employees. That idea, and the security risk wrapped inside it, set the tone at Microsoft Ignite 2025, where the company unveiled a new identity platform designed to govern the millions of autonomous software agents forecast to overrun corporate networks.
Identity security, backed by a deep slate of identity-focused sessions, drove this year’s Ignite and much of its security narrative. Microsoft’s take turned long-standing IAM problems on their head: AI is no longer just a workload, the company proclaimed; it’s a workforce.
Ignite, which ran from Nov. 18–21 in San Francisco and drew 20,000 attendees, treated AI systems as entities that need user accounts, access controls, and Zero Trust enforcement — the same configurations and protections used to manage people.
Microsoft’s main point: it’s time to position identity as the safety fabric for AI. Over the course of Ignite, the company showed how it is building governance for agents even as enterprises are still working on securing humans. That tension — innovation layered atop unfinished business — defined Microsoft’s parade of announcements at Ignite.
A New Workforce, Ready or Not
Microsoft didn’t tiptoe around the magnitude of the shift, citing the same IDC projection in keynote after keynote that “1.3 billion AI agents could be active by 2028.” Satya Nadella cited it in his opening keynote. So did identity VP Eric Sachs, who said the coming agentic AI boom demands a fresh approach to security and management.
To meet that future, Microsoft introduced Entra Agent ID, a new identity class for AI systems. It comes packaged with an Agent Registry that catalogs every agent in an organization and Agent Blueprints that define what agents can access, how long they live, and what guardrails they must obey.
The idea is straightforward. If agents behave like employees, they need employee-grade controls. If they access sensitive systems, they need accountability. If they take actions on behalf of a user, they need auditable trails.
And if an agent misbehaves — or is hijacked — administrators need a kill switch.
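Microsoft hasn’t published the Agent Blueprint schema in detail, but the lifecycle the announcements describe (register the agent, scope its access, expire its identity, revoke it on demand) can be sketched in a few lines. Everything below is illustrative: the class and method names are hypothetical, not Entra’s actual API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentBlueprint:
    """Hypothetical guardrail template: what an agent may touch, how long it lives."""
    allowed_scopes: frozenset[str]
    lifetime: timedelta

@dataclass
class AgentRecord:
    agent_id: str
    blueprint: AgentBlueprint
    issued_at: datetime
    revoked: bool = False

class AgentRegistry:
    """Illustrative catalog of every agent in an org, with a kill switch."""

    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, agent_id: str, blueprint: AgentBlueprint) -> None:
        self._agents[agent_id] = AgentRecord(
            agent_id, blueprint, datetime.now(timezone.utc)
        )

    def authorize(self, agent_id: str, scope: str) -> bool:
        rec = self._agents.get(agent_id)
        if rec is None or rec.revoked:
            return False  # unknown or killed agent
        if datetime.now(timezone.utc) - rec.issued_at > rec.blueprint.lifetime:
            return False  # identity has outlived its blueprint
        return scope in rec.blueprint.allowed_scopes  # least privilege

    def kill(self, agent_id: str) -> None:
        """The kill switch: a hijacked agent loses all access immediately."""
        if agent_id in self._agents:
            self._agents[agent_id].revoked = True
```

The design point is that access flows from the blueprint, never from the agent itself, so revocation is a single flag flip rather than a hunt across systems.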
“Agents are not going to be pets. They’re going to be more like a school of fish,” said Alex Simons, corporate VP, product management & security engineering team, Microsoft Identity and Network Access Division.
This is an area where competitors are already circling. CyberArk, BeyondTrust, Delinea, and SailPoint all expanded machine-identity and bot governance platforms over the past year. But Microsoft has one advantage: its identity plane is already embedded in more than 720,000 organizations. If Entra becomes the de facto directory for agents, Microsoft wins the governance layer by default.
Zero Trust, Rewritten for an AI Workforce
Sachs, who leads Microsoft’s identity engineering group, redrew Microsoft’s classic Zero Trust diagram on stage. This time, AI sat beside “users” and “devices” as a primary actor. Sachs’ point was that the guardrails built for humans must now apply to AI systems. Agents call APIs. They trigger workflows. They process credentials. They shouldn’t get a free pass.
“Microsoft Entra is the foundation of Zero Trust strategy, allowing organizations to manage AI agents similarly to human users,” said Joy Chik, president of Identity & Network Access at Microsoft, during her keynote.
The reality is most enterprises aren’t anywhere near ready. Corporate internal GenAI projects often run outside identity controls. Teams pour sensitive data into internal models with fewer restrictions than they give contractors. By pulling agents into Entra, Microsoft is forcing companies to confront risks many haven’t even started to assess, Sachs said.
He broke the model down into short, sharp rules. Verify agents explicitly. Enforce least privilege. Assume agents can be compromised. These principles aren’t new. The actor applying them is.
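Those three rules compress into a single decision function. This is a toy sketch of a Zero Trust check applied to an agent’s request, not Entra’s actual evaluation logic; the claim names follow standard JWT conventions, and the risk threshold is an assumed value.

```python
import time

def evaluate_agent_request(claims: dict, requested_scope: str,
                           granted_scopes: set[str], risk_score: float) -> str:
    """Toy Zero Trust decision for one agent call. Illustrative only."""
    # 1. Verify explicitly: a valid subject and an unexpired token,
    #    never trust based on network location.
    if not claims.get("sub") or claims.get("exp", 0) <= time.time():
        return "deny"
    # 2. Enforce least privilege: the call must stay inside granted scopes.
    if requested_scope not in granted_scopes:
        return "deny"
    # 3. Assume compromise: anomalous signals trigger a step-up
    #    challenge rather than silent trust.
    if risk_score >= 0.7:
        return "challenge"
    return "allow"
```

Nothing here is specific to AI; that is exactly Sachs’ argument. The actor changes, the checks do not.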
AI Safety Shifts to the Network Layer
Among Ignite’s many announcements, one stood out as a genuine curveball. Using Intune Internet Access, Microsoft demonstrated network-layer detection of sensitive file uploads bound for generative AI tools. It also showed it could spot prompt-injection patterns hidden in documents and stop those files before they reached an AI endpoint.
This marks a significant expansion of Microsoft’s security footprint. For years, secure web gateway vendors like Zscaler and Netskope controlled this territory. Now Microsoft is pushing AI-specific enforcement into its own infrastructure and tying those decisions directly to Entra identity, device state, and conditional access posture.
It’s one of the rare Ignite moments that didn’t feel iterative. It showed Microsoft bending network security toward AI safety — not the other way around.
Security Copilot Goes From Hype to Hard Numbers
Last year, Security Copilot looked like a promising experiment. This year, Microsoft arrived with hard data.
The phishing triage agent cuts SOC workload by 78 percent and boosts accuracy by 77 percent. St. Luke’s, a healthcare system, saves 200 hours a month by routing phishing analysis to agents. The conditional access optimization agent helps admins identify redundant rules and reveal missing Zero Trust baselines. A new device offboarding agent finds dormant devices in Entra that have fallen out of Intune management.
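At its core, the optimization agent’s redundancy check reduces to a subsumption test: a policy is redundant when a broader policy covers at least the same users and apps while enforcing at least the same controls. A minimal sketch, with a deliberately simplified policy model that is not Microsoft’s schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    """Simplified conditional access rule: who it targets, what it enforces."""
    name: str
    users: frozenset[str]     # targeted groups
    apps: frozenset[str]      # targeted applications
    controls: frozenset[str]  # e.g. {"mfa", "compliant_device"}

def redundant(candidate: Policy, policies: list[Policy]) -> bool:
    """True if some other policy targets a superset of the candidate's
    users and apps while enforcing a superset of its controls."""
    return any(
        candidate.users <= p.users
        and candidate.apps <= p.apps
        and candidate.controls <= p.controls
        for p in policies
        if p is not candidate
    )
```

Real tenants complicate this with exclusions, named locations, and session controls, which is presumably why the task gets an agent rather than a script.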
This isn’t just assistive tooling. It’s production-grade automation. And by including Security Copilot with Microsoft 365 E5, the company puts these AI-driven agents into some of the largest enterprises in the world overnight — a distribution advantage that pure-play AI security startups cannot match.
Microsoft Secures the House It Built
Ignite’s forward-looking vision came with an uncomfortable backdrop: the fundamentals are still shaky.
MFA adoption across Entra remains dismal. “Bad guys, they don’t break in — they sign in,” Sachs said. Less than half of work accounts have MFA enabled. Only about 20 percent of administrator accounts — the ones attackers salivate over — are protected. It’s a staggering weakness for a platform that claims to run on Zero Trust.
Passkeys help. They fix usability. They reduce help-desk burden. They streamline enrollment. But they don’t erase the irony: Microsoft is building identity governance for AI while human identities remain exposed.
Conditional access highlights another contradiction. It’s powerful, but it’s sprawling. Microsoft now needs an AI to help customers understand policies they created over the past decade. The optimization agent is smart. Its existence is telling.
Device hygiene follows the same pattern. Microsoft spent years telling customers to reconcile Entra and Intune device records manually. Most never did. Now an AI agent does it for them.
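The reconciliation task the offboarding agent automates is, underneath, set arithmetic over two inventories: directory records versus management records, plus a staleness window. A hypothetical version, with an assumed 90-day cutoff:

```python
from datetime import datetime, timedelta, timezone

def offboarding_candidates(entra_last_seen: dict[str, datetime],
                           intune_ids: set[str],
                           stale_after: timedelta = timedelta(days=90)) -> set[str]:
    """Flag directory devices that have dropped out of management or gone
    quiet past the staleness window. Names and cutoff are illustrative."""
    now = datetime.now(timezone.utc)
    return {
        device_id
        for device_id, last_seen in entra_last_seen.items()
        if device_id not in intune_ids or now - last_seen > stale_after
    }
```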
Leader or Follower? The Answer Is Complicated
Many Ignite ideas mirror trends already playing out across the identity and security landscape. Identity threat detection looks similar to earlier UEBA and attack-path analytics. Governance for non-human identities echoes long-standing PAM and IGA controls. AI-aware network filtering now competes with secure web gateway vendors.
But Microsoft’s strength isn’t in inventing every idea. It’s in integrating them. Entra Agent ID, the Agent Registry, Security Copilot workflows, prompt-injection inspection, and Zero Trust enforcement at the network edge form a coherent architecture aimed squarely at a future where agents outnumber employees. No vendor spans identity, devices, data, and networking at Microsoft’s scale.
That consolidation, not any single feature, may be the most significant competitive move of the year.
The Bottom Line: Identity Becomes the AI Safety Fabric
Ignite 2025 made one thing plain. If AI is going to run the work, identity will govern the AI. Microsoft wants Entra at the center of that model. The vision is bold. The engineering is real. And the stakes are enormous.
Whether enterprises are ready for that shift — or ready to let Microsoft define it — remains an open question.