RSAC 2026 opens Monday in San Francisco, and for the first time in years, AI is not just a session track. It is the conference. Officially, the theme is “The Power of Community,” but the keynotes, the breakout sessions and the mood on the Moscone Center floor all tell a different story. This week belongs to artificial intelligence.
Roughly 40% of RSAC’s 450-plus sessions are AI-focused. The other major tracks — identity, cloud security, threat intelligence, operational technology and industrial control systems — have AI woven through them, too. The conversation has moved decisively from “should we use AI?” to “how do we govern systems where AI is already making decisions?”
“AI and agents are going to be major themes at the conference this March,” RSAC Executive Chairman Hugh Thompson wrote ahead of the event. “Managing AI risk is now a fundamental requirement for the entire C-suite and the board.”
His concern is specific. As enterprises chain multiple AI agents together to automate critical workflows, small errors introduced at each handoff compound across the chain. “It’s the AI version of a game of telephone,” Thompson wrote, “where the message changes slightly with each person in the chain.” Nobody has figured out how to fix that yet. That’s why everyone is here.
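The telephone-game math is easy to sketch. Assuming (purely for illustration — the article gives no figures) that each agent handoff succeeds 98% of the time, end-to-end reliability falls off quickly as the chain grows:

```python
# Back-of-envelope sketch of compounding handoff error in a chain of AI
# agents. The 98% per-step reliability is an assumed figure for
# illustration, not a number reported at the conference.

def chain_reliability(per_step: float, steps: int) -> float:
    """Probability that every handoff in the chain succeeds."""
    return per_step ** steps

for steps in (1, 5, 10, 20):
    rate = chain_reliability(0.98, steps)
    print(f"{steps:2d} handoffs at 98% each -> {rate:.1%} end-to-end")
```

At ten handoffs, a seemingly reliable 98%-per-step pipeline completes correctly only about 82% of the time; at twenty, roughly a third of runs carry an error somewhere in the chain.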
Feds Ghost RSAC
The biggest meta-story of RSAC 2026 is the one that’s not on the agenda: CISA, the FBI and the NSA won’t be there. For 35 years, RSA Conference has been the place where the public and private sectors negotiate a shared picture of reality. Federal agencies brought classified-adjacent threat intelligence into a public setting. Private researchers brought disclosures the government couldn’t make. That arrangement worked, imperfectly but reliably, for a long time.
The agencies withdrew their speakers in the weeks before the conference, following the appointment of former CISA Director Jen Easterly as CEO of RSAC. Among the casualties: a session on the hunt for China’s Typhoon actors, billed as a behind-the-scenes look at FBI, NSA and private-industry joint operations to disrupt Beijing’s ongoing espionage campaigns inside U.S. telecom networks. CISA’s stated reason was “good stewardship of taxpayer dollars.” The timing — one week after Easterly’s appointment — said something else entirely.
The federal no-show could hardly be worse timed. RSAC opens just weeks after defenders began scrambling to assess active Iranian cyber operations targeting U.S. and allied firms — spillover from the broader conflict that has made cyberattacks a tool of asymmetric retaliation, not just espionage. One of those attacks hit U.S. defense medical contractor Stryker, a reminder that companies with no obvious connection to a geopolitical conflict can still end up on the receiving end of one.
The week also arrives in the shadow of the Pentagon’s designation of Anthropic as a supply-chain risk — after the company refused to strip guardrails against autonomous weapons use and domestic surveillance from its Claude models. It is the first time an American AI vendor has been labeled a procurement risk not because it was compromised, but because it declined to compromise its own safety standards. For CISOs trying to govern AI adoption inside their organizations, the question it raises is uncomfortable: what happens when AI ethics and IT procurement become a national security flashpoint?
Exhibitors Point the Way
The political subtext aside, the vendor floor — 600 exhibitors strong — shows where the money and the urgency have already landed.
Microsoft arrived with the most aggressive agentic security push of any exhibitor. The company noted that 80% of Fortune 500 companies are already running AI agents — meaning the governance problem isn’t theoretical. It is running in production, today, at most of the organizations in this building. Its RSAC announcements reflect that urgency: Entra Agent ID assigns unique, manageable identities to AI agents; Agent 365 gives security teams a control plane to observe and govern those agents at scale (generally available May 1); an expanded Security Store with an AI-guided advisor is now embedded directly in Entra; and Entra Backup and Recovery adds the ability to restore directory objects to a known-good state after compromise. It’s a full-stack argument that identity is the only perimeter that matters when the workforce includes machines.
Google is making a similar argument with different emphasis. Its headline product is the SecOps Triage and Investigation Agent, an agentic AI that automates security investigations end-to-end and is available in a no-cost trial from April 1 through June 30. At the booth, Google is running live demos of its Agentic SOC platform, showing how Mandiant threat intelligence and automated response can be woven into a single workflow — and making the case that detecting threats is no longer enough. Defenders, the company’s keynote messaging argues, need to impose real costs on attackers.
Across the floor, the themes are consistent: agentic SOC automation, AI-assisted malware analysis, post-quantum identity protection and resilience operations. The phrase on nearly every booth briefing is “agentic SOC” — a security operations center where AI agents don’t just flag problems, they close tickets. Six of the 10 Innovation Sandbox finalists are explicitly about securing AI agents or governing non-human identities. Past Sandbox winners include Wiz, SentinelOne and Axonius. The class of 2026 is telling you something.
Research, Keynotes and Sessions
Despite the federal withdrawal, the conference lineup is dense with national-security pedigree. A Tuesday session — “Inside Offensive Cyber: Lessons from Four NSA Directors” — assembles Gen. Keith Alexander, Gen. Paul M. Nakasone, Gen. Tim Haugh and Adm. Mike Rogers, four former directors of the NSA and commanders of U.S. Cyber Command, for a rare public discussion on the future of offensive operations and the ethics of private-sector “hack back.” The irony is hard to miss: four former directors of the agency whose current leadership won’t send a single delegate to the same building.
The keynote roster is characteristically eclectic. Former New Zealand Prime Minister Jacinda Ardern, venture capitalist Ben Horowitz, author Michael Lewis and MythBusters host Adam Savage share the main stage with Google Threat Intelligence’s Sandra Joyce, CrowdStrike’s George Kurtz and Ballistic Ventures’ Kevin Mandia. Mandia co-presents Thursday’s keynote with Nicole Perlroth, whose reporting on the global cyberweapons market remains the essential backstory for much of what’s being discussed at the conference this week. Their joint session asks the pointed question of whether the “AI honeymoon” in cybersecurity is already ending.
Several sessions are likely to set the defensive agenda for the year ahead. The Cryptographers’ Panel — featuring public-key cryptography pioneer Whitfield Diffie, RSA co-inventor Adi Shamir and Harvard’s Cynthia Dwork — will almost certainly surface the post-quantum timeline, at a moment when “harvest now, decrypt later” attacks have moved from theoretical to operational concern. And the SANS Institute’s annual “Five Most Dangerous New Attack Techniques” briefing — this year focused on agentic impersonation — functions as the field’s closest thing to an authoritative priority list. What SANS flags on Wednesday, CISOs will be budgeting against by Friday.
The Empty Chair
That 35-year handshake between government and industry on a shared threat picture is the conference’s founding bargain. This week, industry shows up alone — surrounded by active cyberwar spillover, production AI agents nobody fully governs yet, and Chinese espionage campaigns that are, by the FBI’s own last accounting, still very much ongoing.
While the official branding leans into community, the actual mood leans toward control: who has it, who is losing it, and whether the industry can rebuild enough trust to govern what it has already unleashed.
Image Courtesy: RSAC