Two high-profile incidents. Two completely different companies. One identical root cause.

In April 2026, Vercel - the cloud platform powering millions of Next.js deployments - confirmed that attackers gained unauthorized access to its internal systems. A compromised Context.ai employee triggered a chain of events that reached a Vercel employee's Workspace account, leading to a breach of Vercel's internal database - now being sold on BreachForums for $2 million.

Months earlier, in October 2025, researchers discovered that OpenAI's newly launched ChatGPT Atlas browser stored unencrypted OAuth tokens in a SQLite database with overly permissive file settings on macOS - a flaw, found just days after the browser's October 21, 2025 launch, that sidesteps the at-rest encryption practices standard in major browsers.

Two incidents. Two vectors. The same underlying failure: OAuth tokens and AI agent credentials operating completely outside identity governance.

Security teams spent years hardening authentication. MFA, SSO, zero trust - the front door is locked. But these breaches didn't come through the front door. They came through OAuth grants nobody was watching, AI tools nobody had approved, and credentials nobody ever governed. That's a different problem entirely. It requires a different solution.

The Vercel Breach: An OAuth Grant Nobody Reviewed

Here's exactly what happened - and why your incident response team should be reading this.

A Vercel employee had connected an AI app - specifically a deprecated consumer-grade "AI Office Suite" product from Context.ai - into their Google Workspace tenant. Vercel wasn't even a registered Context.ai customer. This was most likely a self-service trial that got integrated, lightly used, and forgotten - adding an invisible node to the organization's attack surface.

1. 🔗 Employee Grants OAuth Access. A Vercel employee installs a third-party AI tool (Context.ai) and grants it broad OAuth access to their corporate Google Workspace account. No approval required. No record kept.

2. 🎯 Third Party Gets Compromised. A Context.ai employee downloads malicious software. Their credentials are stolen. The attacker now controls Context.ai's OAuth application - and the tokens it holds for all connected users.

3. 🔓 Attacker Inherits Trust. Using the stolen OAuth token, the attacker logs into the Vercel employee's Google Workspace as a trusted, authenticated session. No alert fires. No MFA challenge. SSO sees a valid login.

4. ⬆️ Lateral Movement & Escalation. Inside Vercel's environment, the attacker enumerates environment variables. API keys, NPM tokens, GitHub tokens, and database credentials - all accessible because they were never marked 'sensitive.'

5. 💰 Ransom Demand & Disclosure. The attacker exfiltrates Vercel's internal database and lists it for sale on BreachForums for $2 million. Mandiant is engaged. Law enforcement is notified. The breach becomes public.

When Context.ai was compromised - allegedly the result of an infostealer infection from an employee searching for Roblox cheats - the attacker used OAuth tokens stored in Context.ai's platform to access downstream customer accounts. That access included a Vercel employee's Google Workspace account. This user had significant access to data and secrets across Vercel's systems: internal dashboards, employee records, API keys, NPM tokens, and GitHub tokens. The attacker exfiltrated all of it, holding Vercel to ransom for $2 million.

Notice what's missing from this entire chain: a vulnerability. A zero-day. A brute-force attack. None of them was needed.

This wasn't malware. It wasn't a zero-day. It was trusted access doing exactly what it was designed to do. An OAuth grant, issued once, never reviewed, carried a Vercel employee's full Workspace permissions straight into the attacker's hands.

Important

The Vercel breach was not a hacking problem. It was a governance problem. The attacker didn't exploit a zero-day or break encryption. They used a legitimate OAuth token - one that an employee granted, no one tracked, and nobody ever revoked - to walk in as a fully trusted, authenticated user.

From Vercel's perspective, this attack could have been prevented had employees been blocked from adding new OAuth integrations without admin approval - a toggle in their Google admin panel. Or if the integration had been flagged in a routine audit and removed. Two controls. Both identity governance controls. Neither an authentication control.
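The audit half of that pair can be approximated in code. Below is a minimal sketch in Python: the token-record shape loosely mirrors what the Google Admin SDK Directory API's tokens.list endpoint returns, but the field values, client IDs, and approved list here are illustrative assumptions, not real data.

```python
# Sketch: flag OAuth grants from unapproved third-party clients for revocation.
# Record shape loosely mirrors Google Admin SDK Directory API tokens.list
# output; all client IDs and values below are illustrative, not real.

APPROVED_CLIENT_IDS = {"corp-idp.example.com", "ticketing.example.com"}

def revocation_candidates(tokens):
    """Return grants whose OAuth client is not on the approved list."""
    return [t for t in tokens if t["clientId"] not in APPROVED_CLIENT_IDS]

tokens = [
    {"userKey": "alice@example.com", "clientId": "corp-idp.example.com",
     "displayText": "Corporate IdP", "scopes": ["openid", "email"]},
    {"userKey": "alice@example.com", "clientId": "ai-suite.example.net",
     "displayText": "AI Office Suite (trial)",
     "scopes": ["https://www.googleapis.com/auth/drive"]},
]

for grant in revocation_candidates(tokens):
    # In a real audit you would revoke the grant via the Admin SDK
    # (tokens.delete) here; this sketch only reports the finding.
    print(f"REVOKE: {grant['displayText']} ({grant['clientId']}) "
          f"for {grant['userKey']}")
```

Run quarterly, this catches forgotten trials; run continuously, it closes the window the Vercel attacker lived in.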

The OpenAI Atlas Problem: When AI Agents Store Credentials Like It's 2005

The Vercel breach illustrates OAuth sprawl created by employees. The OpenAI Atlas incident shows the other half of the problem: what happens when AI tools themselves mishandle the credentials they've been granted.

A security concern surfaced around ChatGPT Atlas, OpenAI's ChatGPT-powered browser for macOS. Researchers revealed that OAuth tokens - used to authenticate users - were stored in plain text inside a local SQLite database. This flaw could allow attackers or malicious local processes to hijack user accounts and access private conversations, API data, and linked services.

Unlike most modern browsers, Atlas did not encrypt sensitive authentication data. It stored session tokens and user credentials in unprotected format, giving anyone with local system access the ability to extract them.

Here's the governance dimension most post-mortems skip: it doesn't matter how good your MFA policy is if the tokens it generates sit unencrypted in a file any process on the machine can read. The authentication step worked perfectly. The credential lifecycle - how that token is stored, scoped, rotated, and revoked - was not governed at all.

Unencrypted tokens could let attackers impersonate users, accessing not just ChatGPT conversations but potentially linked services if scopes overlap - echoing past OAuth leakage incidents in AI tools.

This is the new-species-of-identity problem in its most concrete form. AI agents, browsers, and agentic tools are accumulating credentials across your stack - OAuth tokens, session cookies, API keys - and almost none are governed with the same rigor as human identities. They have no lifecycle. They don't offboard. They don't get reviewed.

This Isn't Two Isolated Incidents - It's a Pattern

Before you write these off as edge cases, consider the broader trend.

In 2025, Scattered Lapsus$ Hunters launched OAuth-driven supply chain attacks against Salesforce and Google Workspace tenants after breaching Salesloft. Over 1,000 organizations were impacted - including Google, Cloudflare, Rubrik, Elastic, Proofpoint, JFrog, Zscaler, Tenable, Palo Alto Networks, CyberArk, BeyondTrust, Qualys, and many more - with over 1.5 billion records stolen.

The 2026 Identity Exposure Report captured 18.1 million exposed API keys and tokens in 2025 alone, with non-human identities (NHIs) making up a growing share of that attack surface.

Grip Security's March 2026 analysis of 23,000 SaaS environments found a 490% year-over-year increase in AI-related attacks.

The 2025 Verizon DBIR reported that 54% of all ransomware attacks traced back to infostealer-enabled credential theft - and that 46% of systems with compromised corporate credentials were non-managed devices.

The breaches of 2025 confirmed a hard truth many security leaders have long known but few have fully acted on: the greatest vulnerabilities aren't unpatched systems or zero-day exploits - they're the trusted logins, tokens, and integrations we rely on every day. Attackers no longer "break in" the way they once did. They use the same credentials that power cloud workloads and third-party APIs to move through environments like insiders - and too often, they do so for months before anyone notices.

The cadence is accelerating. The common thread across every incident is the same: OAuth tokens, API credentials, and AI agent access that was never governed - issued once, never reviewed, never revoked.

How Big Is Your Shadow Identity Exposure?

Before you can fix the problem, you need to understand its scale in your own environment. A rough model is enough: headcount, multiplied by the average number of OAuth grants per employee, discounted by how many of those your IdP can actually see.

Most security teams are surprised by the result. The reason: doing this across every SaaS app is considerably harder than it looks. You need a comprehensive, up-to-date inventory. You need to be an app admin for every app. And the app itself needs to give you control to restrict and remove OAuth grants on behalf of users in your tenant.

This is precisely why most organizations undercount their OAuth grants by a factor of three to five - and why most post-mortems on breaches like Vercel's find the exploited credential had been sitting unreviewed for months.
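A back-of-envelope version of that estimate can be sketched in a few lines. The per-employee multipliers below are illustrative assumptions, not published benchmarks - tune them to survey data from your own environment.

```python
# Back-of-envelope shadow-identity estimate. The default multipliers are
# illustrative assumptions, not published benchmarks -- replace them with
# measurements from your own environment.

def shadow_identity_estimate(employees,
                             grants_per_employee=12,
                             pct_outside_idp=0.6,
                             pct_unreviewed_12mo=0.8):
    """Estimate total OAuth grants, how many escape IdP visibility,
    and how many have likely never been reviewed."""
    total_grants = employees * grants_per_employee
    shadow_grants = int(total_grants * pct_outside_idp)
    unreviewed = int(shadow_grants * pct_unreviewed_12mo)
    return {
        "total_oauth_grants": total_grants,
        "outside_idp_visibility": shadow_grants,
        "likely_never_reviewed": unreviewed,
    }

print(shadow_identity_estimate(employees=500))
# Under these assumptions, a 500-person org carries thousands of grants,
# most of them invisible to the IdP and never reviewed.
```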

The Real Diagnosis: This Is a Governance Failure, Not an Authentication Failure

Here's the distinction too many security teams miss - and most breach coverage glosses over.

Authentication answers: Is this the right person or system?

Governance answers: Should this person or system have this access, right now, with these permissions, to these resources?

Vercel's SSO worked. OpenAI's authentication stack worked. The OAuth tokens in both cases were legitimately issued to legitimately authenticated sessions. The problem wasn't that authentication failed. The problem was that nobody governed what happened after authentication.

What SSO/MFA Covers | What SSO/MFA Misses
Human logins to known IdP-connected apps | OAuth grants made outside IdP control
MFA challenge at the authentication layer | Third-party tokens that inherit user sessions
Password theft prevention | Long-lived tokens that never expire
Known app access events | Shadow integrations and unregistered AI tools
User deprovisioning from the IdP | Revocation of outstanding OAuth grants across all apps
Identity at the 'front door' | Credentials at the 'back door': API keys, service tokens, AI agent credentials

As employees connect new applications through OAuth logins and third-party integrations without centralized oversight, unmanaged tools quietly expand the attack surface - exposing sensitive data and creating compliance blind spots.

This gap isn't new - it's just dramatically worse now that every employee is adopting AI tools at speed. The AI scramble is a force multiplier. Every new AI integration is a new OAuth grant. Every new grant is a new shadow identity. And every shadow identity is a potential Vercel.

The three governance failures behind these breaches

1. No OAuth grant inventory
Neither Vercel nor the average enterprise running AI tools like Atlas has a live, centralized inventory of every OAuth grant across its stack. Without that inventory, you can't review what you can't see. You can't revoke what you don't know exists.

2. No lifecycle management for non-human identities
Agentic sprawl - the uncontrolled proliferation of AI agents, their associated credentials, and their accumulated access rights across an enterprise - means these credentials have no joiner-mover-leaver process. They don't offboard when a project ends. They accumulate, and they become attack surface.

3. No continuous access reviews for third-party integrations
An all-too-common tale: a SaaS app trialled by a single employee, lightly used, integrated with core app tenants, and forgotten - an invisible node added to the organization's attack surface. Static, periodic access reviews - run quarterly if you're lucky - don't catch the OAuth grant issued yesterday by someone in engineering who wanted to try a new tool.
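The third failure is mechanically checkable. A minimal sketch in Python (the record shape is hypothetical) of a review that flags grants never reviewed at all, or not reviewed within a rolling window:

```python
from datetime import date, timedelta

# Sketch of a continuous-review check: flag OAuth grants that have never
# been reviewed, or whose last review falls outside a rolling window.
# The grant-record shape is hypothetical.

def stale_grants(grants, today, max_age=timedelta(days=30)):
    flagged = []
    for g in grants:
        reviewed = g.get("last_reviewed")
        if reviewed is None or today - reviewed > max_age:
            flagged.append(g["client"])
    return flagged

grants = [
    {"client": "corp-idp", "last_reviewed": date(2026, 4, 1)},
    {"client": "ai-office-suite-trial", "last_reviewed": None},   # never reviewed
    {"client": "old-crm-plugin", "last_reviewed": date(2025, 10, 1)},
]

print(stale_grants(grants, today=date(2026, 4, 10)))
# flags the never-reviewed trial app and the long-stale plugin
```

A quarterly review is this check run four times a year; continuous governance is the same check run on every grant event.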

What Actually Fixes This: Permission-Level Governance, Not Prompt-Level Security

The instinct after a breach like Vercel's is to layer on more authentication controls - stricter MFA policies, CASB rules, a new SIEM alert. These aren't wrong. But they're fighting the last war.

The fix is governance at the permission level, not security at the prompt level. Specifically:

1. Universal connector coverage - including apps without SCIM or APIs

Most "modern IGA" tools claim to solve the OAuth problem, then quietly only cover the SCIM-enabled apps in your stack. That's typically 20-40% of your actual environment. The rest - the long-tail SaaS tools, AI integrations, internal apps - stay ungoverned. That's exactly where Vercel-style OAuth grants live.

Complete identity governance means connecting to every app in your stack - whether it supports SCIM, an API, or neither - and maintaining a live, governed view of every credential, token, and integration against every user and agent. No SCIM tax. No coverage gaps.

2. Fine-grained control beyond group membership

SCIM-level governance tells you a user has access to GitHub. It doesn't tell you which repositories, which environments, or which secrets that user's OAuth grants can read. Fine-grained control means going deeper: channel-level, repository-level, project-level, environment-level permissions - the kind of granularity that would have flagged "this OAuth app can read all Google Drive files for this user" as a risk before Context.ai was ever compromised.
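That pre-grant risk flag can be approximated by classifying requested scopes before approval. In the sketch below, the Google scope URIs are real identifiers, but the risk tiers and decision labels are illustrative policy choices, not a standard.

```python
# Classify requested OAuth scopes by blast radius before a grant is
# approved. The Google scope URIs are real identifiers; the risk tiers
# and decision labels are illustrative policy choices.

BROAD_SCOPES = {
    "https://www.googleapis.com/auth/drive": "read/write ALL Drive files",
    "https://mail.google.com/": "full Gmail access",
    "https://www.googleapis.com/auth/admin.directory.user": "manage all users",
}

def grant_risk(requested_scopes):
    """Return (decision, findings) for a proposed OAuth grant."""
    findings = [BROAD_SCOPES[s] for s in requested_scopes if s in BROAD_SCOPES]
    return ("BLOCK_PENDING_REVIEW", findings) if findings else ("ALLOW", [])

decision, why = grant_risk([
    "openid",
    "https://www.googleapis.com/auth/drive",  # what a trial AI app might request
])
print(decision, why)
```

A check this simple, applied at grant time, is what turns "this app can read all Drive files" from a post-breach finding into a pre-grant decision.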

3. Lifecycle governance for non-human identities

Non-human identities - service accounts, API keys, workload identities, certificates, OAuth apps, machine-to-machine access - now outnumber humans in most cloud-native organizations. The biggest risks: unmanaged lifecycle, overprivileged access, and exposed credentials.

AI agents, OAuth apps, service accounts, and API tokens need the same lifecycle treatment as human users: provisioned with least privilege, reviewed continuously, deprovisioned immediately when no longer needed. Right now, almost none of them get it.

4. Continuous access reviews - not quarterly rubber stamps

The Vercel breach exploited a grant that sat unreviewed long enough for Context.ai to be compromised and for the attacker to act. A continuous governance model - where access is validated in real time against policy, not at the next quarterly review - closes that window dramatically.

Continuous governance doesn't mean more work for your team. It means agentic workflows (AI-driven, autonomous processes) that evaluate access in real time, flag anomalies, and trigger revocation without a human opening a ticket.

This is the governance model that SSO tools were never designed to deliver - and that legacy IGA vendors made too complicated and too slow for fast-moving organizations.

What CISOs Should Take Away

If you're a security leader reading this, here are the questions you should be asking about your own environment this week:

  • Do you have a live inventory of every OAuth grant across your stack? Not just the ones your IdP knows about - the ones issued by individual employees to third-party tools they tried once.
  • Do your AI agent credentials have a lifecycle? Are they provisioned with least privilege? Are they reviewed? What happens when the project ends or the tool is deprecated?
  • What percentage of your app stack sits outside your governance coverage? If the answer is more than 30%, you have a Vercel-sized gap.
  • When was the last time you reviewed third-party OAuth integrations - not just in Google Workspace, but across your entire SaaS stack?

Identity, once considered an administrative layer, has become the defining battleground of enterprise security. The question heading into 2026 isn't whether attackers will keep targeting identities - it's whether the industry will finally treat identity as the foundation of resilience rather than a subset of access control.

The Vercel and OpenAI Atlas incidents aren't warnings about AI tools specifically. They're warnings about what happens when identity governance doesn't keep pace with the speed at which identities - human and non-human - are created, connected, and forgotten.

Your authentication stack is probably fine. Your governance layer is where the exposure lives.

The Takeaway

The Vercel breach was a $2 million ransom built on a forgotten OAuth grant. The OpenAI Atlas vulnerability was an authentication token exposed because nobody governed how AI agents store credentials. Both were governance failures dressed up as security incidents.

The fix isn't better passwords or stricter MFA. It's complete identity governance - a live, continuous, fine-grained view of every identity (human and non-human), every OAuth grant, every AI agent credential, and every app in your stack, including the ones that don't support SCIM or APIs.

That's not what legacy IGA vendors deliver. It's not what SSO tools were built for. It's what a purpose-built, universal governance layer provides - continuously, in real time, without a 12-month implementation project.