Void Vanguard

Risk & Failure Modes

Your Biggest AI Risk Is Likely Identity Governance

Vercel's April 2026 breach was not an AI vendor incident. It was an identity governance failure with an AI tool as the attack vector. Mid-market financial institutions share the exact same failure surface.

Failure Mode

April 24, 2026

At 2pm on a Tuesday, an employee at your institution signs up for an AI productivity tool using the "Sign in with Google" button. The consent dialog asks for read access to their Workspace. They click approve. Nobody else at the institution ever sees that approval. Not IT. Not security. Not governance. The OAuth grant is now a piece of your attack surface that does not appear on any inventory.

On April 18, 2026, Vercel published the post-mortem for a security incident in which customer credentials were exposed through exactly this kind of grant. The attacker did not breach Vercel directly. They compromised a third-party AI tool called Context.ai, whose Google Workspace OAuth grant authorized it to operate inside a Vercel employee's Workspace account. From that grant, the attacker pivoted into the employee's session, into Vercel's internal environments, and into customer environment variables.

The AI governance community will read this as an AI vendor incident. The identity governance community will read this as an OAuth supply chain compromise. Both readings are incomplete. For a federally regulated mid-market institution, the right reading is that AI governance and identity governance share a failure surface that neither program is currently covering on its own.

The Failure Surface Nobody Inventories

Your employees have already signed in to dozens of AI tools using Workspace or Microsoft 365. Most of those OAuth grants authorized read access to mail, documents, calendars, or contacts. Most are still active. Most have never been reviewed by anyone with the authority to revoke them.

The V³ Domain Assessment assigns this to D-07: Third-Party and Vendor AI. But D-07 is not a pure AI problem. It sits at the joint between AI governance and identity governance, and the Vercel incident is a direct illustration of what the joint looks like when it is ungoverned.

Three mechanism gaps produced the incident. Each of them shows up at mid-market institutions right now, in the environments most CISOs and CROs already manage.

No inventory of AI OAuth grants. The institution either has an AI inventory or it does not. Even when it does, the inventory typically captures AI deployed as a product: the chatbot, the credit-scoring model, the customer service agent. It rarely captures the AI tool that an employee authenticated into using their work identity. The OAuth grant is the mechanism through which the AI tool operates. Without an inventory of those grants, the Model vs. Non-Model Determination cannot run, because the institution does not know what to classify.
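The inventory gap is mechanical to close once the grant data is in hand. A minimal sketch, assuming a per-user grant export shaped loosely on the Google Admin SDK Directory API `tokens.list` response; the field names, app IDs, and email domains below are illustrative, not a real export:

```python
from collections import defaultdict

# Hypothetical per-user OAuth grant records, shaped loosely on the
# Google Admin SDK Directory API tokens.list response. All values
# are illustrative.
grants = [
    {"user": "alice@bank.example", "clientId": "ctx-123.apps.example",
     "displayText": "Context.ai",
     "scopes": ["https://www.googleapis.com/auth/drive.readonly"]},
    {"user": "bob@bank.example", "clientId": "ctx-123.apps.example",
     "displayText": "Context.ai",
     "scopes": ["https://www.googleapis.com/auth/gmail.readonly"]},
    {"user": "carol@bank.example", "clientId": "cal-999.apps.example",
     "displayText": "Calendar Helper",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]

def build_inventory(grants):
    """Collapse per-user grants into one row per third-party app:
    the inventory the Model vs. Non-Model Determination needs as input."""
    inventory = defaultdict(lambda: {"name": "", "users": set(), "scopes": set()})
    for g in grants:
        row = inventory[g["clientId"]]
        row["name"] = g["displayText"]
        row["users"].add(g["user"])
        row["scopes"].update(g["scopes"])
    return dict(inventory)

inv = build_inventory(grants)
for client_id, row in inv.items():
    print(f'{row["name"]}: {len(row["users"])} user(s), scopes={sorted(row["scopes"])}')
```

The aggregation is deliberately per app, not per user: the governed object is the third-party application and the union of scopes it holds across the workforce.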

No decision authority for the grant. Under the Decision Authority Matrix, a third-party tool that reads Workspace data at enterprise scale is at least a high-tier governed action. A Workspace admin or security lead is the required authority. In practice, the authority is the individual employee, clicking through a consent dialog. The tier-to-authority mapping is not broken. It was never built.
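The tier-to-authority mapping that was never built can be expressed as a small routing function. The scope patterns, tier logic, and authority names below are assumptions for illustration, not the actual Decision Authority Matrix:

```python
# Illustrative sketch of routing an OAuth consent decision to the
# required approval authority. Scope keywords and tier labels are
# assumptions, not the actual Decision Authority Matrix.
HIGH_RISK_SCOPES = ("gmail", "drive", "admin", "contacts")

def required_authority(scopes):
    """Map a grant's OAuth scopes to the authority whose approval it needs."""
    if any(key in s for s in scopes for key in HIGH_RISK_SCOPES):
        return "workspace-admin-or-security-lead"  # high-tier governed action
    if any("readonly" not in s for s in scopes):
        return "security-review"                   # write access anywhere
    return "employee-self-service"                 # low-risk read-only

# Read access to Drive routes above the individual employee.
print(required_authority(["https://www.googleapis.com/auth/drive.readonly"]))
# → workspace-admin-or-security-lead
```

The point of the sketch is the shape of the control: the consent decision is computed from the scope, not left to whoever happens to see the dialog.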

No evidence the grant operates within scope. The Governance Spine sequence runs Appetite, Strategy, Controls, Evidence, Reporting. An OAuth grant is a control surface. It defines what the AI tool can do and for how long. The evidence question is whether the institution can produce a log, on demand, of every AI-tool OAuth grant currently active in Workspace or Microsoft 365, the scope of each grant, the employee who authorized it, and the date it was last reviewed. Most mid-market institutions cannot produce this log.
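The evidence question reduces to whether a log like the following can be produced on demand. A hedged sketch: the grant records, the 90-day review window, and the field names are illustrative, and a real pipeline would pull the records from the Workspace or Microsoft 365 admin APIs rather than a literal list:

```python
import csv
import io
from datetime import date

# Illustrative grant records; a real pipeline would pull these from
# the Workspace or Microsoft 365 admin APIs.
grants = [
    {"app": "Context.ai", "employee": "alice@bank.example",
     "scopes": "drive.readonly", "last_reviewed": date(2025, 9, 1)},
    {"app": "Meeting Notes AI", "employee": "bob@bank.example",
     "scopes": "calendar.readonly", "last_reviewed": None},  # never reviewed
]

def evidence_log(grants, today, max_age_days=90):
    """Emit one CSV row per grant: app, employee, scope, last review,
    and a review-status flag an examiner can read directly."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["app", "employee", "scopes", "last_reviewed", "status"])
    for g in grants:
        reviewed = g["last_reviewed"]
        if reviewed is None:
            status = "NEVER_REVIEWED"
        elif (today - reviewed).days > max_age_days:
            status = "REVIEW_OVERDUE"
        else:
            status = "OK"
        writer.writerow([g["app"], g["employee"], g["scopes"], reviewed or "", status])
    return buf.getvalue()

print(evidence_log(grants, today=date(2026, 4, 24)))
```

If a script this small cannot be run against real data, the gap is not tooling; it is that the underlying grant inventory does not exist.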

The Examiner's Frame

The NIST AI Risk Management Framework calls this domain Map: know your AI inventory, know its scope, know its risk classification. Void Vanguard operationalizes NIST Map through D-07 and through the Decision Authority Matrix. The Vercel incident is the scenario that separates institutions that have done the operationalization from institutions that have only documented it.

OCC Bulletin 2026-13 explicitly excludes generative AI and agentic AI from the revised interagency model risk guidance. That does not remove these tools from examiner scope. Footnote 3 states that institutions must use their own governance architecture for AI not covered by the guidance, and the general Safety and Soundness frame still applies. Governing an OAuth grant to an AI productivity tool is governance architecture. The absence of that governance is a finding waiting to be written.

What the Vercel Post-Mortem Actually Shows

Consider the specific mechanism that failed. An employee authorized Context.ai to operate in a Workspace account. The OAuth App ID was present in Google's OAuth app inventory from the moment the grant was authorized. Any Workspace admin could have listed it. Any security review process that periodically certified OAuth grants would have surfaced it.

The certification process that exists for employee identities rarely extends to machine identities and OAuth applications. Joiner, mover, leaver. Quarterly access reviews. Privileged session recording. Those disciplines stop at the human perimeter. In over a decade of running IAM and security governance at institutional scale, extending the quarterly certification campaign to OAuth apps was always the later-phase maturity step. It is the step most mid-market institutions still have not taken.

At Vercel, the outcome was limited to customer credentials in environment variables. At a mid-market financial institution, the outcome is customer PII, loan decisioning data, or core banking credentials. The regulatory conversation is no longer a security bulletin. It is a Safety and Soundness conversation.

The One Question the Examiner Will Ask

If your institution cannot produce, on demand, a list of every third-party AI OAuth grant currently active in Workspace or Microsoft 365, with scope, authorizing employee, and last review date, you have a gap an examiner will find before you do.

Mark Vanis, Founder & Principal Advisor
