On April 17, 2026, the OCC, Federal Reserve, and FDIC jointly released OCC Bulletin 2026-13: revised interagency guidance on Model Risk Management. It rescinds four prior issuances, including OCC 2011-12, the bulletin that had governed model risk management at national banks for fifteen years.
Two things in the revised guidance matter for anyone running AI governance at a mid-market institution.
The Model Definition Narrowed
The revised guidance defines a model as "a complex quantitative method, system, or approach that applies statistical, economic, or financial theories to process input data into quantitative estimates." Simple arithmetic calculations and deterministic rule-based processes are explicitly excluded. That is narrower than the prior OCC 2011-12 language: the word "complex" and the explicit exclusions both shrink the category. If your institution has been performing Model vs. Non-Model Determinations for your AI tools, the classification line just shifted.
Importantly, the guidance clarifies that non-generative, non-agentic AI models are still covered. A fraud detection model, a credit scoring neural network, a statistical risk model: these remain within the scope of the revised guidance. The separate scope carve-out, discussed below, applies specifically to generative AI and agentic AI.
If your institution has NOT been performing Model vs. Non-Model Determinations, the urgency to start just increased — because the classification boundaries are now more nuanced, not less.
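To make the new classification line concrete, here is a minimal sketch, in Python, of how the revised definition's tests might be encoded in an intake workflow. The AiToolProfile fields, the Determination categories, and the ordering of the checks are all hypothetical assumptions of mine; the guidance prescribes no such schema.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Determination(Enum):
    MODEL = auto()            # in scope for the revised MRM guidance
    NON_MODEL = auto()        # excluded by the definition itself
    OUT_OF_SCOPE_AI = auto()  # generative/agentic: out of this guidance's
                              # scope, but still needs governance (footnote 3)


@dataclass
class AiToolProfile:
    """Hypothetical intake record for a Model vs. Non-Model Determination."""
    name: str
    is_generative_or_agentic: bool     # footnote 3 scope carve-out
    is_simple_arithmetic: bool         # definitional exclusion
    is_deterministic_rules: bool       # definitional exclusion
    applies_quantitative_theory: bool  # statistical/economic/financial theories
    produces_quantitative_estimates: bool
    is_complex: bool                   # "complex" is new in the 2026 definition


def determine(tool: AiToolProfile) -> Determination:
    """Apply the revised definition's tests in order. Illustrative only."""
    if tool.is_generative_or_agentic:
        return Determination.OUT_OF_SCOPE_AI
    if tool.is_simple_arithmetic or tool.is_deterministic_rules:
        return Determination.NON_MODEL
    if (tool.is_complex
            and tool.applies_quantitative_theory
            and tool.produces_quantitative_estimates):
        return Determination.MODEL
    return Determination.NON_MODEL


# A fraud detection model stays in scope; a static rules engine would not.
fraud_model = AiToolProfile(
    name="fraud-scoring-nn",
    is_generative_or_agentic=False,
    is_simple_arithmetic=False,
    is_deterministic_rules=False,
    applies_quantitative_theory=True,
    produces_quantitative_estimates=True,
    is_complex=True,
)
print(determine(fraud_model))  # Determination.MODEL
```

Note the design point regardless of schema: a generative or agentic tool exits the model risk scope but does not exit governance. Footnote 3 routes it to your own framework instead.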
Generative AI Is Explicitly Out of Scope — But Not Ungoverned
The guidance states in footnote 3: "Generative AI and agentic AI models are novel and rapidly evolving. As such, they are not within the scope of this guidance."
But the footnote continues with a sentence that changes the entire reading: "Nonetheless, a banking organization’s risk management and governance practices should guide the determination of appropriate governance and controls for any tools, processes, or systems not covered in this document."
Read that carefully. The regulators excluded generative AI from the model risk framework and, in the same footnote, told every institution that its own governance practices should determine the controls for AI systems the guidance chose not to cover. That is not a free pass. That is a direct statement that your institution is responsible for building the governance architecture the bulletin did not provide.
The guidance also notes that non-compliance itself will not draw supervisory criticism, but adds that supervisory action can follow from unsafe or unsound practices stemming from insufficient management of model risk. The absence of enforceable standards does not mean the absence of consequences.
What This Means in Practice
Before this bulletin, a mid-market CISO could point to OCC 2011-12 and say: "We evaluate our AI tools against the model risk definition and perform a Model vs. Non-Model Determination for each one." That was a defensible governance posture anchored to a published framework.
After this bulletin, OCC 2011-12 is rescinded and the replacement explicitly excludes generative AI from scope. The CISO who was anchoring governance to 2011-12 just lost that anchor. And the CISO who was doing nothing just received false comfort from a guidance document that appears to say AI governance is not required — while a footnote in the same document says the institution is responsible for governing it anyway.
The institutions that built governance architecture (risk appetite, decision authority, evidence loops, reporting) are the ones whose posture survives this transition intact. Their governance was never anchored to a single bulletin. It was anchored to the discipline of producing defensible outcomes regardless of which framework applies.
The Framework That Does Cover AI
The NIST AI Risk Management Framework (Govern, Map, Measure, Manage) remains the most comprehensive published framework for AI governance. It is not a federal mandate, but it is rapidly becoming the benchmark regulators reference when evaluating whether an institution has a governance architecture or just a governance policy.
For institutions navigating the gap the revised guidance created, NIST AI RMF provides the positive reference anchor. The Governance Spine (Appetite, Strategy, Controls, Evidence, Reporting) operationalizes those four NIST functions into mechanism design that produces examiner-ready evidence.
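As a sketch of what that operationalization might look like on paper, here is one plausible crosswalk from the Spine to the NIST functions. The mapping and the example artifacts are illustrative assumptions, not an official crosswalk from either framework:

```python
# One plausible crosswalk from the Governance Spine to the NIST AI RMF
# functions. The evidence artifacts are illustrative examples, not
# requirements from either framework.
GOVERNANCE_SPINE_TO_NIST = {
    "Appetite":  {"nist_functions": ["Govern"],
                  "example_artifact": "board-approved AI risk appetite statement"},
    "Strategy":  {"nist_functions": ["Govern", "Map"],
                  "example_artifact": "AI use-case inventory with risk tiering"},
    "Controls":  {"nist_functions": ["Measure", "Manage"],
                  "example_artifact": "control test results per AI system"},
    "Evidence":  {"nist_functions": ["Measure"],
                  "example_artifact": "timestamped determination and review records"},
    "Reporting": {"nist_functions": ["Govern", "Manage"],
                  "example_artifact": "quarterly AI risk report to the risk committee"},
}
```

The value of a crosswalk like this is not the table itself. It is that every Spine element traces to a published NIST function, which is exactly the tracing an examiner will ask you to demonstrate.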
Notably, the agencies announced they plan to issue a separate Request for Information addressing model risk management and banks’ AI usage. Generative AI was not excluded from this guidance because regulators are ignoring it — it was excluded because they intend to address it specifically. Dedicated AI governance expectations are on the horizon. The institutions building architecture now will be positioned to meet them.
What to Do Monday Morning
If your institution has been performing Model vs. Non-Model Determinations anchored to OCC 2011-12, update the Determinations to reference the revised interagency guidance (Bulletin 2026-13) and document the rationale under the new definition. Remember: non-generative AI/ML models are still within scope.
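A minimal sketch of that re-anchoring step, assuming a hypothetical stored DeterminationRecord (the field names, the reanchor helper, and the rationale text are illustrative, not anything the bulletin requires):

```python
from dataclasses import dataclass, replace
from datetime import date


@dataclass(frozen=True)
class DeterminationRecord:
    """Hypothetical stored determination; all field names are illustrative."""
    tool_name: str
    classification: str        # "model" or "non-model"
    guidance_reference: str
    rationale: str
    determined_on: date


def reanchor(record: DeterminationRecord, rationale: str) -> DeterminationRecord:
    """Issue a fresh determination against the revised guidance."""
    return replace(
        record,
        guidance_reference="Interagency MRM guidance (OCC Bulletin 2026-13)",
        rationale=rationale,
        determined_on=date.today(),
    )


old = DeterminationRecord(
    tool_name="fraud-scoring-nn",
    classification="model",
    guidance_reference="OCC 2011-12",
    rationale="Quantitative method producing estimates per the 2011-12 definition.",
    determined_on=date(2024, 3, 1),
)
new = reanchor(old, "Complex quantitative method applying statistical theory "
                    "to produce quantitative estimates per the revised definition.")
```

The design choice worth keeping even if your schema differs: issue a new record rather than overwriting the old one, so the history of determinations made under 2011-12 survives as evidence.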
If your institution has generative AI tools deployed, recognize that the published model risk guidance now explicitly excludes them, but that the regulators said in the same footnote that your own governance practices should fill the gap. Build your governance architecture for those systems using NIST AI RMF as the reference framework and the Governance Spine as the operational structure. The evidence trail you start building now is the evidence trail that compounds before the examiner asks the question the bulletin chose not to answer.
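What starting that trail could look like, as a minimal sketch: an append-only, hashed evidence log. The file name, entry fields, and append_evidence helper are all hypothetical:

```python
import hashlib
import json
from datetime import datetime, timezone


def append_evidence(log_path: str, entry: dict) -> str:
    """Append a governance evidence entry as a JSON line; return its digest.

    Each entry is timestamped and hashed, so tampering is detectable
    provided the returned digests are retained somewhere independent
    (a ticketing system, a GRC tool, a signed quarterly report).
    """
    entry = {**entry, "recorded_at": datetime.now(timezone.utc).isoformat()}
    line = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256(line.encode()).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps({"entry": entry, "sha256": digest}) + "\n")
    return digest


# Hypothetical entry for a generative AI control check.
append_evidence("genai_evidence.jsonl", {
    "system": "contract-drafting-llm",
    "nist_function": "Measure",
    "control": "human review of generated output before client delivery",
    "result": "pass",
})
```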
The OCC told you two things yesterday. First: generative AI is not covered by our model risk guidance. Second: your own governance should determine the controls anyway. Both statements are in the same footnote. Build accordingly.