A department head at a $2B community bank deploys Microsoft Copilot for a team of 15. Procurement signs the vendor agreement. IT provisions the licenses. The team starts using it on a Tuesday. Six months later, an examiner asks three questions:
Who approved this AI system for use at this institution? What risk tier was it classified under? Where is the documentation showing the approval authority matched the risk classification?
The room does not go quiet because anyone acted irresponsibly. It goes quiet because no decision authority structure existed when the deployment happened. The approval was implicit. The risk classification was never performed. The evidence trail consists of a procurement invoice and an IT service ticket.
That is not a malicious failure. It is a design absence, and it is one of the most preventable governance findings in banking right now.
The single most common cause of governance findings in my experience has never been missing policies. It has been missing decision authority with evidence attached. The Decision Authority Matrix is the framework I built to eliminate that failure mode at the source.
What Is a Decision Authority Matrix?
A Decision Authority Matrix is a governance control that maps four variables — risk tier, governed action, required authority, and evidence requirement — into a single artifact. It defines who is allowed to approve, deploy, modify, and retire an AI system at each risk level, and it specifies the evidence artifact the approval must generate. When an examiner asks "who approved this?" the matrix is the answer, and the evidence artifact is the proof.
The Decision Authority Problem
Most institutions have some version of technology approval authority. Capital expenditure thresholds. Vendor risk tiers. Change advisory boards. These structures were designed for software purchases and infrastructure changes. They were not designed for AI.
An AI system is not just a technology purchase. It is a decision-making agent operating inside your business processes. Depending on scope, it may touch customer data, influence credit decisions, automate compliance functions, or generate content that represents the institution externally. The risk surface is categorically different from a software license, and the governance authority that approved the license is almost certainly not the right authority for what the AI is actually doing.
The gap is not that approval processes do not exist. The gap is that they do not differentiate by AI risk tier, do not specify governance authority by risk level, and do not generate the evidence artifact that proves the approval happened correctly.
The Matrix
A Decision Authority Matrix maps four variables into a single governance artifact: risk tier, governed action, required authority, and evidence requirement.
Risk Tiers
Every AI system gets classified into one of four tiers. The tier determines everything downstream.
Critical
AI that influences regulated decisions — credit scoring, BSA/AML alert prioritization, fair lending analytics, disclosure generation — or touches core banking systems, or operates with broad autonomy over sensitive data. Board or executive committee approval. No exceptions. The examiner will walk directly to this tier first, because critical-tier AI is where MRA (Matters Requiring Attention) language originates.
High
AI that automates business processes, accesses customer data at scale, or has enterprise-wide deployment scope. Senior management approval with documented risk acceptance.
Medium
AI used for internal productivity, content generation, or departmental workflows with limited data access. Department head approval with IT security review.
Low
Individual-use AI tools with no institutional data access and no system integration. Manager acknowledgment with acceptable use attestation. Low-tier does not mean ungoverned. It means lightweight governance with a lightweight evidence artifact.
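The four tiers above can be expressed as a simple highest-match classification rule. The sketch below is illustrative only: the attribute names (`influences_regulated_decisions`, `broad_autonomy`, and so on) are assumptions standing in for a fuller risk questionnaire, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    # Illustrative attributes; a real intake form would capture far more detail.
    name: str
    influences_regulated_decisions: bool   # credit, BSA/AML, fair lending, disclosures
    touches_core_systems: bool             # core banking or sensitive data stores
    broad_autonomy: bool                   # acts without per-decision human review
    accesses_customer_data_at_scale: bool
    enterprise_wide: bool
    institutional_data_access: bool        # any institutional data or integration

def classify_tier(s: AISystem) -> str:
    """Return the highest tier whose criteria the system meets."""
    if s.influences_regulated_decisions or s.touches_core_systems or s.broad_autonomy:
        return "critical"
    if s.accesses_customer_data_at_scale or s.enterprise_wide:
        return "high"
    if s.institutional_data_access:
        return "medium"
    return "low"
```

Note that the checks run top-down: a single critical criterion outranks everything else, which is why a cheap enterprise-wide tool that touches credit decisions still classifies as critical.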
Governed Actions
The matrix does not just cover initial deployment. It governs the full lifecycle: approval to evaluate, approval to deploy, approval to modify scope (data access expansion, new integrations, broader user population), and approval to retire. Each action at each tier has a specified authority. A department head can approve a medium-tier evaluation. Only senior management can approve a high-tier deployment. Only the board or executive committee can approve critical-tier systems.
This is where most organizations fail: they create a deployment approval process but do not govern scope changes. The AI system approved as a medium-tier departmental tool quietly expands to enterprise-wide use, connects to customer data, and becomes a critical-tier system without anyone re-evaluating the risk tier or the approval authority. I have seen this pattern at three institutions in the past year alone. The deployment was approved. The escalation was invisible.
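One way to make the matrix machine-checkable is a lookup keyed by (tier, action). The authority and evidence names below mirror the tiers and actions described in the article, but the structure itself is a hypothetical sketch, not a prescribed implementation.

```python
# (tier, action) -> (required authority, evidence artifacts the approval must generate)
# A minimal sketch of the matrix; most rows omitted for brevity.
MATRIX = {
    ("critical", "deploy"): ("board_or_executive_committee",
                             ["signed_approval", "risk_tier_classification",
                              "data_access_scope", "governance_review"]),
    ("high", "deploy"): ("senior_management",
                         ["signed_approval", "risk_tier_classification",
                          "documented_risk_acceptance"]),
    ("medium", "evaluate"): ("department_head", ["signed_approval"]),
    ("medium", "deploy"): ("department_head",
                           ["signed_approval", "it_security_review"]),
    ("low", "deploy"): ("manager", ["acceptable_use_attestation"]),
}

def required_authority(tier: str, action: str):
    """Look up who must approve and what evidence the approval must generate.

    A missing key means the matrix has a gap -- itself a governance finding.
    """
    if (tier, action) not in MATRIX:
        raise KeyError(f"No authority defined for action '{action}' at tier '{tier}'")
    return MATRIX[(tier, action)]
```

In practice, any scope-modification request would re-run tier classification before this lookup, so a medium-tier tool that quietly expands cannot keep its old approval authority.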
Evidence Requirements
This is the piece that transforms the matrix from a framework into a governance control. Every approval generates a specific evidence artifact: a signed approval record, a risk tier classification document, a data access scope statement, and a governance review confirmation. The evidence is not optional. It is not retroactive. It is the mechanism that makes the matrix defensible. Without evidence requirements, you have a decision authority framework. With them, you have a control that proves itself.
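The four required artifacts can double as a completeness checklist, run against whatever is actually on file for a system. The artifact names below are illustrative labels for the four items described above, not a mandated taxonomy.

```python
# The four evidence artifacts every approval must generate (illustrative labels).
REQUIRED_ARTIFACTS = {
    "signed_approval_record",          # approver name, title, date, signature
    "risk_tier_classification",        # rationale for the assigned tier
    "data_access_scope_statement",     # what data the AI can touch
    "governance_review_confirmation",  # evaluated by the right authority pre-deployment
}

def evidence_gaps(artifacts_on_file: set[str]) -> set[str]:
    """Return the artifacts an examiner would find missing for a given system."""
    return REQUIRED_ARTIFACTS - artifacts_on_file
```

An empty result is the defensible state; anything else is a gap to close before the walkthrough, not after.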
Building It in 30 Days
This is not a six-month initiative. It is a focused sprint.
Week 1: AI Census
Identify every AI tool, model, and vendor integration in your environment. Include the AI embedded in platforms your institution already runs — your core banking system, your CRM, your communication tools. Most institutions discover 3–5x more AI exposure than they expected. That discovery is the point.
Week 2: Classify Risk Tiers
Apply the four-tier model to every system identified. Involve business owners, IT, and risk/compliance. Document the rationale for each classification. The rationale matters — an examiner will ask why a system landed in a particular tier, and "IT decided" is not a defensible answer.
Week 3: Map the Matrix
For each risk tier, specify required authority for each governed action. Define evidence artifacts. Get sign-off from the executive sponsor. The output is a one-page matrix that your board can read in five minutes and your examiners can evaluate against your actual deployments. Five minutes. One page. That is the test.
Week 4: Retroactive Remediation
Apply the matrix to systems already in production. Generate retroactive approval documentation: a formal acknowledgment that each system was reviewed, classified, and accepted by the appropriate authority. This is not ideal. A documented retroactive review is infinitely more defensible than silence.
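A retroactive acknowledgment can be captured as a small, immutable record, flagged honestly as retroactive. This is a sketch under assumed field names; the point is that the record carries a named approver, a tier, and a date, not that it takes this exact shape.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class RetroactiveAcknowledgment:
    """A signed record that an already-deployed system was reviewed in hindsight."""
    system_name: str
    assigned_tier: str
    approving_authority: str  # the authority the matrix requires for that tier
    approver_name: str
    review_date: date
    retroactive: bool = True  # flagged honestly; examiners will notice regardless
```

Freezing the dataclass makes the record tamper-evident in code: once generated and signed, it is not edited, only superseded by a new review.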
Bottom Line
The goal is not bureaucracy. It is traceable authority. Every AI system in your environment was approved by someone or it was not approved at all. The Decision Authority Matrix ensures the approval is deliberate, documented, and defensible.
When the examiner asks "who approved this?" the answer is on file, with a signature and a date. That is the difference between a program that survives the walkthrough and a program that becomes the finding.
Frequently Asked Questions
Do I need a separate Decision Authority Matrix for AI, or does my existing technology approval process cover it?
A separate matrix. Existing technology approval processes were designed for software purchases and infrastructure changes. They classify by cost, scope, and vendor risk, not by AI-specific risk characteristics like autonomy, data access, regulatory decision influence, or scope-creep potential. An AI system can be a $200 monthly SaaS subscription and still be a critical-tier deployment if it influences credit decisions. The existing process will underweight it every time. A purpose-built AI Decision Authority Matrix is the only structure that reliably classifies AI by what it does, not by what it costs.
What makes an AI system critical-tier?
Three criteria, any one of which triggers critical classification. First: influence over regulated decisions — credit underwriting, BSA/AML alerts, fair lending analytics, disclosure generation, customer-facing chatbots that discuss account terms. Second: connection to core banking systems or sensitive data stores. Third: broad autonomy — the AI is making decisions and taking actions without a human in the loop for each one. If any of those three apply, the system is critical-tier regardless of which department deployed it or how much it costs.
What evidence artifact does the examiner actually want to see?
Four things, minimum. A signed approval record with the approver's name, title, and date. A risk tier classification document explaining why the system landed in that tier. A data access scope statement listing what data the AI can touch. A governance review confirmation showing the system was evaluated by the right authority before deployment. The signature matters — not because examiners are bureaucratic, but because the signature is the person willing to attach their name to the decision. Anonymous approval is not approval.
Can I remediate AI systems that were deployed before the matrix existed?
Yes, and you should. Retroactive remediation is not ideal, but it is infinitely more defensible than silence. Produce a formal review of each existing system, classify it into a risk tier, identify the right approval authority in hindsight, and generate a signed retroactive acknowledgment from that authority. The examiner will notice it is retroactive. They will still grade it higher than the alternative, which is a program with no evidence trail at all.
How does the Decision Authority Matrix relate to the Governance Spine?
The Governance Spine is the five-stage structural framework (Appetite, Strategy, Controls, Evidence, Reporting) that defines what a complete governance program looks like. The Decision Authority Matrix is a specific control that lives in the Controls stage of the Spine and produces artifacts at the Evidence stage. The matrix answers a single question within the larger framework: who is authorized to approve each AI action at each risk tier, and what evidence proves the approval was correct. If the Spine is the skeleton, the Decision Authority Matrix is the joint that lets governance articulate.
