All Posts
April 7, 2026·9 min read

The AI Governance Questions Every Board Should Be Asking

A practical set of questions boards and executives should be asking about their AI programs — covering risk, compliance, accountability, and operational readiness.

Boards are now actively asking about AI. The issue is not awareness — it is the quality of the questions. “Are we using AI?” and “Do we have a policy?” are easy to answer and largely irrelevant. They do not indicate whether an AI program is safe, compliant, or delivering value.

Regulatory momentum has raised the stakes. The EU AI Act is in force, NIST has formalized its AI Risk Management Framework, SOC 2 examiners are scrutinizing AI systems, and U.S. state-level regulations are expanding. Treating this as a technical concern introduces a level of exposure most audit committees would not accept elsewhere.

What follows is a practical checklist for board-level discussions. The questions are phrased the way they should be asked in a governance setting.


1. Inventory and accountability

You cannot govern what you cannot see. The first failure mode in most organizations is that nobody has a current, accurate picture of where AI is actually being used — including the shadow deployments happening in individual business units and the third-party tools quietly adding AI features under the same contract.

Questions to ask:

If ownership resolves to a committee, responsibility is diffused. Effective governance requires a named executive accountable for outcomes.


2. Risk classification and proportionality

Not every AI system warrants the same level of oversight. A marketing copy assistant and a credit adjudication model should not be governed identically — one is a productivity tool, the other is a regulated decision system. A good governance program uses a risk tier to decide how much scrutiny each system receives.

Questions to ask:

A practical test: review the last three AI systems that were blocked or materially modified through governance. If none exist, the process is likely performative.


3. Data lineage and training data provenance

The most expensive AI incidents I've seen in the last two years were not model failures — they were data failures. Training on data you don't have rights to, using customer data in ways the privacy notice doesn't cover, or letting PII leak into a fine-tuning set are the kinds of problems that become lawsuits and regulatory actions, not engineering tickets.

Questions to ask:

This is often the point where clarity breaks down. Many organizations do not have a defensible answer.


4. Model evaluation and ongoing monitoring

"We tested it before launch" is not evaluation. Production AI systems degrade — sometimes abruptly, sometimes gradually — as input distributions shift, underlying models are updated by the vendor, or adversarial usage patterns emerge. A governance program that doesn't include continuous evaluation is governance-as-of-last-year.

Questions to ask:

Boards should request trend data on evaluation metrics for high-risk systems. If it does not exist, that absence is itself a finding.


5. Human oversight and escalation

Regulatory frameworks emphasize meaningful human oversight. In practice, this is often reduced to superficial review at scale. That is not oversight — it is process without control.

Questions to ask:


6. Security and adversarial resilience

AI systems introduce a threat surface not fully covered by traditional security programs. Prompt injection, model inversion, data poisoning, and jailbreaks are actively exploited. In many organizations, ownership is fragmented between ML and security teams, leaving gaps in accountability.

Questions to ask:


7. Third-party and supply chain risk

Most enterprises now consume AI primarily through vendors — foundation model APIs, embedded AI features in SaaS tools, specialized vertical AI products. Each of these is a governance dependency that most vendor risk programs are not yet equipped to evaluate.

Questions to ask:


8. Regulatory readiness and documentation

Under the EU AI Act, high-risk AI systems require technical documentation, conformity assessments, and post-market monitoring. SOC 2 auditors are now asking about AI control environments. State-level laws in the U.S. (Colorado, New York City, California) impose their own disclosure and auditing requirements. The organizations that will weather this well are the ones who began producing the required artifacts before they were demanded.

Questions to ask:


9. Board-level reporting

Finally, a test of the governance program itself: what does the board actually see, and how often?

Questions to ask:

If AI appears only as a line item in a broader technology update, that is itself a governance gap.


The uncomfortable conclusion

Most AI governance programs are well-intentioned but structurally incomplete. Policies, committees, and presentations exist; operational mechanisms often do not. What is missing is execution — current inventories, active evaluation, empowered reviewers, and real documentation.

The point of a board-level checklist is not to become the expert. It is to ask questions precise enough that the absence of a real answer becomes visible. If executives cannot answer most of the questions above with specifics — names, numbers, artifacts — the governance program is not yet real, and the board now has a decision to make about what to do next.

The decision that follows is what matters. The rest is paperwork.

If your board is asking harder questions about AI and your team is working out how to answer them, I help executives build governance programs that are both compliant and operationally real. Get in touch.

Subscribe for more

Get posts on LLMOps, RAG, and production AI delivered to your inbox.

Subscribe on Substack