From Controls to Accountability: Designing a Governance Model That Actually Works

By this point in the series, the pattern should be visible. AI risk does not fit fully inside an ISMS. It differs from classical ISMS risk in object, failure mode, evidence, and ownership. Cybersecurity and AI governance overlap. Hidden AI-specific harms persist even in strong security environments.

Now we arrive at the question that matters most operationally: who actually owns this?

This is where many organisations stall. They understand the risk conceptually. They may even have principles, committees, or policy statements. But when a real decision needs to be made, ownership becomes blurry. And blurry ownership is not governance.

AI governance starts to work only when responsibility stops being abstract. Someone must own the use case, someone must own the control environment, someone must own the legal exposure, and someone must own the decision to proceed.

Why vague ownership is the default failure mode

Security assumes product or business teams own the use case. Product assumes security or compliance owns the risk. Legal assumes technical teams understand the system well enough to control it. Data teams assume deployment owners will set the right boundaries. Executives assume a committee somewhere is handling it.

So everyone is involved, but nobody is clearly accountable.

A practical governance principle: shared ownership, explicit accountability

The phrase "shared ownership" is often used badly. Too often it really means that nobody owns anything clearly enough.

A better model is this: ownership is distributed because the risk is cross-functional, but accountability is explicit because decisions still need named owners.

The organisation needs clear answers to questions like who owns the business use case, who approves it for deployment, who decides what level of human review is mandatory, who owns model monitoring, who owns incident escalation, and who signs off that the evidence is good enough for the risk level.

The three-lines view still helps — but it needs adaptation

First line: business and operational ownership

This includes product owners, workflow owners, functional managers, and teams deploying the AI in practice. They should own the use case objective, the operational context, acceptable risk boundaries, day-to-day oversight, and whether the output is actually fit for business use.

Second line: risk, compliance, legal, privacy, and control functions

This line defines policy, challenge, review criteria, and oversight expectations. It should own control standards, risk classification rules, review gates, legal and regulatory interpretation, escalation criteria, and assurance expectations.

Third line: internal audit or equivalent independent assurance

This line evaluates whether the governance model is actually functioning as described: whether role boundaries are clear, processes are followed, evidence is credible, and high-risk use cases are governed differently from low-risk ones.

The business owner is more important than many organisations admit

One of the most important design choices is naming a genuine business owner for each material AI use case. Not just a technical maintainer. Not just a platform team. Not just a steering committee.

A real owner must be able to answer what problem the system is allowed to solve, what level of error is acceptable, what happens when the output is uncertain, when a human must override or review the result, and what harm would matter enough to stop the system.

The CISO’s role is essential, but not exclusive

CISOs are often asked to hold too much of the AI governance burden. The CISO should typically own or strongly influence security architecture requirements, access and identity controls, vendor security review, monitoring and logging expectations, incident escalation triggers involving compromise or exposure, and baseline control discipline for AI systems in production.

But the CISO cannot be the sole owner of AI risk. Questions of fairness, explainability, operational boundaries, and decision suitability require other functions with real authority.

Legal, compliance, and privacy cannot be advisory spectators

In strong models, legal and compliance shape the control design early enough to matter. They help determine which use cases are inherently sensitive, where explainability or contestability is required, what records must exist for defensibility, whether the system creates jurisdiction-specific obligations, and when a use case should be limited or refused entirely.

Privacy functions matter more than many AI teams first assume. It is not enough to know that data is secured. You also need to know whether the data is appropriate, lawful, proportional, and properly governed for model use.

Data and model teams need defined obligations, not just technical freedom

These teams should typically own model development and evaluation methods, technical validation evidence, retraining or update processes, performance monitoring design, model documentation quality, and known limitations and usage constraints.

But they should not have unilateral authority to decide whether a sensitive use case is acceptable just because they can make the model perform reasonably well.

Risk classification is where the governance model becomes practical

A governance model becomes much easier to operate once use cases are classified by impact. Without classification, low-risk use cases get overburdened and high-risk use cases do not get enough scrutiny.

  • Low-impact use cases: drafting, summarisation, brainstorming, internal convenience tooling.
  • Medium-impact use cases: internal analysis, workflow prioritisation, operational recommendations, support drafting with structured review.
  • High-impact use cases: decisions affecting regulated outcomes, rights, safety, financial exposure, or limited reversibility.

Higher-impact AI needs thicker governance.
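
To make the tiers operational, it helps to record the classification and its consequences in one unambiguous place. The sketch below is illustrative only; the tier names, the requirement fields, and the mapping are assumptions, not a prescribed standard.

```python
from dataclasses import dataclass
from enum import Enum


class Impact(Enum):
    """Illustrative impact tiers; adapt the names and criteria to your own taxonomy."""
    LOW = "low"        # drafting, summarisation, internal convenience tooling
    MEDIUM = "medium"  # internal analysis, prioritisation, drafting with structured review
    HIGH = "high"      # regulated outcomes, rights, safety, financial exposure


@dataclass
class GovernanceRequirements:
    """Controls that thicken with impact; the specific fields are assumptions."""
    named_business_owner: bool
    formal_approval_gate: bool
    mandatory_human_review: bool
    independent_assurance: bool


# Higher-impact AI needs thicker governance: each tier adds obligations.
REQUIREMENTS_BY_TIER = {
    Impact.LOW: GovernanceRequirements(True, False, False, False),
    Impact.MEDIUM: GovernanceRequirements(True, True, True, False),
    Impact.HIGH: GovernanceRequirements(True, True, True, True),
}
```

The point is the discipline, not the fields: once a use case is classified, its obligations follow from the tier rather than being renegotiated case by case.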

Approval gates should answer real questions, not perform rituals

Many gates collect signatures without testing understanding. A better gate asks concrete questions: what the use case is, what could go wrong if the output is wrong, who is affected by the result, what evidence supports reliability for this context, what human review is required, what dependencies exist, what monitoring will detect deterioration or misuse, and what would trigger rollback or redesign.
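
One way to keep the gate from becoming a ritual is to treat it as a record that cannot be signed off while any answer is missing. The sketch below assumes a simple per-use-case record; the field names are illustrative, not a mandated template.

```python
from dataclasses import dataclass, fields


@dataclass
class ApprovalGate:
    """One record per use case; every question must be answered before sign-off."""
    use_case: str
    consequences_if_wrong: str     # what could go wrong if the output is wrong
    affected_parties: str          # who is affected by the result
    reliability_evidence: str      # what supports reliability for this context
    required_human_review: str
    dependencies: str
    monitoring_approach: str       # how deterioration or misuse will be detected
    rollback_triggers: str         # what would trigger rollback or redesign

    def is_complete(self) -> bool:
        """Approval cannot be recorded while any answer is blank."""
        return all(getattr(self, f.name).strip() for f in fields(self))
```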

Monitoring and retirement matter as much as approval

Deployment is often the start of the more interesting risk phase. Governance must define what performance is monitored over time, how drift is identified, how incidents are escalated, when reassessment is required after material change, who decides whether the system remains acceptable, and how the use case is retired if the risk-benefit balance breaks down.
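
As a sketch of what that definition could look like in one place, the example below pairs monitoring and retirement triggers with a named decider. The field names and the example values are assumptions, not recommendations.

```python
from dataclasses import dataclass


@dataclass
class LifecyclePlan:
    """Post-deployment obligations for one AI use case; the fields are illustrative."""
    monitored_metrics: list[str]      # e.g. reviewer correction rate, escalation volume
    drift_check_interval_days: int    # how often drift is assessed
    reassessment_triggers: list[str]  # material changes that force a fresh review
    acceptability_decider: str        # who decides the system remains acceptable
    retirement_criteria: list[str]    # when the risk-benefit balance breaks down


# A hypothetical plan for an internal drafting assistant.
assistant_plan = LifecyclePlan(
    monitored_metrics=["reviewer correction rate", "escalated incident count"],
    drift_check_interval_days=30,
    reassessment_triggers=["model version change", "new knowledge source", "new jurisdiction"],
    acceptability_decider="business owner, advised by the risk function",
    retirement_criteria=["sustained correction rate above the agreed threshold"],
)
```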

The recurring internal assistant now needs a real operating model

The policy owner decides which internal knowledge sources are authoritative. The customer support leader owns where drafts may be used and what review is mandatory. The risk function defines what evidence is needed before summaries influence formal decisions. The CISO owns access, logging, and vendor security. Legal and compliance define where generated content cannot be relied on without specific checks. The model team owns evaluation, monitoring, and declared limitations. Leadership decides whether the productivity benefit justifies the residual risk.

Now governance starts to become real. Not because everyone is invited. But because responsibility is assigned.

What evidence leadership should demand

For significant AI use cases, leadership should expect to see clear business purpose and owner, risk classification, known failure modes, validation evidence for the intended context, defined review and escalation paths, a monitoring plan, named accountable functions, and the conditions under which the use case would be paused or restricted.
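
For the internal assistant, a hypothetical evidence pack might look like the sketch below. Every value is invented and serves only to show the level of concreteness leadership should expect.

```python
# Hypothetical evidence pack for the internal support assistant discussed above.
# Every value is an invented example, not a recommendation.
support_assistant_evidence = {
    "business_purpose": "draft responses to routine support tickets",
    "business_owner": "customer support leader",
    "risk_classification": "medium impact",
    "known_failure_modes": ["confident but incorrect answers", "stale policy references"],
    "validation_evidence": "scored evaluation against a sample of historical tickets",
    "review_and_escalation": "agent review before sending; team-lead escalation on uncertainty",
    "monitoring_plan": "monthly correction-rate review and checks on knowledge-source drift",
    "accountable_functions": ["customer support", "risk", "CISO", "legal and compliance"],
    "pause_conditions": ["correction rate above agreed threshold", "material regulatory change"],
}
```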

Conclusion

A governance model for AI works only when accountability becomes concrete. Security, legal, compliance, privacy, data, product, business owners, and leadership all have roles to play. But the model fails if those roles remain vague, duplicative, or purely advisory.

The goal is not to force AI risk into one department. It is to create a structure in which distributed ownership and explicit accountability can coexist.

Part 6 takes the final step: how do you extend an existing ISMS so AI-specific risk is governed with the same discipline, without creating framework fatigue or governance theatre?
