Same Boardroom, Different Beast: AI Risk vs ISMS Risk Side by Side

In Part 1, I argued that AI risk does not fit neatly inside a traditional ISMS. That is the conceptual break many organisations need first.

But once that point lands, the next question comes quickly: what is actually different?

AI risk and ISMS risk do overlap. They share infrastructure, data, vendors, controls, and governance touchpoints. But they differ in what they are trying to protect, how failure appears, what evidence matters, and who must own the response.

ISMS risk asks whether information and systems are protected. AI risk also asks whether model behaviour, outputs, and decisions are trustworthy enough for the use case.

Start with the simplest distinction: the object of risk

ISMS risk is primarily concerned with protecting information assets, systems, and services. The object of protection is usually clear: data, infrastructure, applications, users and access rights, business continuity, and supplier dependencies.

AI risk shifts the focus. Now the object of concern also includes model outputs, model behaviour under uncertainty, data lineage, statistical performance across contexts, explainability, human reliance patterns, and downstream business or societal harm.

That changes the governance conversation immediately. In an AI discussion, a system can be secure, available, and technically functioning while still producing harmful or unreliable results.

ISMS risk is largely about compromise. AI risk is often about behaviour.

A useful shortcut is this:

  • ISMS risk is often about unauthorised access, disruption, corruption, or exposure.
  • AI risk is often about flawed behaviour, flawed outputs, flawed judgment support, or flawed automation.

Behaviour failures are harder to spot than security failures. An outage looks like an incident. A ransomware event looks like an incident. But an AI system that becomes more misleading over time may not trigger any traditional incident process at all.

A side-by-side comparison

  • Primary object of risk. ISMS: information assets, systems, services. AI: outputs, models, behaviour, decision support, automation.
  • Typical failure mode. ISMS: breach, leakage, unauthorised access, outage. AI: hallucination, bias, drift, unsafe recommendation, opaque decisioning.
  • Impact pathway. ISMS: confidentiality, integrity, or availability harm. AI: operational, legal, ethical, financial, or reputational harm.
  • Nature of system. ISMS: mostly deterministic and rules-based. AI: probabilistic, context-sensitive, sometimes non-explainable.
  • Evidence. ISMS: access logs, patching, supplier review, incident records. AI: validation results, drift monitoring, model limitations, human review records.
  • Ownership. ISMS: security, IT, risk, operations. AI: security, legal, compliance, data, product, business, leadership.

The same internal AI assistant now looks different

Let us return to the recurring example from Part 1: a company deploys an internal AI assistant for policy search, customer support drafting, and risk analysis summaries.

Viewed through an ISMS lens, the team asks: Who can access it? Where is data stored? Which vendor is involved? Are logs retained? Is transmission encrypted? What happens if the system is unavailable?

Viewed through an AI risk lens, the team must also ask: How consistent are the answers across prompts? What happens when the assistant is unsure? Could a plausible but wrong answer influence a regulated decision? What evidence shows the use case is still safe six months later?

The first set protects the environment. The second set governs the behaviour inside that environment. Both are necessary. Neither replaces the other.

Static assets and probabilistic systems should not be governed identically

Traditional ISMS thinking grew up around assets that are relatively stable in how they behave. AI systems do not behave with the same clarity. Their outputs are shaped by training data, prompt phrasing, tuning choices, retrieval quality, and changing real-world conditions.

The system may not fail the same way twice. The same prompt may not always produce the same answer. The model may perform well in tests and poorly under operational pressure. The harm may emerge gradually instead of dramatically.
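That repeatability claim is easy to probe empirically: call the system many times with one prompt and measure how often the answers agree. A minimal Python sketch, where `consistency_rate` and the `flaky_assistant` stub are hypothetical illustrations standing in for a real sampled model:

```python
import random
from collections import Counter

def consistency_rate(ask, prompt, n=20):
    """Fraction of n runs that agree with the most common answer.
    `ask` is any callable prompt -> answer; 1.0 means fully repeatable."""
    answers = [ask(prompt) for _ in range(n)]
    top_count = Counter(answers).most_common(1)[0][1]
    return top_count / n

# Hypothetical stand-in for a sampled model: usually answers one way,
# sometimes another, for the identical prompt.
rng = random.Random(0)
def flaky_assistant(prompt):
    return "approve" if rng.random() < 0.8 else "escalate"

rate = consistency_rate(flaky_assistant, "Can we ship this under policy X?")
assert 0.0 < rate < 1.0  # same prompt, not the same answer every time
```

A deterministic lookup tool would score 1.0 on this check; a sampled model typically will not, which is exactly the governance point.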

The impact story is wider in AI

AI failures can create unfair treatment, misleading internal advice, low-quality customer communication, automation errors that bypass judgment, silent performance decay, and reputational damage caused by fluent but wrong outputs.

This is one reason boards struggle with AI risk at first. The impact is not always visibly technical, even when the root cause is technical.

Evidence is different too

A mature ISMS produces familiar evidence: access reviews, security testing, supplier assessments, patching status, asset inventories, and incident logs.

AI governance needs a different evidence layer on top of that: validation results by use case, reliability thresholds, workflow boundaries, fairness checks, drift monitoring, and evidence of human review in sensitive cases.
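The drift-monitoring item in that evidence layer can be made concrete. One common statistic (an illustrative choice, not one prescribed here) is the Population Stability Index, which compares the distribution of model scores at validation time with the distribution seen later in production. A minimal sketch:

```python
import math
from collections import Counter

def psi(expected, actual, bins=10):
    """Population Stability Index between two score samples.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant."""
    lo, hi = min(expected + actual), max(expected + actual)
    width = (hi - lo) / bins or 1.0
    def dist(sample):
        counts = Counter(min(int((x - lo) / width), bins - 1) for x in sample)
        # small floor so empty bins do not blow up the log term
        return [max(counts.get(b, 0) / len(sample), 1e-4) for b in range(bins)]
    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]        # validation-time scores
shifted  = [0.1 * i + 3.0 for i in range(100)]  # the same model, months later
assert psi(baseline, baseline) < 0.1
assert psi(baseline, shifted) > 0.25
```

Logged periodically, a number like this is precisely the kind of evidence the paragraph above describes: proof that someone is still watching the behaviour, not just the infrastructure.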

Security evidence proves some necessary things. It does not prove the model is fit for purpose.

Ownership cannot stay inside the security team

If the risk differs, ownership must differ too. AI risk cannot be managed responsibly by security, IT, and enterprise risk alone.

The key questions are not only technical. They include whether the use case is acceptable at all, what level of human oversight is required, what legal exposure exists, and which business leader owns the consequence of a wrong answer.

Conclusion

AI risk and ISMS risk belong in the same boardroom, but they are not the same beast. The overlap is real. The distinction is real too.

Once organisations understand that clearly, they stop asking the wrong question — can our ISMS absorb AI? — and start asking the better one: where do our existing controls still help, and where do we need a different governance layer?

That is where Part 3 goes next: where do AI risk and cybersecurity risk actually collide in the real world?
