In Part 1, I argued that AI risk does not fit neatly inside a traditional ISMS. That is the conceptual break many organisations need first.
But once that point lands, the next question comes quickly: what is actually different?
AI risk and ISMS risk do overlap. They share infrastructure, data, vendors, controls, and governance touchpoints. But they differ in what they are trying to protect, how failure appears, what evidence matters, and who must own the response.
ISMS risk asks whether information and systems are protected. AI risk also asks whether model behaviour, outputs, and decisions are trustworthy enough for the use case.
ISMS risk is primarily concerned with protecting information assets, systems, and services. The object of protection is usually clear: data, infrastructure, applications, users and access rights, business continuity, and supplier dependencies.
AI risk shifts the focus. Now the object of concern also includes model outputs, model behaviour under uncertainty, data lineage, statistical performance across contexts, explainability, human reliance patterns, and downstream business or societal harm.
That changes the governance conversation immediately. In an AI discussion, a system can be secure, available, and technically functioning while still producing harmful or unreliable results.
A useful shortcut is this:
Behaviour failures are harder to spot than security failures. An outage looks like an incident. A ransomware event looks like an incident. But an AI system that becomes more misleading over time may not trigger any traditional incident process at all. The table below summarises the contrast.
| Dimension | ISMS Risk | AI Risk |
|---|---|---|
| Primary object of risk | Information assets, systems, services | Outputs, models, behaviour, decision support, automation |
| Typical failure mode | Breach, leakage, unauthorised access, outage | Hallucination, bias, drift, unsafe recommendation, opaque decisioning |
| Impact pathway | Confidentiality, integrity, availability harm | Operational, legal, ethical, financial, or reputational harm |
| Nature of system | Mostly deterministic and rules-based | Probabilistic, context-sensitive, sometimes non-explainable |
| Evidence | Access logs, patching, supplier review, incident records | Validation results, drift monitoring, model limitations, human review records |
| Ownership | Security, IT, risk, operations | Security, legal, compliance, data, product, business, leadership |
Let us return to the recurring example from Part 1: a company deploys an internal AI assistant for policy search, customer support drafting, and risk analysis summaries.
Viewed through an ISMS lens, the team asks: Who can access it? Where is data stored? Which vendor is involved? Are logs retained? Is transmission encrypted? What happens if the system is unavailable?
Viewed through an AI risk lens, the team must also ask: How consistent are the answers across prompts? What happens when the assistant is unsure? Could a plausible but wrong answer influence a regulated decision? What evidence shows the use case is still safe six months later?
The first set protects the environment. The second set governs the behaviour inside that environment. Both are necessary. Neither replaces the other.
Traditional ISMS thinking grew up around assets that behave in relatively stable, predictable ways. AI systems offer no such stability. Their outputs are shaped by training data, prompt phrasing, tuning choices, retrieval quality, and changing real-world conditions.
The system may not fail the same way twice. The same prompt may not always produce the same answer. The model may perform well in tests and poorly under operational pressure. The harm may emerge gradually instead of dramatically.
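That nondeterminism can be measured rather than just asserted. Below is a minimal sketch of a consistency probe, assuming a hypothetical `ask_assistant` function standing in for whatever client the deployed assistant actually exposes: it replays one prompt several times and scores how much the answers agree.

```python
from difflib import SequenceMatcher
from itertools import combinations

def ask_assistant(prompt: str) -> str:
    # Hypothetical stand-in: replace with your assistant's actual API call.
    raise NotImplementedError

def consistency_score(prompt: str, runs: int = 5) -> float:
    """Replay one prompt `runs` times; return mean pairwise answer similarity.

    1.0 means every answer was identical; lower values mean the assistant
    gives materially different answers to the same question.
    """
    answers = [ask_assistant(prompt) for _ in range(runs)]
    pairs = list(combinations(answers, 2))
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

# Example: flag prompts that fall below an agreed reliability threshold.
# if consistency_score("What is our data retention policy?") < 0.8:
#     escalate_for_human_review()  # hypothetical governance hook
```

Plain string similarity is a crude proxy for semantic agreement, but even this version turns "the same prompt may not always produce the same answer" into a number a risk owner can set a threshold against.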
AI failures can create unfair treatment, misleading internal advice, low-quality customer communication, automation errors that bypass human judgment, silent performance decay, and reputational damage caused by fluent but wrong outputs.
This is one reason boards struggle with AI risk at first. The impact is not always visibly technical, even when the root cause is technical.
A mature ISMS produces familiar evidence: access reviews, security testing, supplier assessments, patching status, asset inventories, and incident logs.
AI governance needs a different evidence layer on top of that: validation results by use case, reliability thresholds, workflow boundaries, fairness checks, drift monitoring, and evidence of human review in sensitive cases.
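To make that evidence layer concrete, here is a minimal sketch of a drift check, with all names illustrative rather than drawn from any real library: a validation set is periodically re-scored, compared against the accuracy recorded at approval time, and the result is retained as a dated governance record rather than waiting for a traditional incident to be declared.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DriftCheck:
    """One retained evidence record; all field names are illustrative."""
    use_case: str      # which approved use case this evidence covers
    checked_at: str    # when the validation set was re-scored
    accuracy: float    # score on the held-out validation set this period
    baseline: float    # accuracy recorded when the use case was approved
    threshold: float   # agreed floor below which the use case escalates
    breached: bool     # the flag a human reviewer must act on

def run_drift_check(scores: list[bool], baseline: float,
                    threshold: float, use_case: str) -> DriftCheck:
    accuracy = sum(scores) / len(scores)
    return DriftCheck(
        use_case=use_case,
        checked_at=datetime.now(timezone.utc).isoformat(),
        accuracy=accuracy,
        baseline=baseline,
        threshold=threshold,
        breached=accuracy < threshold,
    )

# Example: quarterly re-score of the policy-search assistant (made-up data).
check = run_drift_check(
    scores=[True, True, False, True, False, True, True, True],
    baseline=0.92, threshold=0.85, use_case="policy-search",
)
print(json.dumps(asdict(check), indent=2))  # the record you retain as evidence
```

Note what the record is: not an access log or a patch report, but a dated statement that a named use case still meets its agreed reliability threshold, with a flag a human reviewer must act on when it does not.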
Security evidence proves that the environment is sound. It does not prove the model is fit for purpose.
If the risk differs, ownership must differ too. AI risk cannot be managed responsibly by security, IT, and enterprise risk alone.
The key questions are not only technical. They include whether the use case is acceptable at all, what level of human oversight is required, what legal exposure exists, and which business leader owns the consequence of a wrong answer.
AI risk and ISMS risk belong in the same boardroom, but they are not the same beast. The overlap is real. The distinction is real too.
Once organisations understand that clearly, they stop asking the wrong question — can our ISMS absorb AI? — and start asking the better one: where do our existing controls still help, and where do we need a different governance layer?
That is where Part 3 goes next: where do AI risk and cybersecurity risk actually collide in the real world?