AI Risk Is Not Just ISMS With Better Marketing

Most organisations begin their AI governance journey with a comforting assumption: we already have an ISMS, so we already have the structure for AI risk.

That assumption is understandable. ISO 27001, security controls, risk registers, access management, incident response, and supplier assessments are familiar territory. When a company rolls out an internal AI assistant for policy search, customer support drafting, or risk analysis, the first instinct is often to treat it like any other new information system. Secure the access. Protect the data. Review the vendor. Update the asset inventory. Add a few controls. Move on.

That instinct is not wrong. It is just incomplete.

The problem is that AI does not only create information-security risk. It also creates decision risk, behaviour risk, model risk, and harm pathways that a conventional ISMS was never designed to see clearly. An ISMS protects information and systems. AI governance must also deal with outputs, uncertainty, explainability, fairness, drift, human overreliance, and business decisions made on probabilistic machine behaviour.

A strong ISMS is a necessary foundation for AI governance. But it is not the full structure. It protects the environment in which AI operates, not the full range of harms AI can create.

Why teams keep making this mistake

The reason is simple: organisations naturally reach for the framework they already trust.

If the security team has spent years building a mature ISMS, it feels efficient to extend that system rather than open a new governance lane. And to be fair, that extension often captures a meaningful part of the picture: access control for AI systems, supplier risk, data confidentiality, logging, change management, and resilience.

Those are real controls. They matter.

But once AI starts influencing content, recommendations, classifications, prioritisation, or decisions, the object of risk changes. The question is no longer only whether the system was accessed securely or whether data was leaked. The question becomes whether the system produces outputs that are wrong, biased, misleading, opaque, or dangerously overtrusted.

That is where the old frame starts to crack.

What ISMS is very good at

It is worth being precise here. ISMS is not outdated. In fact, AI governance usually fails faster when the ISMS foundation is weak.

An ISMS is built to manage risk around confidentiality, integrity, availability, asset ownership, control effectiveness, security incidents, third-party exposure, and operational resilience.

If your AI assistant exposes confidential documents to the wrong users, that is an ISMS problem. If a model pipeline pulls training data from an unapproved source, that is partly an ISMS problem. If an LLM integration opens a new vendor or API attack surface, that is an ISMS problem too.

This matters because some AI discussions swing too far in the other direction and act as if classic security discipline no longer matters. That would be sloppy thinking. Weak security makes every AI risk harder.

But that still does not mean ISMS covers the whole field.

Where AI changes the risk object

Traditional information-security risk is mostly concerned with protecting assets, systems, and information from compromise, misuse, disruption, or loss. AI risk introduces a different category of concern: the system can behave badly even when it is technically secure.

That sounds obvious once stated plainly, but it changes almost everything.

A secure AI system can still hallucinate a policy answer that sounds authoritative, rank people or cases unfairly, recommend an unsafe action, degrade silently as conditions change, produce non-explainable outputs in a regulated process, or trigger human overconfidence because it sounds more certain than it should.

That is not a side case. That is the core distinction.

A simple scenario shows the gap

Imagine a company launches an internal AI assistant for policy search, customer support draft responses, and internal risk analysis summaries.

The ISMS team does its job well. Access is role-based. Logs are retained. Vendor security is reviewed. Data transmission is encrypted. The model is deployed in an approved environment.

From an information-security perspective, this can look mature.

But now the assistant starts doing the following:

  • it gives different policy interpretations depending on how the user phrases the question
  • it drafts support responses that sound compliant but omit critical caveats
  • it overstates confidence in a risk summary built from incomplete source material
  • managers begin trusting it because it is fast, fluent, and available

Nothing here necessarily looks like a conventional security failure. No breach. No ransomware. No stolen credential. And yet the organisation is now exposed to a growing cluster of AI-specific risks that the ISMS alone does not classify well, monitor well, or assign ownership for.

That is the moment when teams realise the model is not just another asset. It is another kind of actor in the system.
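
One of those failure modes is cheap to make visible. The first item in the list above, different answers to differently phrased questions, can be probed with a paraphrase spot check that needs no security tooling at all. Here is a minimal sketch, with canned answers standing in for real assistant calls and a purely illustrative threshold:

from difflib import SequenceMatcher

def consistency_score(answers: list[str]) -> float:
    """Mean pairwise string similarity across answers; 1.0 means identical."""
    pairs = [SequenceMatcher(None, a, b).ratio()
             for i, a in enumerate(answers)
             for b in answers[i + 1:]]
    return sum(pairs) / len(pairs)

# Three phrasings of the same policy question. Replace the canned
# strings with real calls to whatever assistant client you deploy.
answers = [
    "Employees may work remotely up to two days per week.",
    "Remote work is limited to two days per week.",
    "You can work from home whenever your manager agrees.",
]

score = consistency_score(answers)
if score < 0.8:  # illustrative threshold, tune per use case
    print(f"Inconsistent policy answers (score={score:.2f}); escalate for review")

String similarity is a deliberately crude proxy for semantic agreement, but the point stands: output consistency is a testable property, and no access review or encryption audit will ever surface it.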

Why AI risk cannot be reduced to CIA

The confidentiality-integrity-availability triad still matters. But AI governance needs a wider lens.

With AI, leaders also need to ask:

  • whether the output is reliable enough for the use case
  • whether a human can understand or contest the result
  • whether the model is fair across relevant groups
  • what happens when conditions shift and the model drifts
  • how much autonomy the organisation has quietly given the system
  • what harm occurs if the answer is plausible but wrong

Those questions sit awkwardly inside classical ISMS categories because they are not only about protecting information. They are about governing behaviour, judgment, and impact.
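
One way to stop those questions from evaporating is to record them next to the classic fields rather than instead of them. Below is a minimal sketch of such a register entry; every field name and the 1-to-5 scale are assumptions for illustration, not a standard:

from dataclasses import dataclass

@dataclass
class AIRiskAssessment:
    """Illustrative register entry: classic CIA scores plus AI-specific
    dimensions. All field names and the 1-5 scale are assumptions."""
    use_case: str
    confidentiality: int   # classic ISMS dimensions, 1 (low) to 5 (high)
    integrity: int
    availability: int
    reliability_gap: int              # output not reliable enough for the use case
    contestability_gap: int           # humans cannot understand or contest results
    fairness_risk: int                # unequal behaviour across relevant groups
    drift_exposure: int               # degradation as conditions shift
    hidden_autonomy: int              # outputs acted on without human review
    harm_if_plausible_but_wrong: int  # damage from a convincing wrong answer

    def needs_wider_review(self) -> bool:
        """Escalate beyond the security team when any AI-specific
        dimension scores high, even if the CIA scores look fine."""
        ai_scores = (self.reliability_gap, self.contestability_gap,
                     self.fairness_risk, self.drift_exposure,
                     self.hidden_autonomy, self.harm_if_plausible_but_wrong)
        return any(score >= 4 for score in ai_scores)

assistant = AIRiskAssessment(
    use_case="internal policy assistant",
    confidentiality=2, integrity=2, availability=1,
    reliability_gap=4, contestability_gap=3, fairness_risk=2,
    drift_exposure=3, hidden_autonomy=4, harm_if_plausible_but_wrong=4,
)
print(assistant.needs_wider_review())  # True: technically secure, still risky

The escalation rule is this article's argument in miniature: an entry can score low on confidentiality, integrity, and availability and still demand attention.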

The strategic mistake leaders make

The biggest mistake is not failing to care about AI risk. It is assuming the existing security structure already covers it.

That assumption creates three downstream problems.

1. Ownership becomes too narrow

If AI risk is treated as a security topic only, the burden lands on teams that do not own all of the relevant questions. Legal, compliance, product, data, business process owners, and executive leadership stay too far away for too long.

2. Evidence becomes misleading

A company may report that access controls, vendor reviews, and monitoring are in place and conclude that governance is mature. But those controls do not prove the model is reliable, explainable, fair, or fit for a higher-impact use case.

3. Harm appears late

Because AI failures often look like bad output rather than a security incident, organisations tend to detect them only after business decisions, customer interactions, or operational processes have already been affected.

So what should leaders do first?

Start with a sharper mental model.

Treat ISMS as the foundation layer for AI governance, not the entire governance stack.

In practical terms, keep the security controls, keep the supplier reviews, keep the incident and change discipline, but add AI-specific risk questions before the use case scales.

That means classifying AI use cases by impact, defining acceptable use boundaries, deciding where human review is mandatory, and identifying what evidence is needed beyond technical security.
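
As a rough illustration of what that classification might look like in practice, here is one possible shape; the tier names, review rules, and evidence lists below are assumptions, not drawn from any framework:

from enum import Enum

class ImpactTier(Enum):
    LOW = "low"        # e.g. internal drafting aids
    MEDIUM = "medium"  # e.g. customer-facing drafts with human review
    HIGH = "high"      # e.g. outputs affecting people, money, or compliance

# Minimum governance requirements per tier. The review rules and
# evidence lists are illustrative, not taken from any standard.
REQUIREMENTS = {
    ImpactTier.LOW: {
        "human_review_mandatory": False,
        "evidence": ["vendor security review", "access controls"],
    },
    ImpactTier.MEDIUM: {
        "human_review_mandatory": True,
        "evidence": ["vendor security review", "access controls",
                     "output quality sampling"],
    },
    ImpactTier.HIGH: {
        "human_review_mandatory": True,
        "evidence": ["vendor security review", "access controls",
                     "output quality sampling", "fairness testing",
                     "drift monitoring", "contestability process"],
    },
}

def controls_for(tier: ImpactTier) -> dict:
    """Look up the minimum requirements before a use case may scale."""
    return REQUIREMENTS[tier]

print(controls_for(ImpactTier.HIGH)["human_review_mandatory"])  # True

The exact tiers matter less than the principle: the higher the impact, the more evidence is required beyond technical security.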

The organisations that do this well are not abandoning their ISMS. They are refusing to overload it with responsibilities it was never designed to carry alone.

Conclusion

AI risk is not just another ISMS topic.

It overlaps with ISMS. It depends on ISMS. In some places it sits directly on top of it. But it also reaches into model behaviour, output quality, fairness, explainability, autonomy, and decision harm in ways that classical information-security frameworks do not fully capture.

That is the real governance challenge.

The good news is that organisations do not need to throw away their existing security discipline. They need to build on it intelligently. The ISMS remains essential, just no longer sufficient by itself.

And that leads to the next question: if AI risk is not the same as ISMS risk, what exactly differs? That is where Part 2 begins.
