EU AI Act 2026: What Compliance Officers Must Do Before August

August 2026 is closer than most compliance teams realise. The EU AI Act enters full applicability on 2 August 2026, and several key obligations, including the Article 5 prohibitions, already apply. For data protection officers, legal counsel, and compliance managers, this is not a distant regulatory exercise. It is a live programme of work with hard deadlines, meaningful fines, and board-level reputational exposure.

What the EU AI Act Actually Regulates

The Act does not target specific technologies. It targets risk. The framework sorts AI systems into four bands: unacceptable risk (prohibited), high risk (heavy obligations), limited risk (transparency only), and minimal risk (no specific obligations). The logic is straightforward: the more an AI system can harm fundamental rights, health, or safety, the stricter the rules.

This matters for compliance officers because the first question is always classification. Before you can map obligations, you need to know where your organisation sits in this hierarchy.

Prohibited Practices: What Is Off the Table

Article 5 of the Act sets out an exhaustive list of banned uses. These include systems that exploit vulnerabilities related to age, disability, or social and economic situation; enable social scoring by public or private actors; deploy real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions); and use subliminal manipulation techniques.

The list is exhaustive today, but not frozen. The European Commission will review and potentially extend it annually. This creates an ongoing monitoring obligation for your compliance programme, not a one-time review.

Several points deserve attention:

  • Social scoring applies to private actors too. The prohibition is not limited to government systems. Commercial platforms that score or rank users and create blacklists can fall within scope, particularly where a score leads to detrimental treatment in an unrelated context.
  • Exceptions require case-by-case analysis. Most prohibitions carry detailed carve-outs. Do not assume a system is prohibited or permitted without a documented legal assessment.
  • Annual revision risk. Build a standing process to review Article 5 changes each year and assess impact on your product portfolio.

High-Risk Classification: The Two-Track Test

An AI system is high-risk if it meets either of two criteria. First, it functions as a safety component of a product (or is itself a product) covered by EU product safety legislation that requires third-party conformity assessment. Second, it falls within one of eight categories listed in Annex III, unless the provider can document that it does not pose a significant risk of harm.

The Annex III categories cover a wide range of business applications: biometric identification, critical infrastructure management, education and vocational training, employment and workforce management, access to essential services (including creditworthiness and insurance), law enforcement, migration and border control, and administration of justice and democratic processes.

For compliance officers, the practical questions are:

  1. Does your organisation develop or deploy AI systems that fall within any Annex III category?
  2. Has a formal scope assessment been documented, including territorial and material applicability?
  3. Is there a monitoring process for regulatory updates that could reclassify existing systems?
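
To make the classification question concrete, the minimal Python sketch below encodes the two-track test. The data structure, field names, and category shorthands are illustrative assumptions rather than the Act's own wording; a real determination still requires a documented legal assessment, so treat this as a first-pass triage aid, not a compliance tool.

    # Illustrative sketch only: field names and area shorthands are assumptions,
    # not the Act's wording.
    from dataclasses import dataclass

    # Hypothetical shorthands for the eight Annex III areas.
    ANNEX_III_AREAS = {
        "biometric_identification", "critical_infrastructure",
        "education_and_training", "employment_and_workforce",
        "essential_services", "law_enforcement",
        "migration_and_border_control", "administration_of_justice",
    }

    @dataclass
    class AISystem:
        name: str
        is_safety_component: bool           # safety component of a regulated product
        needs_third_party_assessment: bool  # product requires third-party conformity assessment
        annex_iii_area: str | None          # one of ANNEX_III_AREAS, or None
        documented_no_significant_risk: bool  # derogation evidenced in writing

    def is_high_risk(system: AISystem) -> bool:
        """Either track of the two-track test triggers high-risk status."""
        # Track 1: safety component in a product needing third-party assessment.
        if system.is_safety_component and system.needs_third_party_assessment:
            return True
        # Track 2: Annex III category, unless the derogation is documented.
        if system.annex_iii_area in ANNEX_III_AREAS:
            return not system.documented_no_significant_risk
        return False

    # Example: a CV-screening tool used in hiring.
    screener = AISystem("cv-screener", False, False,
                        "employment_and_workforce", False)
    print(is_high_risk(screener))  # True

A screen like this is useful for triaging a large inventory; every system it flags still goes to legal review.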

Obligations for High-Risk AI: What Providers and Deployers Must Do

The Act draws a sharp distinction between providers (those who develop or place the system on the market) and deployers (organisations that use the system in their own context). Both carry significant obligations, and they differ.

Providers are responsible for the system across its entire lifecycle. This means:

  • Risk management system: Documented, continuous, covering the full system lifecycle.
  • Data governance: Robust controls over training, validation, and testing data.
  • Technical documentation: Detailed records sufficient for competent authority review.
  • Automatic logging: Systems must log events automatically to enable post-market monitoring (a logging sketch follows this list).
  • Human oversight: Design features that allow human intervention and override.
  • Accuracy, robustness, cybersecurity: Documented performance standards with ongoing monitoring.
  • Conformity assessment: Must be completed before placing the system on the EU market, including EU declaration of conformity, CE marking, and registration in the EU database.
  • Quality management system: Enables continuous compliance and corrective action.
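
The automatic-logging obligation lends itself to a short illustration. The Python sketch below writes one JSON line per decision event; the field names and retention approach are assumptions for illustration, since the Act sets the goal (traceability for post-market monitoring) rather than a log schema.

    # Illustrative sketch only: the Act requires automatic event logging for
    # traceability; the JSON fields and retention choices here are assumptions.
    import json
    import logging
    from datetime import datetime, timezone

    logger = logging.getLogger("ai_system_audit")
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler("ai_events.log")  # retention per your documented policy
    handler.setFormatter(logging.Formatter("%(message)s"))
    logger.addHandler(handler)

    def log_inference_event(system_id: str, model_version: str,
                            input_ref: str, output_ref: str, operator: str) -> None:
        """Record one decision event as a single machine-readable JSON line."""
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_id": system_id,
            "model_version": model_version,
            "input_ref": input_ref,    # a reference, not raw personal data
            "output_ref": output_ref,
            "operator": operator,      # who exercised human oversight
        }
        logger.info(json.dumps(event))

    log_inference_event("credit-scorer", "2.4.1", "case-001", "decision-001",
                        "analyst@example.com")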

Deployers must use high-risk AI systems strictly within the provider's instructions and implement technical and organisational measures, including trained human oversight. Where they control input data, they are responsible for its relevance and representativeness. They must monitor performance continuously, report serious incidents without delay, and retain system logs.

One intersection worth noting: where a deployer is already required to conduct a Data Protection Impact Assessment under GDPR, that assessment should incorporate the information the provider is required to supply under Article 13 of the AI Act. This creates a formal linkage between your AI compliance and your existing privacy programme.

Other actors in the value chain — importers and distributors — must also conduct verification checks before placing systems on the EU market. This includes confirming conformity assessments, technical documentation, declarations of conformity, CE marking, and appropriate storage and transport conditions.

Transparency Requirements: Beyond High-Risk Systems

Transparency obligations under Chapter IV apply more broadly. They are not limited to high-risk systems. Any organisation deploying AI in customer interactions, content generation, or automated decision support needs to assess these requirements.

The core obligations are:

  • Disclose AI interaction: Users must be clearly informed when they are communicating with an AI system.
  • Label synthetic content: AI-generated or manipulated content, including deepfakes and AI-generated text published on matters of public interest, must be marked in a machine-readable, detectable way where technically feasible.
  • Emotion recognition and biometric categorisation: Individuals must be informed when these systems are in use.

For practical compliance, this means reviewing every AI-enabled customer touchpoint, content workflow, and decision-support tool to determine whether transparency disclosures are triggered. The standard is clear, accessible, and timely information at the point of first interaction or exposure. Internal systems used for HR decisions or performance management also fall within scope.
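
As one illustration of machine-readable labelling, the Python sketch below wraps generated text in a simple provenance envelope and shows the matching detection check. The field names are assumptions for this example; in practice you would follow an established content-provenance standard rather than invent your own format.

    # Illustrative sketch only: the envelope fields are invented for this example;
    # real deployments would typically use an established provenance standard.
    import json

    def label_synthetic_content(text: str, generator_id: str) -> str:
        """Wrap generated text with a detectable, machine-readable marker."""
        return json.dumps({
            "content": text,
            "provenance": {"ai_generated": True, "generator_id": generator_id},
        })

    def is_ai_generated(serialized: str) -> bool:
        """Detection side: check the marker before republishing the content."""
        try:
            return bool(json.loads(serialized)["provenance"]["ai_generated"])
        except (ValueError, KeyError, TypeError):
            return False  # unlabelled or malformed: treat as unknown

    labelled = label_synthetic_content("Quarterly outlook summary...", "gen-model-7")
    print(is_ai_generated(labelled))  # True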

Enforcement, Fines, and National Variation

The Act sets maximum fine levels at EU level, but enforcement is delegated to national authorities, and Member States set the penalty rules for specific infringements within those maxima. This means the fine landscape will vary by jurisdiction, and organisations operating across multiple EU Member States need to track national implementing laws, not just the EU Regulation itself.

Italy provides an early example: Law No. 132/2025, in force since October 2025, sets fines up to EUR 774,685 for serious infringements and includes disqualifying measures such as suspension or revocation of licences and disqualification from conducting business for up to one year under Decree 231.

The enforcement picture will develop rapidly as Member States finalise their national frameworks ahead of August 2026. Compliance officers should monitor national legislation in each jurisdiction where their organisation operates AI systems, not just the EU Regulation text.

What to Do Before August 2026

The transitional timeline is short. Several obligations already apply before full applicability, and the enforcement infrastructure is being built now. A practical compliance programme should address the following in sequence:

  1. AI system inventory: Map every AI system your organisation develops, deploys, imports, or distributes. Determine the role (provider, deployer, importer, distributor) and the risk classification for each (one possible record format is sketched after this list).
  2. Prohibited practice screen: Apply Article 5 to each system. Document the assessment. Flag systems requiring legal opinion.
  3. High-risk gap analysis: For systems in Annex III categories, conduct a gap analysis against Articles 8-15 obligations. Prioritise by enforcement timeline and product criticality.
  4. Transparency audit: Review all AI-enabled customer, worker, and public-facing interactions for transparency disclosure gaps.
  5. GDPR linkage: Where DPIA obligations apply, integrate AI Act Article 13 information requirements into existing privacy processes.
  6. National law monitoring: Establish a standing process to track Member State implementing legislation in each relevant jurisdiction.
  7. Governance and documentation: Implement ongoing compliance records — risk management logs, transparency measure records, incident reports — and assign ownership.
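
For step 1, the Python sketch below shows one possible shape for an inventory record. The fields and enumerations are assumptions about what a useful record might capture; the Act does not prescribe an inventory format.

    # Illustrative sketch only: the Act does not prescribe an inventory format;
    # these fields are assumptions about what a useful record might capture.
    from dataclasses import dataclass, field
    from enum import Enum

    class Role(Enum):
        PROVIDER = "provider"
        DEPLOYER = "deployer"
        IMPORTER = "importer"
        DISTRIBUTOR = "distributor"

    class RiskBand(Enum):
        PROHIBITED = "prohibited"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"
        UNCLASSIFIED = "unclassified"  # default until the screens are done

    @dataclass
    class InventoryRecord:
        system_name: str
        business_owner: str
        role: Role
        risk_band: RiskBand = RiskBand.UNCLASSIFIED
        annex_iii_area: str | None = None
        assessments: list[str] = field(default_factory=list)  # links to documented reviews

    inventory = [
        InventoryRecord("cv-screener", "HR", Role.DEPLOYER, RiskBand.HIGH,
                        "employment_and_workforce", ["art5-screen-2026-01"]),
    ]
    print(len(inventory), "system(s) inventoried")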

Conclusion

The EU AI Act is not another GDPR countdown to manage reactively. The risk categories are product-specific, the obligations are technical and operational, and the enforcement will be national as well as supranational. Compliance officers who treat this as a legal-only exercise will find themselves exposed. The work is cross-functional — legal, IT, product, procurement, and HR all have a role.

The organisations that will be in the best position after August 2026 are those building structured AI compliance programmes now, not assembling documentation under deadline pressure. If your organisation has not yet mapped its AI system inventory and assessed its risk classifications, that is the place to start.
