

Your product team ships a new AI-powered hiring screening feature. It ranks candidates automatically based on CV data. It is running in production across three enterprise clients in Germany, France, and the Netherlands. Nobody ran a risk classification exercise before launch. There is no technical documentation file. The logging infrastructure captures model outputs but not the decision logic. You have no human override mechanism.
Under the EU AI Act, fully enforceable from August 2026, you have just deployed a high-risk AI system without a single required control in place. The fine ceiling is €15 million or 3% of global annual turnover for high-risk violations — and that is before GDPR exposure for automated decision-making under Article 22 is layered on top.
This guide is not a legal explainer. Every other resource covers what the EU AI Act requires. This is about what your engineering team must actually build, integrate, and maintain — the system inventory, the logging pipeline, the FRIA workflow, the human override architecture — before enforcement arrives.
The EU AI Act is a product safety regulation applied to artificial intelligence. It entered force in August 2024 with phased enforcement milestones running through 2027. The August 2026 date is the critical one for most engineering teams: it is when full compliance obligations for high-risk AI systems become enforceable.
The Act applies extraterritorially, mirroring GDPR’s territorial model. If your system is deployed in the EU or affects EU residents, the Act applies regardless of where your company is incorporated or where the model is hosted. A SaaS company based in Austin deploying an AI-powered credit scoring feature to European customers is a provider under the Act.
The Act distinguishes between two roles that require different compliance postures. Providers place AI systems on the EU market under their own name — they build or commission the system and ship it to customers. Deployers use AI systems in a professional context within the EU. If you build a model and license it to enterprise clients, you are a provider; the enterprise client that deploys it in its hiring process is a deployer. Both carry obligations; providers bear the heavier compliance burden.
General-purpose AI models — foundation models like LLMs — carry a separate set of obligations that have been effective since August 2025. If your company builds or fine-tunes a GPAI model that you make available to others, GPAI transparency and training data documentation requirements apply immediately, independent of how downstream deployers use the model.
Risk classification is the first technical decision, and it determines the entire compliance programme for a given system. The classification is determined by use case domain, not by model architecture, training methodology, or capability level. A transformer-based model used for spam filtering is minimal risk. The same model used to screen job applications is high-risk. The model did not change. The domain did.
Prohibited systems cannot legally operate in the EU market at all. The banned categories include social scoring systems that evaluate individuals based on behaviour or characteristics to determine access to services or opportunities; real-time remote biometric identification in public spaces (with narrow law enforcement exceptions); AI systems that exploit psychological vulnerabilities to manipulate behaviour; and emotion recognition in workplace and educational settings.
If your product roadmap includes features that touch these categories, the conversation is not about compliance controls — it is about product architecture. These features need to be removed or fundamentally redesigned before EU market entry.
High-risk status is triggered by use case domain. Annex III of the Act lists the categories: biometric identification and categorisation, critical infrastructure management, educational and vocational training (automated assessment of learning outcomes), employment and workforce management (CV screening, performance evaluation, task allocation), essential private and public services (credit scoring, insurance risk assessment), law enforcement, migration and asylum processing, and administration of justice.
This list contains the use cases that most B2B SaaS platforms with AI features are already building or have shipped. If your platform touches any of these domains for EU-based customers, you have high-risk obligations — regardless of whether you think of yourself as an AI company.
Systems in the limited-risk transparency tier must notify users that they are interacting with AI. Chatbots, AI-generated content, deepfakes, and emotion recognition systems in non-prohibited contexts fall here. The engineering requirement is disclosure infrastructure: users must be informed at the point of interaction, and AI-generated content must carry machine-readable labels.
Minimal risk is the default tier: spam filters, recommendation engines, productivity tools, and most AI features in consumer applications that do not touch the Annex III domains. There are no mandatory compliance obligations, though voluntary codes of conduct are encouraged. This is where the majority of AI features currently in production sit.
The most common classification mistake is reasoning from how the model works rather than from what the deployment context does. A CV parsing model sounds benign. When it ranks candidates and that ranking influences hiring decisions for EU-based employees, it is Annex III high-risk. The classification question is: what decision does this system inform or make, and in which domain?
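That classification question can be sketched as a lookup keyed by deployment domain. This is a minimal illustration, not a legal tool: the domain names and tier assignments below are shorthand invented for the example, not official Annex III wording.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical shorthand domains -- not official Annex III labels.
PROHIBITED_DOMAINS = {
    "social_scoring",
    "realtime_public_biometric_id",
    "workplace_emotion_recognition",
}
ANNEX_III_DOMAINS = {
    "employment_screening", "credit_scoring", "biometric_identification",
    "critical_infrastructure", "education_assessment", "law_enforcement",
    "migration_asylum", "administration_of_justice",
}
TRANSPARENCY_DOMAINS = {"chatbot", "generated_content", "deepfake"}

def classify(deployment_domain: str) -> RiskTier:
    """Classify by deployment domain, never by model architecture."""
    if deployment_domain in PROHIBITED_DOMAINS:
        return RiskTier.PROHIBITED
    if deployment_domain in ANNEX_III_DOMAINS:
        return RiskTier.HIGH
    if deployment_domain in TRANSPARENCY_DOMAINS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Same model, different tiers -- only the domain changed:
assert classify("spam_filtering") is RiskTier.MINIMAL
assert classify("employment_screening") is RiskTier.HIGH
```

The point of encoding the mapping is consistency: every new system goes through the same function, not through one engineer's reading of the Act.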
This is where the Act becomes concrete. Each requirement below is a system you need to build, not a document you need to file.
Before any other compliance work is possible, you need a complete inventory of every AI system in production or development — its risk classification, its use case domain, its data inputs, and its deployment context. Most engineering teams discover they have no centralised AI system inventory when this question surfaces in a governance context. Models are owned by different teams, deployed in different environments, and tracked in different tools. The inventory is the compliance foundation everything else depends on.
The inventory must be dynamic, not a spreadsheet updated quarterly. New models, fine-tuned versions, and API integrations with third-party AI services all need to be captured on intake, not discovered retrospectively.
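A dynamic inventory can be as simple as a registry with a mandatory intake record per system. The field names below are one possible schema, chosen to mirror the attributes the text lists — classification, domain, data inputs, deployment context — not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, List

@dataclass
class AISystemRecord:
    system_id: str
    owner_team: str
    use_case_domain: str
    risk_tier: str              # output of the classification workflow
    data_inputs: List[str]
    deployment_context: str     # e.g. "prod-eu", "staging"
    registered_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AISystemInventory:
    """Register every model, fine-tune, and third-party AI API at
    intake, so nothing has to be discovered retrospectively."""

    def __init__(self) -> None:
        self._records: Dict[str, AISystemRecord] = {}

    def register(self, record: AISystemRecord) -> None:
        self._records[record.system_id] = record

    def high_risk(self) -> List[AISystemRecord]:
        return [r for r in self._records.values() if r.risk_tier == "high"]
```

A third-party AI API integration gets registered through exactly the same intake path as an in-house model — that is what keeps it from being treated as just another software dependency.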
Article 9 of the Act requires a continuous risk management system spanning the entire AI lifecycle — from design through decommissioning. This is not a one-time risk assessment. It is a documented process that identifies risks, evaluates mitigation measures, implements those measures, and monitors their effectiveness after deployment.
In engineering terms: risk management needs to be embedded in your ML development lifecycle as a gate, not appended as a post-deployment checklist. The risk assessment output must be versioned alongside model versions — when you retrain, you reassess.
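One way to make that gate concrete is a release check that refuses to deploy a model version whose risk assessment covers a different version. This is a sketch under the assumption that assessment records are keyed by model version; the function name is illustrative.

```python
def deployment_gate(model_version, assessed_version):
    """Release gate: block promotion when the risk assessment does not
    cover the exact version being deployed. Retraining produces a new
    version, so it forces a reassessment before the next deploy."""
    if assessed_version != model_version:
        raise RuntimeError(
            "risk assessment covers %r but deploying %r: reassess first"
            % (assessed_version, model_version)
        )
```

Wired into CI/CD, this turns "we should reassess after retraining" from a policy statement into a failed pipeline step.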
Article 10 requires that training, validation, and test datasets meet quality criteria — relevance, representativeness, freedom from errors and biases, and coverage of the intended operating domain. Data provenance and lineage documentation is a hard requirement: you must be able to demonstrate where training data came from, what quality controls were applied, and what bias mitigation procedures were run. This documentation must be maintained and available for regulatory inspection.
Third-party datasets are not exempt from this obligation. If you fine-tune on Common Crawl, use a licensed corpus, or pull from a data vendor, you are responsible for verifying that the dataset meets Act requirements and documenting that verification.
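In practice this means every dataset — including third-party ones — carries a provenance record that captures source, licence, and the checks actually run. The schema and the vendor name below are hypothetical, shown only to indicate the kind of fields worth recording.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class DatasetProvenance:
    """One immutable provenance record per training/validation/test
    dataset, including third-party sources you did not collect."""
    dataset_id: str
    source: str                       # vendor, URL, or internal pipeline
    license_terms: str
    quality_checks: Tuple[str, ...]   # checks actually executed
    bias_mitigations: Tuple[str, ...]
    verified_by: str                  # who signed off the verification
    verified_on: str                  # ISO date of verification

record = DatasetProvenance(
    dataset_id="cv-corpus-2025q3",
    source="vendor: ExampleData GmbH",   # hypothetical vendor
    license_terms="commercial licence v2",
    quality_checks=("dedup", "label_audit"),
    bias_mitigations=("gender_balance_resample",),
    verified_by="data-governance@yourco.example",
    verified_on="2025-09-30",
)
```

The frozen dataclass is deliberate: a provenance record that can be mutated after sign-off is not evidence.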
High-risk AI systems require a technical documentation file that must be prepared before market deployment and maintained continuously thereafter. This is a living engineering artefact, not a one-time deliverable to legal. Required contents include: system architecture and design specifications, training methodology and dataset descriptions, performance metrics and testing results, known limitations and intended use scope, risk management documentation, and post-market monitoring plan.
The requirement for documentation to be kept machine-readable, timestamped, and continuously up to date is structurally incompatible with manual document management. AI governance framework tools that maintain the documentation file in sync with model versions and deployment state are the only operationally sustainable approach at any scale beyond a single model.
High-risk systems must automatically generate logs sufficient to enable post-hoc evaluation of system operation and the identification of risks throughout the system lifecycle. The specific minimum: logs must capture the reference data used by the system where possible, input data — at least in aggregate or statistical form where retaining full inputs is technically infeasible — and the decisions or outputs produced.
The logging infrastructure must be retained for the period defined in the technical documentation, and it must be accessible to supervisory authorities on request. This is not application logging for debugging. It is audit-grade evidence of system operation.
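A minimal sketch of what one audit-grade log record might look like, under the assumptions the text describes: where retaining full inputs is infeasible, the record keeps a hash plus an aggregate view instead of the raw payload. Field names are illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone

def decision_log_entry(model_id, model_version, inputs, output,
                       reference_data_id=None):
    """One audit-grade record per decision: timestamp, exact model
    version, input fingerprint, output, and any reference data used."""
    canonical = json.dumps(inputs, sort_keys=True).encode("utf-8")
    return {
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "input_sha256": hashlib.sha256(canonical).hexdigest(),
        "input_fields": sorted(inputs),          # aggregate view of inputs
        "output": output,
        "reference_data_id": reference_data_id,  # reference data, where used
    }
```

Sorting keys before hashing makes the fingerprint stable regardless of field order, so the same input always produces the same evidence trail.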
Article 14 requires that high-risk systems are designed and developed in a way that allows natural persons to effectively oversee the system during its operation. This is an architectural requirement, not a policy statement. The system must have mechanisms that allow operators to interrupt operation, override outputs, and escalate edge cases to human review.
In practice, this means: override controls that actually work in production, not just in a test environment; escalation paths for low-confidence outputs or edge cases that the model was not trained for; and operator instructions that document what intervention looks like and when it should be triggered.
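The routing logic behind those controls can be sketched as a single decision wrapper. The confidence threshold is an assumption — each system sets its own based on its risk profile — and the function names are illustrative.

```python
CONFIDENCE_FLOOR = 0.80  # assumed threshold; tune per system risk profile

def route_decision(prediction, confidence, interrupted=False):
    """Returns (route, output). Operator interrupts halt automation
    entirely; low-confidence outputs escalate to a human review queue."""
    if interrupted:
        return ("halted", None)
    if confidence < CONFIDENCE_FLOOR:
        return ("human_review", prediction)   # surfaced, never auto-acted on
    return ("automated", prediction)

assert route_decision("reject", 0.95) == ("automated", "reject")
assert route_decision("reject", 0.55) == ("human_review", "reject")
assert route_decision("reject", 0.95, interrupted=True) == ("halted", None)
```

The important property is that the interrupt and escalation paths live in the production decision path itself, not in a side channel that only exists in staging.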
Before market entry, high-risk systems must undergo a conformity assessment demonstrating compliance with Act requirements. For most Annex III categories, self-assessment is permitted — but it must produce documented evidence. Biometric identification systems require third-party assessment. The conformity assessment is not a one-time exercise: whenever the system is substantially modified, re-assessment is required. Fine-tuning on new data, changing the intended use scope, or deploying to a new domain all trigger re-assessment.
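The re-assessment triggers the text lists can be checked mechanically by diffing the live system's state against the state the last assessment covered. The trigger field names below are assumptions chosen to match the examples in the text.

```python
REASSESSMENT_TRIGGERS = ("training_data_hash", "intended_use",
                         "deployment_domain")

def needs_reassessment(deployed, last_assessed):
    """True when any substantial-modification trigger has drifted from
    the state covered by the last conformity assessment."""
    return any(deployed.get(k) != last_assessed.get(k)
               for k in REASSESSMENT_TRIGGERS)

assessed = {"training_data_hash": "abc1",
            "intended_use": "cv screening",
            "deployment_domain": "employment_screening"}
assert not needs_reassessment(dict(assessed), assessed)
assert needs_reassessment({**assessed, "training_data_hash": "def2"},
                          assessed)
```

Run as a scheduled check or a pipeline step, this is what turns "re-assessment on substantial modification" from a policy into an alert.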
Running AI in production without these controls in place?
Get an EU AI Act readiness assessment from Secure Privacy →
The compliance failures we will see in 2026 enforcement actions will not be teams that tried to comply and got the documentation wrong. They will be teams that never classified their systems, never built the logging infrastructure, and shipped without governance gates in their development process.
The missing inventory is the most common structural gap. Models are deployed by different teams without central visibility. Third-party AI API integrations are treated as software dependencies rather than AI systems requiring classification. Shadow AI — employees using external AI tools that touch production data — creates compliance exposure that no engineering team is currently measuring. You cannot classify what you cannot find.
Most ML teams maintain training data in data lakes or feature stores that have no connection to the consent infrastructure governing that data. When a user withdraws consent under GDPR, that withdrawal has no mechanism to propagate to the training pipeline. AI training data consent management requires a connection between your CMP and your training data governance layer that most organisations have never built — because it crosses a team boundary that nobody owns.
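The simplest version of that missing connection is a filter applied at pipeline read time: the training job queries the current withdrawal list from the CMP and excludes those records before anything is trained. The record shape here is a hypothetical illustration.

```python
def eligible_training_records(records, withdrawn_subject_ids):
    """Filter the training set against current consent state at pipeline
    read time, so a CMP withdrawal propagates to the next training run
    instead of being silently ignored."""
    withdrawn = set(withdrawn_subject_ids)
    return [r for r in records if r["subject_id"] not in withdrawn]

records = [{"subject_id": "u1", "cv": "..."},
           {"subject_id": "u2", "cv": "..."}]
assert [r["subject_id"]
        for r in eligible_training_records(records, {"u2"})] == ["u1"]
```

The filter is trivial; the organisational work is giving the training pipeline a live feed of consent state at all — the team boundary the text describes.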
Continuous training pipelines — the systems that retrain models on new data automatically or periodically — are often invisible to compliance processes. A model version that was assessed and documented when it launched may be materially different six months later after several retraining cycles. The Act’s requirement to maintain documentation in sync with the actual deployed system means retraining must trigger documentation updates. This requires governance hooks in the ML pipeline that most teams have not built.
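A governance hook of that kind can be a callback the training pipeline fires on completion, which opens the documentation and reassessment tasks instead of silently shipping the new version. Class and task names below are illustrative.

```python
class RetrainGovernanceHooks:
    """Pipeline hook: every completed retrain opens documentation and
    reassessment tasks rather than silently shipping a new version."""

    def __init__(self):
        self.pending_tasks = []

    def on_retrain_complete(self, model_id, new_version, data_snapshot_id):
        self.pending_tasks.append({
            "model_id": model_id,
            "version": new_version,
            "data_snapshot": data_snapshot_id,
            "tasks": ["update technical documentation file",
                      "re-run risk assessment",
                      "check substantial-modification triggers"],
        })

hooks = RetrainGovernanceHooks()
hooks.on_retrain_complete("cv-ranker", "1.8.0", "snap-2026-02")
assert hooks.pending_tasks[0]["version"] == "1.8.0"
```

In a real pipeline the callback would write to a ticketing system or governance platform; the point is that the retrain event itself is the trigger.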
The instinct is to treat EU AI Act compliance as a documentation project — produce the technical file, write the risk assessment, update the privacy policy. But documentation that is not generated from the actual system state is stale from the moment it is written. Records of processing automation — the same principle applied to GDPR Article 30 — applies here: compliance evidence must be derived from live system state, not assembled manually. A risk assessment that was accurate when you shipped version 1.0 does not describe version 1.8.
Regulatory observability is the engineering practice of building systems with the instrumentation needed to answer regulatory questions from production data — not from documentation. It means logging designed for audit readiness, not just debugging. It means model registries that capture the state of every deployed version. It means provenance tracking that can trace a model output back to training data sources. It is the infrastructure gap between "we are compliant" and "we can prove we were compliant at any given point in time."
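Provenance tracing of that kind reduces to keeping a lineage map from deployed model versions to training data sources, and joining log records against it. The registry below is a hypothetical in-memory stand-in for a real model registry.

```python
# Hypothetical lineage registry: (model_id, version) -> data sources.
MODEL_LINEAGE = {
    ("cv-ranker", "1.8.0"): ["cv-corpus-2025q3", "vendor-x-labels"],
}

def trace_to_training_data(model_id, model_version):
    """Answer the audit question 'what data trained the model that
    produced this output?' from registry state, not from documents."""
    return MODEL_LINEAGE.get((model_id, model_version), [])

assert trace_to_training_data("cv-ranker", "1.8.0") == [
    "cv-corpus-2025q3", "vendor-x-labels"]
```

Because every audit-grade log record carries the exact model version, any logged decision can be joined back through this map to its training data sources — that is the "prove we were compliant at a point in time" property.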
The compliance challenges at the intersection of the EU AI Act and GDPR are not theoretical. They converge on the same systems, require the same data infrastructure, and create compounded exposure when either framework is violated. High-risk AI systems that process personal data must satisfy both simultaneously.
GDPR requires a documented lawful basis for every personal data processing activity. Legitimate interests — the basis most AI teams assume covers training data collection — requires a balancing test that must be documented at the time of the decision. The EDPB has repeatedly found that the scale and opacity of AI training collection makes this test increasingly difficult to pass. If your training data includes personal data of EU residents without documented lawful basis, you have simultaneous GDPR and AI Act exposure.
GDPR Article 22 restricts solely automated individual decision-making with legal or similarly significant effects. The EU AI Act’s high-risk category for employment and credit decisions overlaps almost exactly with Article 22’s scope. Both frameworks require human oversight mechanisms and explainability — but the specific requirements differ. Article 22 requires the ability to obtain human review of a specific decision. The AI Act requires human oversight of the system itself. Both must be satisfied, and the architecture that satisfies one often does not automatically satisfy the other.
Data subjects have the right to access, correct, and erase their personal data. When that data has been used to train a model, erasure becomes technically non-trivial — the model weights embed statistical patterns from the training data in ways that cannot be surgically removed. The practical answer is prevention: automated DPIA processes that assess the rights implications of training data inclusion before the data enters the pipeline, not after the model is in production.
GDPR’s Article 25 requirement for privacy by design and default applies to AI systems in the same way it applies to any data processing architecture. Data minimisation, purpose limitation, and access controls need to be designed into the system from inception — not retrofitted after a DPA inquiry. For AI systems, this means training data minimisation, feature selection that avoids processing personal data where statistical patterns can be learned without it, and inference-time controls that limit what data the model is exposed to.
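Training data minimisation can start as mechanically as stripping direct identifiers before a record reaches the training or inference pipeline. The identifier list is an assumption for illustration; a real deployment would derive it from a data classification catalogue.

```python
DIRECT_IDENTIFIERS = {"name", "email", "phone", "national_id"}

def minimise_features(record):
    """Drop direct identifiers before the record reaches training or
    inference: the statistical pattern can be learned without them."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

row = {"name": "A. Candidate", "email": "a@example.org",
       "years_experience": 6, "skills": ["python"]}
assert minimise_features(row) == {"years_experience": 6,
                                  "skills": ["python"]}
```

Applying this at ingestion rather than at training time means the identifiers never enter the feature store in the first place — design, not retrofit.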
| Milestone | Date | What Becomes Enforceable |
|---|---|---|
| Prohibited AI practices ban | February 2, 2025 | All systems in banned categories must be withdrawn or shut down immediately |
| GPAI model obligations | August 2, 2025 | Foundation model providers must maintain technical documentation and training data summaries |
| High-risk AI full enforcement | August 2, 2026 | Annex III systems require full conformity assessment, technical documentation, logging, and human oversight |
| Annex I product safety systems | August 2, 2027 | AI embedded in regulated products (medical devices, vehicles, industrial machinery) requires full compliance |
Fine exposure under the AI Act runs on top of GDPR exposure, not instead of it. The penalty structure from 2026 is: €35 million or 7% of global annual turnover for prohibited AI practices; €15 million or 3% for high-risk violations; €7.5 million or 1% for supplying inaccurate information to authorities — in each case whichever is higher for an undertaking. For a company with €50 million in annual revenue, 3% of turnover is only €1.5 million, so the fixed €15 million amount sets the ceiling for a high-risk violation.
The EU AI Office has signalled that initial enforcement will prioritise systems in the highest-impact domains: employment, credit, and law enforcement. Cross-border coordination among national market surveillance authorities means that a complaint filed in any member state can trigger an investigation with EU-wide scope. Companies that have deployed unclassified AI features in these domains are the first targets.
| | Spreadsheets and Static Docs | Governance Platform |
|---|---|---|
| AI system inventory | Point-in-time list; outdated immediately | Dynamic registry updated on model intake |
| Risk classification | Manual; dependent on individual judgment | Workflow-guided; consistently applied across teams |
| Training data documentation | Written once; disconnected from pipeline state | Generated from actual data sources; versioned with model |
| Logging and audit trail | Application logs; not audit-grade | Compliance-grade logs; queryable by system, version, and date |
| DPIA / FRIA | Manual; infrequent; owned by legal | Automated trigger on high-risk classification; owned by engineering and legal jointly |
| Conformity assessment | Assembled before launch; never updated | Maintained continuously; linked to model version history |
| GDPR / AI Act overlap | Handled separately by different teams | Unified evidence layer serves both frameworks |
The manual approach has a structural failure mode that the table does not fully capture: it makes compliance invisible to engineering until it becomes a legal problem. When the model your team trained six months ago is now running in production with different data and different behaviour, nobody outside the ML team knows. A governance platform makes AI system state visible to the compliance, legal, and security teams who need to act on it — not just to the engineers who built it.
Build EU AI Act compliance into your engineering stack from day one. Book a Secure Privacy AI governance demo →
Yes, the Act applies to companies based outside the EU. It applies extraterritorially in the same way GDPR does: any company placing an AI system on the EU market or deploying one that affects EU residents must comply, regardless of where the company is incorporated or where the model is hosted.
High-risk status is determined by deployment domain under Annex III, not by model capability. Employment screening, credit scoring, educational assessment, biometric identification, critical infrastructure management, law enforcement, and administration of justice are statutory high-risk categories. If your system informs or automates decisions in these domains for EU customers, it is high-risk.
For open-source models, it depends on the use. Open-source GPAI models released with publicly accessible weights are largely exempt from GPAI model obligations, though they remain subject to prohibited practice rules. If an open-source model is integrated into a product deployed in a high-risk domain, the deployer carries full high-risk compliance obligations for that deployment.
GPAI model providers must maintain training data summaries identifying major data sources and implement copyright compliance policies. Where training data includes personal data of EU residents, GDPR lawful basis requirements apply simultaneously. Consent management for AI training data — specifically the connection between CMP records and training pipeline governance — is a compliance infrastructure gap that most teams will need to close before the next major training run.
For high-risk systems: a technical documentation file covering system architecture, training methodology, performance metrics, known limitations, and risk management documentation; logging infrastructure generating audit-grade evidence of system operation; a conformity assessment; and a post-market monitoring plan. For GPAI models: technical documentation for the EU AI Office and publicly accessible training data summaries.
August 2, 2026 for Annex III high-risk systems. The European Commission proposed a "Digital Omnibus" package in late 2025 that could extend this deadline to December 2027 for some Annex III categories, but prudent compliance planning treats August 2026 as the binding date — the extension has not been confirmed and may not be.
The frameworks overlap on automated decision-making, training data lawful basis, data subject rights, and DPIA obligations. They require the same underlying infrastructure — documented processing records, risk assessments, human oversight mechanisms — but apply different specific requirements to that infrastructure. ISO 42001 certification is emerging as the governance framework that demonstrates systematic compliance with both the EU AI Act and GDPR simultaneously, serving as documentary evidence for regulators across both frameworks.