The high-risk deadline just moved to 2027. But two obligations are already in force — and most European companies are ignoring both.
I've spent the last twenty years watching companies ignore compliance deadlines for two years, then scramble in the six months before enforcement begins. The EU AI Act is following exactly that pattern — with one twist that changes everything.
In December 2025, the European Commission proposed pushing the main high-risk AI compliance deadline from August 2026 to December 2027. That sounds like good news. But while everyone was focused on the moved deadline, two earlier waves of the AI Act quietly went into force — and most businesses still haven't acted on either of them.
This is a business translation, not a legal brief. I'm not a lawyer. What I am is someone who's implemented AI systems inside regulated companies and watched what happens when executives leave this to the legal team without anyone who actually understands how AI works. Let's fix that.
Two waves of the EU AI Act are already active. If your company uses any AI — including off-the-shelf tools like ChatGPT, Copilot, or any AI-assisted HR, finance, or legal software — you likely have obligations right now.
| Date | What Happened / What's Coming | Status |
|---|---|---|
| Feb 2, 2025 | Prohibited AI practices banned. AI literacy training required for all staff using AI. | ● IN FORCE |
| Aug 2, 2025 | General-purpose AI (GPT, Gemini, Claude) obligations active. Penalty regime begins. AI Office operational. | ● IN FORCE |
| Aug 2, 2026 | Original high-risk systems deadline. Now proposed to move. | ⏳ BEING REVISED |
| Dec 2, 2027 | New proposed deadline: Annex III high-risk systems (employment, credit, healthcare, education AI). | ⏳ PROPOSED (not final) |
| Aug 2, 2028 | New proposed deadline: Annex I systems (critical infrastructure, biometrics, law enforcement AI). | ⏳ PROPOSED (not final) |
The delay is part of the EU's "Digital Omnibus" package proposed December 2025. The new deadlines are tied to the completion of technical harmonised standards (CEN-CENELEC work expected late 2026). The delay proposal still needs EU Parliament and Council approval — so August 2026 is not formally cancelled yet. Plan as if both timelines are possible.
This is the part most businesses have missed entirely.
Article 4 of the EU AI Act requires that companies ensure all personnel who operate or use AI systems have sufficient AI literacy. This is not a future obligation. It has been in force for over a year.
What does "sufficient AI literacy" mean in practice? Your staff should understand what the AI tool does, what it cannot do, how to spot errors, and the legal and ethical implications of acting on its outputs. "The AI said so" is not a valid defence in a compliance context.
If you're using Copilot for contract review, ChatGPT for financial analysis, or any AI tool in an HR or hiring process — this applies to you, today.
If your company provides or deploys general-purpose AI models — meaning the large language models like GPT-4, Gemini, Claude, or any fine-tuned version of these — you have transparency and documentation obligations that have been active since August 2025.
If you're a user of these tools (not the provider), your main obligation is knowing you're using them and being able to account for it. If you're building products on top of them, the obligations are significantly higher.
The AI Act categorises AI systems into four tiers. Knowing which tier your tools fall into is the first thing any compliance effort needs to establish.
Prohibited (Banned Entirely): Social scoring, predictive policing, real-time biometric surveillance in public spaces, AI that manipulates people through subliminal techniques. If you're using any of these, stop. The fines here go up to 7% of global annual turnover.
High-Risk (Strict Compliance Required): This is where most regulated-industry companies will find themselves. High-risk AI includes: any AI used in hiring or managing employees, AI that makes or influences credit decisions, AI used in healthcare diagnosis or treatment, AI used in education assessment, and AI used in access to essential services like insurance or housing. Full compliance requirements apply — risk assessments, documentation, human oversight, conformity assessments, EU database registration.
Limited Risk: AI that interacts with humans (chatbots, customer service AI) must disclose that it's AI. Content generated by AI should be labelled. These are transparency requirements, not structural compliance ones.
Minimal Risk: Most AI tools fall here. General productivity AI, spam filters, recommendation engines, inventory optimisation. Good governance applies, but no mandatory compliance obligations.
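The triage logic above can be sketched as a simple lookup. This is a keyword sketch for building a first-pass inventory, not a legal test — the keyword lists are illustrative, and edge cases belong with counsel:

```python
# Indicative triage of AI use cases into the AI Act's four risk tiers.
# Keyword lists are illustrative examples drawn from the Act's categories,
# not an exhaustive or authoritative classification.

PROHIBITED = {"social scoring", "predictive policing", "biometric surveillance"}
HIGH_RISK = {"hiring", "employee management", "credit decision",
             "healthcare diagnosis", "education assessment",
             "insurance access", "housing access"}
LIMITED_RISK = {"chatbot", "customer service", "content generation"}

def classify(use_case: str) -> str:
    """Return an indicative AI Act risk tier for a described use case."""
    text = use_case.lower()
    if any(k in text for k in PROHIBITED):
        return "prohibited"
    if any(k in text for k in HIGH_RISK):
        return "high-risk"
    if any(k in text for k in LIMITED_RISK):
        return "limited-risk"
    return "minimal-risk"

print(classify("Copilot used to screen candidates in hiring"))  # high-risk
print(classify("inventory optimisation for the warehouse"))      # minimal-risk
```

Note that the same tool can land in different tiers depending on use: Copilot drafting emails is minimal risk; Copilot screening job candidates is high-risk.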
| Violation | Maximum Fine |
|---|---|
| Using a prohibited AI system | €35 million or 7% of global annual turnover |
| Non-compliant high-risk system | €15 million or 3% of global annual turnover |
| False or incomplete documentation | €7.5 million or 1% of global annual turnover |
For SMBs, fines are calculated as whichever is lower — the percentage or the fixed amount. That's the proportionality carve-out for smaller companies. But "lower fine" and "no fine" are very different things.
Enforcement is through national market surveillance authorities. In Portugal, that will be through the relevant sectoral regulators. In Poland, the Office of Competition and Consumer Protection (UOKiK) has been designated. Member state readiness varies considerably — but that's not a long-term protection strategy.
Whether the high-risk deadline is August 2026 or December 2027, the preparation work is identical. The difference is how much time pressure you have. Start here:

1. Build an AI inventory: every tool in use, who uses it, and for what.
2. Classify each use case against the four risk tiers above.
3. Put a governance policy in place and run AI literacy training for staff.
4. Document vendor accountability for every third-party AI tool you deploy.
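The inventory itself can be as simple as a spreadsheet. A minimal sketch of one record, in Python; the field names are my own suggestion, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolRecord:
    """One row in an AI inventory: who uses what, for what, at which tier.

    Field names are illustrative, not a regulatory template.
    """
    tool: str                # e.g. "ChatGPT", "Copilot"
    provider: str            # vendor accountable for the underlying model
    business_use: str        # what the tool is actually used for
    risk_tier: str           # prohibited / high-risk / limited-risk / minimal-risk
    departments: list[str] = field(default_factory=list)
    staff_trained: bool = False        # Article 4 AI literacy obligation met?
    vendor_docs_on_file: bool = False  # documented vendor accountability?
    last_reviewed: date = field(default_factory=date.today)

inventory = [
    AIToolRecord(
        tool="Copilot",
        provider="Microsoft",
        business_use="drafting and summarising contracts",
        risk_tier="minimal-risk",
        departments=["Legal"],
        staff_trained=True,
    ),
]
```

The point is not the tooling; it is that every AI use in the company is written down somewhere with an owner, a tier, and a training status.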
Here's the thing about compliance deadlines in Europe: the ones that matter are never the ones that slip. The penalty regime started in August 2025. The AI Office is operational. The prohibited practices ban is in force. The "everything moves to 2027" narrative is only true for one specific category of high-risk systems — and it's still a proposal, not a law.
Companies that use the delay as an excuse to do nothing are making a mistake. Companies that use it as breathing room to build proper compliance infrastructure — an AI inventory, a governance policy, staff training, documented vendor accountability — will be in a fundamentally different position in 2027 than the ones who procrastinated.
The businesses that move now are also the ones that get to define what AI-responsible looks like in their industry in Europe. That's not a compliance advantage — it's a competitive one. Clients, partners, and regulators notice when a company can demonstrate it has done the work.
I've worked in and around businesses long enough to know that "we have until 2027" is exactly the kind of sentence that turns into a €15 million problem. The history of digital transformation is full of companies that waited for certainty before acting. The EU AI Act's certainty is already here — it's just spread across three waves. Two of them already landed. The third is two years away. Use those two years wisely.
Does it apply to companies your size? Yes, with proportionality. The AI Act applies to any company using AI that is placed on the EU market or affects EU residents, regardless of company size or country of origin. SMEs get proportional fines (the lower of the percentage or the fixed amount) and some simplified pathways, but the obligations still apply.
Do you have obligations even if you only use, rather than build, these tools? Yes. Since August 2025, if your company uses general-purpose AI models commercially, you need to be able to account for that usage. OpenAI, Microsoft, and Google are the providers — but you're the deployer, and deployers have their own obligations: knowing what you're deploying, documenting how it's used, and ensuring your staff are trained on its limitations.
Has the August 2026 deadline formally moved? Not yet. The Digital Omnibus proposal to delay it is still being reviewed by the EU Parliament and Council. Until it's formally adopted, August 2026 remains the official deadline for high-risk systems. Plan for both scenarios.
Cost is rarely the real obstacle. The AI inventory is free. The risk classification can be done with a good lawyer and a few hours. AI literacy training can be done through existing providers. The biggest bottleneck isn't money — it's the decision to start. Most companies delay because the scope feels overwhelming; breaking it into the four steps above makes it manageable.
I work with SMBs and regulated industry companies across Portugal and Poland to implement AI responsibly — including EU AI Act readiness. If you're not sure where your company stands, a 30-minute conversation is usually enough to map it out.
Book a Free Diagnostic Call