The EU AI Act: Most Companies Still Aren’t Ready
Research suggests a worrying gap between regulation and reality.
Deloitte surveys of European organisations (2024–2025) revealed widespread uncertainty, with many respondents reporting low confidence in understanding and implementing the EU AI Act’s implications.
McKinsey & Company’s 2025 State of AI survey shows AI adoption accelerating rapidly across sectors, yet governance maturity lagging well behind deployment.
And other industry analysis indicates that while boards are increasingly aware of AI risk, operational teams often lack clarity on what compliance frameworks require in practice.
In short: AI use is growing fast. Regulatory understanding is not.
That gap is now a risk.
A quick reminder: what is the EU AI Act?
The EU AI Act is the world’s first comprehensive AI regulation. It applies a risk-based framework to AI systems, categorising them from minimal risk through to high risk and unacceptable risk.
High-risk systems — such as AI used in recruitment, credit scoring, critical infrastructure or healthcare — will soon have to meet strict requirements (with main obligations applying from August 2026) around:
- Risk management
- Data governance
- Transparency
- Human oversight
- Accuracy and robustness
It is not optional guidance. It is binding law.
According to reporting from Capgemini, many organisations experimenting with generative AI have yet to formally assess regulatory exposure.
Meanwhile, research by IBM on AI governance shows that only a minority of companies have enterprise-wide AI policies in place, despite widespread deployment.
If you do not know where AI is being used in your organisation, you cannot assess whether it falls into a regulated category.
And under the EU AI Act, ignorance is not a defence.
Does it only affect EU companies?
No.
Like GDPR, the EU AI Act has extraterritorial reach.
If an organisation outside the EU provides AI systems used within the EU — or whose outputs affect individuals in the EU — it may be subject to the Act.
That means:
- A US SaaS provider selling AI tools into Europe
- An Australian company using AI to assess EU-based applicants
- A UK-based platform deploying AI to EU customers
All may fall within scope.
This will increasingly affect procurement. EU-based organisations will need to ensure that vendors outside the EU can demonstrate compliance. Contracts, audit rights and documentation standards will shift accordingly.
The risks of not knowing
There are obvious financial penalties. The Act allows for steep administrative fines tied to global turnover: up to €35 million or 7% of global annual turnover (whichever is higher) for prohibited AI practices, and up to €15 million or 3% for most other violations, such as breaches of high-risk obligations.
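The "whichever is higher" mechanic matters: for large firms, the turnover percentage quickly overtakes the fixed cap. A minimal sketch of that calculation, using the tier figures summarised above (the tier names and function are illustrative, not terms from the Act):

```python
# Sketch: the maximum EU AI Act fine for a violation tier is the HIGHER of
# a fixed cap and a percentage of global annual turnover (figures as above).

FINE_TIERS = {
    # tier label: (fixed cap in EUR, share of global annual turnover)
    "prohibited_practice": (35_000_000, 0.07),
    "other_violation": (15_000_000, 0.03),
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum fine: the higher of the cap or the turnover share."""
    cap, share = FINE_TIERS[tier]
    return max(cap, share * global_annual_turnover_eur)

# A company with €1bn global turnover facing a prohibited-practice fine:
print(max_fine("prohibited_practice", 1_000_000_000))  # 70000000.0 (7% exceeds the €35m cap)
print(max_fine("other_violation", 100_000_000))        # 15000000.0 (fixed cap applies)
```

For smaller companies the fixed cap dominates; for multinationals the percentage does, which is why exposure scales with group-level, not entity-level, turnover.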
But the deeper risks are strategic.
- Procurement exclusion: Public bodies and large enterprises will require compliance evidence. Suppliers who cannot demonstrate governance maturity will lose out.
- Contractual liability: If an AI system causes harm or breaches regulatory requirements, liability questions will land quickly.
- Operational disruption: High-risk AI systems may need redesign or suspension if they fail to meet requirements.
- Reputational damage: The Act is grounded in protection of fundamental rights. Non-compliance will not be seen as a technical oversight. It will be viewed as governance failure.
Why this regulation exists
The spirit of the EU AI Act is not anti-innovation. It is about building trust.
AI systems are now embedded in hiring decisions, lending, infrastructure management, public services and healthcare.
Without transparency and oversight:
- Bias can scale
- Decisions can become opaque
- Harm can become systemic
The EU has decided that AI is now infrastructure-level technology. Infrastructure is regulated.
The most important shift
The most significant impact of the EU AI Act is not the fine levels. It is the governance expectation.
AI can no longer sit invisibly inside systems.
Organisations must:
- Map where AI is used
- Classify risk levels
- Document processes
- Assign accountability
- Monitor performance
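The steps above amount to maintaining an internal AI-system register. A minimal sketch of what such a register might look like — the field names and risk labels here are illustrative assumptions, not terms prescribed by the Act:

```python
# Sketch of an internal AI-system register supporting the governance steps
# above: mapping systems, classifying risk, assigning accountability, and
# flagging gaps. Field names and labels are illustrative, not mandated.

from dataclasses import dataclass

RISK_LEVELS = ("minimal", "limited", "high", "unacceptable")

@dataclass
class AISystemRecord:
    name: str                 # where AI is used in the organisation
    use_case: str
    risk_level: str           # one of RISK_LEVELS
    owner: str                # accountable person or team
    documented: bool = False  # process documentation in place?
    monitored: bool = False   # performance monitoring in place?

    def __post_init__(self):
        if self.risk_level not in RISK_LEVELS:
            raise ValueError(f"Unknown risk level: {self.risk_level}")

def high_risk_gaps(register: list[AISystemRecord]) -> list[str]:
    """Names of high-risk systems lacking documentation or monitoring."""
    return [r.name for r in register
            if r.risk_level == "high" and not (r.documented and r.monitored)]

register = [
    AISystemRecord("cv-screener", "recruitment shortlisting", "high", "HR Ops"),
    AISystemRecord("support-chatbot", "customer FAQ", "limited", "CX Team",
                   documented=True, monitored=True),
]
print(high_risk_gaps(register))  # ['cv-screener']
```

Even a spreadsheet with these columns is a meaningful first step: the point is that every deployed system has a classification and an accountable owner on record.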
For many companies, the biggest exposure right now is not deliberate non-compliance.
It is a lack of awareness.
The research is clear: AI adoption is accelerating. Governance maturity is uneven. Regulatory understanding is patchy.
That combination creates risk.
As of February 2026, prohibitions and GPAI rules are already in effect, while the bulk of high-risk obligations apply from 2 August 2026 — leaving organisations just months to prepare.
The organisations that treat this seriously now will not just avoid penalties. They will be able to demonstrate control, transparency and trust — which will increasingly become competitive advantages in their own right.
Starting an AI inventory and risk classification exercise now can turn compliance from a burden into a differentiator.


