Published on 2/2/2025 | 4 min read
As of February 2, 2025, the European Union (EU) has begun enforcing the first obligations of its comprehensive AI regulatory framework, the EU AI Act. This landmark legislation, which entered into force on August 1, 2024, aims to establish strict AI governance standards that mitigate the risks associated with artificial intelligence (AI) systems.
The February 2 compliance deadline focuses on AI systems deemed to pose unacceptable risk, with strict penalties for non-compliance. Companies operating within or serving the EU market must adhere to the EU's AI rules or face significant fines.
Breakdown of the EU AI Act’s Risk Levels
The AI governance framework classifies AI applications into four categories based on their potential risks:
Minimal Risk AI Systems: These include AI applications such as email spam filters, which do not require regulatory oversight.
Limited Risk AI Systems: AI tools like customer service chatbots fall under this category and must comply with transparency obligations.
High-Risk AI Systems: Examples include AI for healthcare recommendations or AI in critical infrastructure. These require stringent AI risk management and compliance measures.
Unacceptable Risk AI Systems: These are prohibited due to their potential harm to individuals and society.
AI Systems Banned Under the EU AI Act
The February 2 compliance deadline primarily targets unacceptable risk AI applications, which are explicitly banned. These include:
AI used for social scoring, such as building risk profiles based on personal behavior.
AI that manipulates human decisions subliminally or deceptively.
AI that exploits vulnerable populations, including children, the elderly, or individuals with disabilities.
AI attempting to predict criminal behavior based solely on appearance.
AI using biometric data to infer personal characteristics (e.g., sexual orientation or political beliefs).
AI collecting real-time biometric data in public spaces for law enforcement.
AI that tracks emotions in workplaces and educational institutions.
AI that expands facial recognition databases by scraping online images.
Non-Compliance Penalties and Enforcement
Companies that continue to use these banned AI applications will face heavy fines. The penalties include:
Fines of up to €35 million (~$36 million), or
Up to 7% of the company's global annual revenue, whichever is greater (a short illustrative sketch follows this list).
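To make the "whichever is greater" rule concrete, here is a minimal, purely illustrative Python sketch. The function name and revenue figures are hypothetical examples of our own, and this is not legal advice.

```python
# Illustrative only: the EU AI Act caps fines for prohibited-AI violations at
# EUR 35 million or 7% of global annual revenue, whichever is greater.

def max_fine_eur(annual_revenue_eur: float) -> float:
    """Return the upper bound of the fine under the 'whichever is greater' rule."""
    FLAT_CAP_EUR = 35_000_000   # fixed ceiling: EUR 35 million
    REVENUE_SHARE = 0.07        # revenue-based ceiling: 7% of annual revenue
    return max(FLAT_CAP_EUR, REVENUE_SHARE * annual_revenue_eur)

# A company with EUR 200 million in revenue: 7% is EUR 14 million,
# so the EUR 35 million flat cap applies.
print(max_fine_eur(200_000_000))    # 35000000

# A company with EUR 1 billion in revenue: 7% is EUR 70 million,
# which exceeds the flat cap, so the revenue-based figure applies.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```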
While organizations are expected to comply immediately, full enforcement and penalties will not take effect until August 2025. According to Rob Sumroy, head of technology at the British law firm Slaughter and May, the August deadline is when enforcement mechanisms will be clarified and the competent authorities responsible for oversight will be designated.
AI Industry Response: The EU AI Pact
In anticipation of the compliance deadline, over 100 companies, including Amazon, Google, and OpenAI, signed the EU AI Pact in September 2024. Under this voluntary pledge, signatories commit to proactively aligning with the Act's requirements ahead of enforcement.
However, major players such as Meta, Apple, and Mistral AI have notably declined to sign the Pact, raising questions about their compliance strategies, even though the Act's legal requirements apply to them regardless.
Exceptions and Upcoming Guidance
While the AI compliance deadline is now active, the EU AI Act allows limited exceptions to the bans. For example, law enforcement may use real-time biometric identification in public spaces in narrowly defined situations, such as searching for abduction victims or preventing an imminent threat to life, and only with the required prior authorization.
The European Commission is also expected to issue additional AI governance guidelines in early 2025 following stakeholder consultations, but these guidelines had not been published at the time of writing.
Interaction with Other AI Regulations
Organizations must also consider how the EU AI Act interacts with existing EU legislation such as the General Data Protection Regulation (GDPR), the NIS2 Directive, and the Digital Operational Resilience Act (DORA). Understanding these overlapping legal frameworks is crucial for businesses to achieve full compliance and avoid regulatory conflicts.
The EU AI Act compliance deadline marks a significant step toward AI risk management and responsible AI use. With severe penalties for non-compliance, companies must align their AI operations with EU regulations or risk substantial financial and reputational damage.
As AI continues to evolve, so will global regulations. Organizations should stay informed about AI governance updates, implement robust AI compliance strategies, and prepare for additional enforcement measures in the coming months.