This AI Rule Changes Everything, Starting 2025
The European Union is taking a bold step toward shaping the future of AI with its Artificial Intelligence Act (AI Act). More than just a set of rules, the regulation represents a fundamental shift in how we think about artificial intelligence. What does it mean for individuals, businesses, and governments to build AI that protects human rights and fosters innovation? Here is how this new AI era will come into being, through a phased rollout that runs from 2025 through 2030.
From 2 February 2025
- Prohibited AI practices take effect: rules banning certain uses become applicable, including:
- AI systems using subliminal or manipulative techniques to distort a person’s behavior in a harmful way.
- Exploiting vulnerabilities due to age, disability, or socio-economic situation.
- Social scoring of individuals.
- Risk assessments based solely on profiling or personality traits.
- Scraping facial images from the internet or CCTV for facial-recognition databases.
- Emotion recognition in workplaces and education.
- Biometric categorization based on sensitive traits.
- Real-time remote biometric identification in public spaces for law enforcement, except in very specific, justified cases.
From 2 August 2025
- Governance and conformity infrastructure operational: the European AI Board, national competent authorities, and the notified-body and conformity assessment structures must be in place.
- Obligations on general-purpose AI models: providers of models for general use, such as large language models, face new duties covering technical documentation, information for downstream providers, and copyright-compliance policies.
- Cybersecurity: ENISA’s role in AI cybersecurity is activated.
- Transparency obligations: certain AI systems or components must clearly identify AI-generated outputs (a minimal sketch of what such a disclosure could look like follows this list).
- Sanction regime: member states establish administrative fines and penalties for violations.
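
To make the transparency point above more concrete, here is a minimal sketch of one way a provider might attach a machine-readable "this was AI-generated" label to a model's output. This is a hypothetical illustration only: the AI Act does not prescribe a specific format, and the LabeledOutput class, label_output helper, and "example-llm" identifier are all invented for this example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json

# Hypothetical illustration: attach an explicit, machine-readable
# disclosure to generated text. The Act does not mandate this format.

@dataclass
class LabeledOutput:
    text: str                       # the generated content itself
    ai_generated: bool = True       # explicit machine-readable disclosure
    generator: str = "example-llm"  # hypothetical model identifier
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def label_output(text: str, generator: str = "example-llm") -> str:
    """Wrap generated text in a disclosure record and serialise it as JSON."""
    record = LabeledOutput(text=text, generator=generator)
    return json.dumps(record.__dict__)

if __name__ == "__main__":
    # Example: a downstream consumer can check the "ai_generated" flag.
    print(label_output("Draft reply written by an assistant."))
```
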
From 2 August 2026
- Full regulation takes effect: most remaining provisions apply, including the rules for high-risk AI systems and the activation of compliance and enforcement mechanisms.
By 2 August 2027
- Large-scale IT systems: AI systems that form part of the EU's large-scale IT systems and are already on the market by this date must be brought into conformity by the end of 2030.
- Existing general-purpose models: general-purpose AI models already on the market before 2 August 2025 must be brought into conformity by this date.
By 2 August 2030
- Public authorities’ AI systems: AI systems already in use by public authorities must be brought into compliance with the applicable obligations by this date.
Key goals and themes
- Harmonization: one legal framework across the EU to avoid fragmented national rules.
- High protection: safeguard health, safety, and fundamental rights like privacy, non-discrimination, and freedom of expression.
- Risk-based approach: regulate in proportion to risk, from banned practices down to minimal-risk uses (see the sketch after this list).
- Human-centric: AI should serve people and respect values and rights.
- Innovation: promote trustworthy AI while protecting citizens.
- Transparency: require openness for certain AI systems.
- Market access: foster an ecosystem of compliant public and private AI developers.
- Monitoring and testing: member states set up regulatory sandboxes and real-world test environments.
- Accountability: clarify roles and responsibilities across the AI value chain.
- Cooperation: EU-member state coordination and stakeholder engagement.
- Cybersecurity: improved cyber-resilience of AI systems.
- AI literacy: improve the AI literacy of workers and users.
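
To give the risk-based approach above some shape, here is a minimal sketch of the Act's four-tier logic. The tier names follow the regulation's categories (prohibited practices, high risk, limited/transparency risk, minimal risk), but the RiskTier enum, the classify_example function, and the example use cases are hypothetical illustrations, not definitions taken from the legal text.

```python
from enum import Enum

# Illustrative only: the tiers mirror the Act's four-level logic, but this
# toy classifier is not how the regulation assigns risk in practice.

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk - banned outright"
    HIGH = "high risk - strict conformity obligations"
    LIMITED = "limited risk - transparency obligations"
    MINIMAL = "minimal risk - no specific obligations"

def classify_example(use_case: str) -> RiskTier:
    """Toy mapping from a use-case description to a risk tier."""
    prohibited = {"social scoring", "subliminal manipulation"}
    high = {"recruitment screening", "credit scoring"}
    limited = {"chatbot", "deepfake generation"}
    if use_case in prohibited:
        return RiskTier.PROHIBITED
    if use_case in high:
        return RiskTier.HIGH
    if use_case in limited:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

if __name__ == "__main__":
    for case in ("social scoring", "recruitment screening", "chatbot", "spam filter"):
        print(f"{case}: {classify_example(case).value}")
```

The design point is simply that obligations scale with the tier: the higher the risk, the heavier the duties, with the lowest tier left largely unregulated.
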
In summary
The EU is rolling out a comprehensive AI framework: prohibitions and general-purpose model duties in 2025, rules for high-risk systems in 2026–2027, and full obligations for public-sector AI by 2030. The aim is a trustworthy, innovation-friendly AI ecosystem that protects people and sets clear rules for developers and providers within and beyond the EU.