The European Union’s landmark AI Act has now entered its enforcement phase, establishing the world’s first comprehensive regulatory framework for artificial intelligence in a major economy. The legislation adopts a risk-based approach, imposing obligations on AI providers, deployers, and other operators that scale with the potential impact of their systems. The Act sorts AI applications into four tiers: unacceptable-risk systems that face outright bans, high-risk applications subject to strict requirements, limited-risk systems with basic transparency rules, and minimal-risk uses that remain largely unregulated.
Among the prohibited practices are biometric categorization based on sensitive characteristics, untargeted scraping of facial images, predictive policing based solely on profiling, and emotion recognition in workplaces or schools. High-risk applications in critical sectors such as healthcare, law enforcement, and education must meet rigorous standards for accuracy, transparency, and human oversight, with mandatory conformity assessments before deployment. The regulation also introduces special provisions for general-purpose AI models, requiring developers to publish summaries of the content used for training, document model capabilities and limitations, and adhere to cybersecurity and energy-efficiency requirements.
Enforcement will be overseen by national authorities and a newly created European AI Office, with non-compliance punishable by fines of up to €35 million or 7% of global annual turnover, whichever is higher. Implementation is phased: bans on prohibited AI take effect within six months, rules for general-purpose AI apply after twelve months, and full compliance for most high-risk systems is required within two years, with a longer three-year window for high-risk AI embedded in already-regulated products. The EU’s pioneering framework is expected to shape global AI governance, serving as a reference point for other jurisdictions developing their own regulatory approaches while seeking to ensure that innovation progresses responsibly and ethically.