The European Union’s landmark AI Act, which officially goes into effect on Thursday, introduces stringent regulations for the development and use of artificial intelligence, with particularly significant consequences for large U.S. tech companies. First proposed by the European Commission in 2021 and now finalized, the law sets out a comprehensive framework for regulating AI across the EU.
The AI Act will impose strict obligations on “high-risk” AI applications, such as autonomous vehicles and medical devices, requiring rigorous risk assessments, high-quality training datasets, and detailed documentation. It also bans “unacceptable” AI uses like social scoring and predictive policing.
The law is expected to have significant implications for U.S. tech giants like Microsoft, Google, Amazon, Apple, and Meta, which are heavily involved in AI development. These companies will now face greater scrutiny in the EU market, particularly in how they use EU citizen data and comply with new transparency and safety standards.
Penalties for non-compliance are steep: fines can reach up to €35 million or 7% of global annual revenue, whichever is higher, with lower tiers applying to less serious infringements. However, many of the AI Act’s provisions, including those governing general-purpose AI systems like OpenAI’s ChatGPT, won’t fully come into force until at least 2026, giving companies time to adjust to the new regulations.