Google has confirmed it will sign the European Union’s voluntary Code of Practice for General Purpose AI, aligning itself with the bloc’s upcoming AI Act, whose obligations for general-purpose AI models begin to apply on August 2. The code provides a framework for AI developers to proactively adopt systems and safeguards that support regulatory compliance in areas such as transparency, copyright, and data management.
This move sets Google apart from Meta, which recently declined to sign, criticizing the EU’s approach as excessive. Google’s commitment comes amid growing pressure on major AI firms — such as Anthropic, OpenAI, and Meta — to prepare for strict rules governing models deemed to pose systemic risk.
Although Google expressed ongoing concerns about regulatory overreach potentially hampering innovation in Europe, the company acknowledged improvements in the final version of the code. Under its terms, AI developers must avoid training on unauthorized content, maintain clear documentation, and honor opt-outs from content owners.
The AI Act introduces a risk-based framework: it outright bans unacceptable-risk practices such as social scoring and certain forms of biometric surveillance, while mandating registration and oversight of high-risk systems used in sensitive domains such as education and employment. By signing the code, Google reinforces its role in shaping Europe’s evolving AI governance landscape.