Google to Sign EU AI Code Despite Concerns

Google has confirmed that it will sign the European Union’s voluntary AI code of practice, a framework designed to help tech companies align with the EU’s landmark Artificial Intelligence Act. While supporting the initiative, Google has also raised concerns that certain provisions may hinder innovation and delay AI development in Europe.

Voluntary code aims to guide AI compliance in Europe

The AI code of practice was drafted by a panel of 13 independent experts and is intended to offer clear guidance for companies developing general-purpose AI models. It outlines how developers can comply with key parts of the AI Act, including the need to provide summaries of training data, respect EU copyright laws, and ensure transparency in their systems. While not legally binding, the code is expected to influence how companies prepare for full compliance once the AI Act is fully enforced.

Google’s leadership expressed optimism that the code will promote access to high-quality and safe AI tools for both citizens and businesses across Europe. However, they also voiced apprehension about potential overreach. Specific concerns include the risk of exposing trade secrets, diverging from established copyright norms, and introducing delays in model approvals—all of which could, in Google’s view, impact the region’s AI competitiveness.

Divergent industry reactions and legal uncertainty

The tech industry’s response to the code has been mixed. Microsoft has indicated it is likely to sign the document, while Meta has opted out, citing unresolved legal uncertainty surrounding general-purpose AI models and the broader implications of committing to the voluntary framework.

This divergence in responses highlights the ongoing tension between promoting ethical AI development and maintaining a competitive edge. For global tech firms, aligning with the EU’s evolving regulatory landscape involves careful navigation of legal, technical, and commercial risks.

The EU positions itself as a global AI rule-setter

The EU Artificial Intelligence Act aims to establish the world’s first comprehensive legal framework for AI. With its focus on transparency, accountability, and risk management, the legislation is being closely watched by other regions. The code of practice serves as a bridge between current practices and upcoming legal obligations, allowing early adopters like Google to shape how the rules are interpreted and implemented.

By choosing to participate despite reservations, Google is signalling its intent to remain engaged in Europe’s AI ecosystem while advocating for practical regulations. As the AI Act edges closer to full implementation, the voluntary code may set the tone for how compliance evolves across the industry.
