Microsoft Backs EU AI Act as Meta Opts Out

As the European Union rolls out its AI Act, tech companies are beginning to reveal their stance on the bloc’s regulatory framework. Microsoft has expressed intent to sign the EU’s voluntary code of practice—an initiative designed to guide companies in aligning with the new AI legislation—while Meta has publicly declined to participate, citing legal and operational concerns.

The code, developed by a panel of AI experts, aims to help developers of general-purpose AI models understand how to comply with upcoming rules. It emphasizes transparency, responsible data usage, and adherence to EU copyright standards, including the publication of training data summaries.

Microsoft leadership indicated that the company is reviewing the document and is likely to sign on, describing the code as a step toward constructive engagement between regulators and the industry.

Meta flags legal uncertainties, warns of regulatory overreach

In contrast, Meta has rejected the code outright, stating that the current version introduces vague obligations and goes beyond the scope of the AI Act. The company claims that the code could stifle innovation and complicate AI development for both large tech players and smaller European startups.

Meta’s concerns echo those of some 45 other companies across the EU that have questioned whether the voluntary guidelines impose premature or excessive constraints on the use and training of frontier AI models. Critics argue that such an approach risks discouraging innovation before the AI Act has been fully operationalized.

Meta’s position is particularly notable given its ongoing involvement in AI development through models such as Llama and its broader push into generative AI technologies.

Voluntary code may shape long-term regulatory outlook

The divide between Microsoft and Meta highlights broader disagreements within the industry over how to approach AI governance in Europe. While the AI Act formally came into force in mid-2024, the voluntary code represents an interim step aimed at building compliance culture ahead of full implementation.

Some major players, including OpenAI and Mistral, have already signed the code, signaling support for self-regulation while legal frameworks continue to evolve. As AI governance becomes a critical policy issue worldwide, the EU’s approach could serve as a model—or a cautionary tale—for balancing innovation and accountability.
