Image: Meta rejects the EU’s AI Code of Practice while Microsoft signals support, reflecting the broader tension over AI regulation in Europe.

Meta has announced it will not sign the European Commission’s voluntary Code of Practice for general-purpose AI models, marking the latest dispute between the social media giant and European regulators. In a statement posted to LinkedIn on Friday, Joel Kaplan, Meta’s Chief Global Affairs Officer, said the company believes the code introduces legal uncertainty and exceeds the boundaries set by the EU’s AI Act.

The European Commission published the Code of Practice on July 10. Though voluntary, the code is designed to help companies align with the broader AI Act, a legal framework enacted in 2024 that governs how artificial intelligence can be developed and used across the bloc. The act aims to ensure the safety, transparency, and accountability of AI systems deployed in the EU.

Kaplan argued that the code contains measures that extend beyond the AI Act’s scope and could complicate compliance for developers. “Europe is heading down the wrong path on AI,” he wrote, noting that more than 40 European companies have also voiced opposition. Earlier this month, business leaders from 46 firms signed an open letter asking the Commission to delay implementation, citing overlapping regulations and confusion around enforcement.

The Code of Practice sets out expectations for companies building general-purpose AI models, with additional obligations for models deemed to pose systemic risk. Among them are commitments not to train models on pirated content and to honor opt-out requests from creators who do not want their work used in training datasets. Developers must also publish documentation describing how their models function and how copyright compliance is ensured.

While not mandatory, signing the code is expected to offer companies some relief from regulatory burden. The European Commission has said that signatories may face fewer audits and enjoy greater legal certainty. Firms that choose not to participate will need to demonstrate compliance through other means and may come under closer scrutiny.

Some tech firms are taking a different approach than Meta. OpenAI has indicated its intent to sign the code, calling the move a reflection of its long-standing commitment to transparency and safety. Microsoft President Brad Smith also said his company would likely sign, although it is still reviewing the specifics.

Meta, by contrast, has continued to criticize the EU’s approach to AI regulation. It previously called the AI Act “unpredictable” and said the legislative process was making product development harder for innovators. Meta’s resistance is not new: it has long been skeptical of the EU’s regulatory philosophy, and its stance now aligns with recent statements from U.S. political figures opposing heavy-handed oversight. The Trump administration has reportedly pressured EU leaders to abandon the act, likening it to a tax on innovation.

Despite the clash, the AI Act remains binding. Companies that fail to comply can face fines of up to 7% of global annual turnover. The penalties for general-purpose AI providers are capped lower, at 3%.

As AI adoption continues to grow, the divide between global tech platforms and regional lawmakers may widen. While some firms pursue cooperation, others, like Meta, are choosing confrontation, betting that the regulatory burden is too high and the terms too vague to justify participation.
