“The current draft,” Meta wrote in a confidential lobby paper, is a case of “regulatory overreach” that “poses a significant threat to AI innovation in the EU.”
It was early 2025, and the text Meta railed against was the second draft of the EU’s Code of Practice. The Code will put the EU’s AI Act into operation by outlining voluntary requirements for general-purpose AI, or models with many different societal applications (see Box 1).
Meta’s lobby message hit the right notes, as the second von der Leyen Commission has committed to slashing regulations to stimulate European ‘competitiveness’. An early casualty of this deregulatory drive was the EU’s AI Liability Directive, which would have allowed consumers to claim compensation for harms caused by AI.
And the Code may end up being another casualty. Meta’s top lobbyist said they would not sign unless there were significant changes. Google cast doubt on its participation.
But as this investigation by Corporate Europe Observatory and LobbyControl – based on insider interviews and analysis of lobby papers – reveals, Big Tech enjoyed structural advantages from early on in the process and, playing its cards well, successfully lobbied for a much weaker Code than could otherwise have emerged. That means weaker protection from potential structural biases and social harms caused by AI.
Potemkin participation: how civil society was sidelined
In a private meeting with the Commission in January 2025, Google “raised concerns about the process” of drafting the Code of Practice. The tech giant complained “model developers [were] heavily outweighed by other stakeholders”.
Only a superficial reading could support this. Over 1,000 stakeholders registered their interest in participating with the EU’s AI Office, a newly created unit within the European Commission’s DG CNECT. Nearly four hundred organisations were approved.
But tech companies enjoyed far more access than others. Model providers – companies developing the large AI models the Code is expected to regulate – were invited to dedicated workshops with the working group chairs.
“This could be seen as a compromise,” Jimmy Farrell of the European think tank Pour Demain said. “On the one hand, they included civil society, which the AI Act did not make mandatory. On the other, they gave model providers direct access.”
Fifteen US companies – nearly half of the total – were on the reported list of organisations invited to the model provider workshops. Among them were the US tech giants Google, Microsoft, Meta, Apple, and Amazon.
Others included AI “start-ups” with multi-billion-dollar valuations such as OpenAI, Anthropic, and Hugging Face, each of which receives Big Tech funding. Another, SoftBank, is OpenAI’s lead partner for the US$500 billion Stargate investment fund.
In April, OpenAI dialed up its lobbying to water down the Code of Practice with a series of meetings with European politicians. Right: OpenAI’s main lobbyist Chris Lehane. Left: EU Commissioner Michael McGrath
EC – Audiovisual Service
Several European AI providers that had lobbied on the AI Act were also involved. Some of these, like the French Mistral AI or the Finnish SiloAI, also partner with American tech firms.
The participation of the other 350 organisations – which included rights advocates, civil society organisations, representatives of European corporations and SMEs, and academics – was more restricted. They had no access to the provider workshops, and despite a commitment that they would be shared, sources said minutes from the model provider workshops were never distributed to participants.
It put civil society, which participated in working group meetings and crowded plenaries, at a disadvantage. Opportunities for interaction during meetings were limited. Questions had to be submitted beforehand through a platform called Slido, where other participants could up-vote them.
Normally, the AI Office would consider the top ten questions during meetings, although sources told us “controversial questions would sometimes be side-stepped”. Participants could neither submit comments during meetings nor unmute themselves.
[…]
In the absence of a full list of individual participants, which she had requested but not received, Pfister Fetz would “write down every name she saw on the screen” and look people up afterwards, “to see if they were like-minded or not.”
Participants were given little time to review and comment on draft documents. Deadlines to apply for a speaking slot to discuss a document would pass before the document had even been shared. The third draft of the Code was delayed for nearly a month without any communication from the AI Office, until one day it landed unannounced in participants’ mailboxes.
[…]
A long-standing demand from civil society was a dedicated civil society workshop. It was only after the third, severely watered-down draft of the Code of Practice that such a workshop took place.
“They had many workshops with model providers, and only one at the end with civil society, when they told us there would only be minor changes possible,” van der Geest, the fundamental rights advocate, said. “It really shows how they see civil society input: as secondary at best.”
Partnering with Big Tech and the AI Office: a conflict of interest?
A contract to support the AI Office in drafting the Code of Practice was awarded, under an existing framework contract, to a consortium of external consultants – Wavestone, Intellera, and the Centre for European Policy Studies (CEPS).
It was previously reported that the lead partner, the French firm Wavestone, advised companies on AI Act compliance, but “does not have [general purpose AI] model providers among its clients”.
But our investigation revealed that the consultants do have ties to model providers.
In 2023 Wavestone announced it had been “selected by Microsoft to support the deployment and accelerated adoption of Microsoft 365 Copilot as a generative artificial intelligence tool in French companies.”
This resulted in Wavestone receiving a “Microsoft Partner of the Year Award” at the end of 2024, by which time it was already supporting the AI Office in developing the Code. The consultancy also worked with Google Cloud and is an AWS partner.
The other consortium partners also had ties to general-purpose AI (GPAI) model providers. The Italian consultancy Intellera was bought in April 2024 by Accenture and is now “Part of Accenture Group”. Accenture boasted at the start of 2025 that it was “a key partner” to a range of technology providers, including Amazon, Google, IBM, Microsoft, and NVIDIA – in other words, US general-purpose model providers.
The third and final consortium partner, CEPS, counted all the Big Tech firms among its corporate members – including Apple, AWS, Google, Meta, and Microsoft. At a rate of between €15,000 and €30,000 (plus VAT) per year, members get “access to task forces” on EU policy and “input on CEPS research priorities”.
The problem is that these consultancy firms can hardly be expected to advise the Commission to take action that would negatively impact their own clients. The EU Financial Regulation states that the Commission should therefore reject a contractor where a conflicting interest “can affect or risk the capacity to perform the contract in an independent, impartial and objective manner”.
The 2022 framework contract under which the consortium was initially hired by the European Commission also stipulated that “a contractor must take all the necessary measures to prevent any situation of conflict of interest.”
[…]
On key issues, the messaging of the US tech firms was well coordinated. Confidential lobby papers by Microsoft and Google, submitted to EU member states and seen by Corporate Europe Observatory and LobbyControl, echoed what Meta said publicly: that the Code’s requirements “go beyond the scope of the AI Act” and would “undermine” or “stifle” innovation.
It was a position carefully crafted to match the political focus on deregulation.
“The current Commission is trying to be innovation- and business-friendly, but is actually disproportionately benefiting Big Tech,” said Risto Uuk, Head of EU Policy and Research at the Future of Life Institute.
Uuk, who curates a biweekly newsletter on the EU AI Act, added that “there is also a lot of pressure on the EU from the Trump administration not to enforce regulation.”
[…]
One of the most contentious topics has been the risk taxonomy. This determines the risks model providers will need to test for and mitigate. The second draft of the Code introduced a split between “systemic risks,” such as nuclear risks or a loss of human oversight, and a much weaker category of “additional risks for consideration”.
“Providers are mandated to identify and mitigate systemic risks,” Article 19’s Dinah van der Geest said, “but the second tier, which includes risks to fundamental rights, democracy, or the environment, is optional for providers to follow.”
These risks are far from hypothetical. From Israeli mass surveillance and killing of Palestinians in Gaza, to the dissemination of disinformation during elections, including by far-right groups and foreign governments, to mass lay-offs of US federal government employees, generative AI is already being used in countless problematic ways. In Europe, investigative journalism has exposed the widespread use of biased AI systems in welfare programmes.
The introduction of a hierarchy in the risk taxonomy offered additional lobby opportunities. Both Google and Microsoft argued that “large-scale, illegal discrimination” needed to be bumped down to optional risks.
[…]
The tech giants got their way: in the third draft, large-scale, illegal discrimination was removed from the list of systemic risks, which are mandatory to check for, and categorised under “other types of risk for potential consideration”.
Like other fundamental rights violations, it now only needs to be checked for if “it can be reasonably foreseen” and if the risk is “specific to the high-impact capabilities” of the model.
“But what is foreseeable?” asked Article 19’s Dinah van der Geest. “It will be left up to the model providers to decide.”
[…]
At the AI Action Summit in Paris in February 2025, European Commission President Ursula von der Leyen had clearly drunk the AI Kool-Aid: “We want Europe to be one of the leading AI continents. And this means embracing a way of life where AI is everywhere.” She went on to paint AI as a silver bullet for almost every societal problem: “AI can help us boost our competitiveness, protect our security, shore up public health, and make access to knowledge and information more democratic.”
The AI Action Summit marked a distinctive shift in the Commission’s discourse. Where previously the Commission paid at least lip-service to safeguarding fundamental rights when rolling out AI, it has now largely abandoned that discourse, talking instead about winning “the global race for AI”.
At the same summit, Henna Virkkunen, the Commissioner for Tech Sovereignty, was quick to echo von der Leyen’s message, announcing that the AI Act would be implemented in an ‘innovation-friendly’ way, and, after criticism from Meta and Google a week earlier, she promised that the Code of Practice would not create “any extra burden”.
Ursula von der Leyen at the AI Action Summit. In the background on the right, Google CEO Sundar Pichai.
EC – Audiovisual Service
Big Tech companies have quickly caught on to the new deregulatory wind in Brussels. They have ramped up their already massive lobbying budgets and have practiced their talking points about Europe’s ‘competitiveness’ and ‘over-regulation’.
The Code of Practice on General-Purpose AI seems to be only one of the first casualties of this deregulatory offensive. With key rules on AI, data protection, and privacy up for review this year, the main beneficiaries are poised to be the corporate interests with endless lobbying resources.
[…]
Big Tech cannot be seen as just another stakeholder. The Commission should safeguard the public interest from Big Tech influence. Instead of beating the deregulation drum, the Commission should now stand firm against the tech industry’s agenda and guarantee the protection of fundamental rights through an effective Code of Practice.
Source: Coded for privileged access | Corporate Europe Observatory