The ambassadors of the 27 countries of the European Union unanimously approved the world’s first comprehensive rulebook for Artificial Intelligence, rubber-stamping the political agreement reached in December.
In December, EU policymakers reached a political agreement on the main sticking points of the AI Act, a flagship bill to regulate Artificial Intelligence based on its capacity to cause harm. The complexity of the law meant its technical refinement took more than a month.
On 24 January, the Belgian presidency of the Council of EU Ministers presented the final version of the text, leaked in an exclusive by Euractiv, at a technical meeting. Most member states maintained reservations at the time as they did not have enough time to analyse the text comprehensively.
These reservations were finally lifted with the adoption of the AI Act by the Committee of Permanent Representatives on Friday (2 February). However, the green light from EU ambassadors was not guaranteed, since some European heavyweights resisted parts of the provisional deal until the very last days.
Powerful AI models
The primary opponent of the political agreement was France, which, together with Germany and Italy, asked for a lighter regulatory regime for powerful AI models, such as OpenAI's GPT-4, that support General Purpose AI systems like ChatGPT and Bard.
Europe's three largest economies asked for the rules in this area to be limited to codes of conduct, as they did not want to clip the wings of promising European start-ups like Mistral AI and Aleph Alpha that might challenge American companies in this space.
However, the European Parliament was united in asking for hard rules for these models, considering that it was unacceptable to carve out the most potent types of Artificial Intelligence from the regulation while leaving all the regulatory burden on smaller actors.
The compromise was based on a tiered approach, with horizontal transparency rules for all models and additional obligations for powerful models deemed to entail a systemic risk.
The Belgian presidency put the member states before a 'take-it-or-leave-it' scenario and, despite attempts from France to delay the ambassadors' vote, kept a tight timeline, partly to allow enough time for the legal polishing of the text and partly to limit last-minute lobbying.
French back-room manoeuvring aimed at gathering sufficient opposition to obtain concessions in the text or even reject the provisional agreement.
However, the balance tilted decisively against Paris as Berlin decided to support the text earlier this week. The German Digital Minister, the liberal Volker Wissing, found himself isolated among the coalition partners in his opposition to the AI rulebook and had to drop his reservations.
Italy, always the least invested of the sceptical trio as it does not have a leading AI start-up to defend, also decided not to oppose the AI Act. Despite discontent with the agreement, Rome opted to avoid drama as it holds the rotating presidency of the G7, where AI is a crucial topic.
EU countries still have room to influence how the AI law will be implemented, as the Commission will have to issue around 20 acts of secondary legislation. The AI Office, which will oversee AI models, is also set to be significantly staffed with seconded national experts.
The European Parliament’s Internal Market and Civil Liberties Committees will adopt the AI rulebook on 13 February, followed by a plenary vote provisionally scheduled for 10-11 April. The formal adoption will then be complete with endorsement at the ministerial level.
The AI Act will enter into force 20 days after publication in the official journal. The bans on the prohibited practices will start applying after six months, whereas the obligations on AI models will start after one year.
All the rest of the rules will kick in after two years, except for the classification of AI systems that have to undergo third-party conformity assessment under other EU rules as high-risk, which was delayed by one additional year.