A(I) deal at any cost: Will the EU buckle to Big Tech?

Would you trust Elon Musk with your mortgage? Or Big Tech with your benefits?

Us neither.

That’s what’s at stake as the EU’s Artificial Intelligence Act reaches the final stage of negotiations. For all its big talk, it seems like the EU is buckling to Big Tech.

EU lawmakers have been tasked with developing the world’s first comprehensive law to regulate AI products. With AI systems already in use in public life, lawmakers are rushing to catch up.

[…]

The precautionary principle urges us to exercise care and responsibility in the face of potential risks. It is crucial not only to foster innovation but also to prevent the unchecked expansion of AI from jeopardising justice and fundamental rights.

At the Left in the European Parliament, we called for this principle to be applied to the AI Act. Unfortunately, other political groups disagreed, prioritising the interests of Big Tech over those of the people. They settled on a three-tiered approach to risk whereby products are categorised into those that do not pose a significant risk, those that are high risk and those that are banned.

However, this approach contains a major loophole that risks undermining the entire legislation.

Like asking a tobacco company whether smoking is risky

When the Act was first proposed, the Commission outlined a list of ‘high-risk uses’ of AI, including AI systems used to select students, assess consumers’ creditworthiness, evaluate job-seekers, and determine who can access welfare benefits.

Using AI in these assessments has significant real-life consequences. It can mean the difference between being accepted to or rejected from university, being able to take out a loan, or even being able to access welfare to pay bills, rent or put food on the table.

Under the three-tiered approach, AI developers are allowed to decide for themselves whether their product is high-risk. This self-assessment loophole is akin to a tobacco company deciding that cigarettes are safe for our health, or a fossil fuel company claiming that its fumes don’t harm the environment.

[…]

Experience shows us that when corporations have this kind of freedom, they prioritise their profits over the interests of people and the planet. If the development of AI is to be accountable and transparent, negotiators must eliminate provisions on self-assessment.

AI gives us the opportunity to change our lives for the better. But as long as we let big corporations make the rules, we will continue to replicate inequalities that are already ravaging our societies.

Source: A(I) deal at any cost: Will the EU buckle to Big Tech? – EURACTIV.com

OK, so this seems to be a little breathless – surely a mechanism could be put in place for the EU to check the risk level when notified of a potential breach, including harsh penalties for misclassifying an AI?

However, the discussions around the EU AI Act – which had the potential to be one of the first and best pieces of regulation on the planet – have descended into farce since ChatGPT arrived, along with the strange idea that the original act had no provisions for General Purpose / Foundational AI models (it did – they were classified as high-risk models). The silly discussions this has provoked have only served to delay the AI Act coming into force by over a year – something that big businesses are very, very happy to see.

Robin Edgar

Organisational Structures | Technology and Science | Military, IT and Lifestyle consultancy | Social, Broadcast & Cross Media | Flying aircraft

robin@edgarbv.com | https://www.edgarbv.com