The government has suffered another setback in the House of Lords over its plans to let artificial intelligence firms use copyright-protected work without permission.
An amendment to the data bill requiring AI companies to reveal which copyrighted material is used in their models was backed by peers, despite government opposition.
It is the second time parliament’s upper house has demanded tech companies make clear whether they have used copyright-protected content.
The vote came days after hundreds of artists and organisations including Paul McCartney, Jeanette Winterson, Dua Lipa and the Royal Shakespeare Company urged the prime minister not to “give our work away at the behest of a handful of powerful overseas tech companies”.
The amendment was tabled by crossbench peer Beeban Kidron and was passed by 272 votes to 125.
The bill will now return to the House of Commons. If the government removes the Kidron amendment, it will set the scene for another confrontation in the Lords next week.
Lady Kidron said: “I want to reject the notion that those of us who are against government plans are against technology. Creators do not deny the creative and economic value of AI, but we do deny the assertion that we should have to build AI for free with our work, and then rent it back from those who stole it.
“My lords, it is an assault on the British economy and it is happening at scale to a sector worth £120bn to the UK, an industry that is central to the industrial strategy and of enormous cultural import.”
The government’s copyright proposals are the subject of a consultation due to report back this year, but opponents of the plans have used the data bill as a vehicle for registering their disapproval.
The main government proposal is to let AI firms use copyright-protected work to build their models without permission, unless the copyright holders signal they do not want their work to be used in that process – a solution that critics say is impractical and unworkable.
The problem is that the actual creators rarely see much of the money from copyright income – most of it goes to the giant copyright-holding behemoths, who keep it for themselves.
And considering the way AI systems are trained, they do not keep a copy of the work they ingest, just as a human reader doesn't keep a copy. So saying that a system may only ingest a work if permission is given is like saying a specific person may only read it with permission.
So anything that is freely available is fair game. If an AI company wants its model to read a book, it should buy that book. Once.
