A few weeks ago Walled Culture explored how the leaders in the generative AI world are trying to influence the future legal norms for this field. In the face of a powerful new form of an old technology – AI itself has been around for over 50 years – those are certainly needed. Governments around the world know this too: they are grappling with the new issues that large language models (LLMs), generative AI, and chatbots are raising every day, not least in the realm of copyright. For example, one EU body, EUIPO, has published a 436-page study “The Development Of Generative Artificial Intelligence From A Copyright Perspective”. Similarly, the US Copyright Office has produced a three-part report that “analyzes copyright law and policy issues raised by artificial intelligence”. The first two parts were on Digital Replicas and Copyrightability. The last part, just released in a pre-publication form, is on Generative AI Training. It is one of the best introductions to that field, and not too long – only 113 pages.
Alongside these government moves to understand this area, there are of course efforts by the copyright industry itself to shape the legal landscape of generative AI. Back in March, Walled Culture wrote about a UK campaign called “Make It Fair”, and now there is a similar attempt to reduce everything to a slogan by a European coalition of “authors, performers, publishers, producers, and cultural enterprises”. The new campaign is called “Stay True to the Act” – the Act in question being the EU Artificial Intelligence Act. The main document explaining the latest catchphrase comes from the European Publishers Council, and provides numerous insights into the industry’s thinking here. It comes as no surprise to read the following:
Let’s be clear: our content—paid for through huge editorial investments—is being ingested by AI systems without our consent and without compensation. This is not innovation; it is copyright theft.
As Walled Culture explained in March, that’s not true: material is not stolen, it is simply analysed as part of the AI training. Analysing texts or images is about knowledge acquisition, not copyright infringement.
In the Stay True to the Act document, this tired old trope of “copyright theft” leads naturally to another obsession of the copyright world: a demand for what it calls “fair licences”. Walled Culture the book (free digital versions available) noted that this is something that the industry has constantly pushed for. Back in 2013, a series of ‘Licences for Europe’ stakeholder dialogues were held, for example. They were based on the assumption that modernising copyright meant bringing in licensing for everything that occurred online. If a call for yet more licensing is old hat, the campaign’s next point is a novel one:
AI systems don’t just scrape our articles—they also capture our website layouts, our user activity, and data that is critical to our advertising models.
It’s hard to understand what the problem is here, other than the general concern about bots visiting and scraping sites – something that is indeed getting out of hand in terms of volume and impact on servers. It’s not as if generative AI cares about Web site design, and it’s hard to see what data about advertising models can be gleaned. It’s also worth noting that this is the only point where members of the general public are mentioned in the entire document, albeit only as “users”. When it comes to copyright, publishers don’t care about the rights or the opinions of ordinary citizens. Publishers do care about journalists, at least to the following extent:
AI-generated content floods the market with synthetic articles built from our journalism. Search engines like Google’s and chatbots like ChatGPT, increasingly serve AI summaries which is wiping out the traffic we rely on, especially from dominant players.
The statement that publishers “rely on” traffic from search engines is an unexpected admission. The industry’s main argument for the “link tax” that is now part of the EU Copyright Directive was that search engines were giving nothing significant back when their search results linked to the original article, and should therefore pay something. Now publishers are admitting that the traffic from search engines is something they “rely on”. Alongside that significant U-turn on the part of the publishers, there is a serious general point about journalism in the age of AI:
These [generative AI] tools don’t create journalism. They don’t do fact-checking, hold power to account, or verify sources. They operate with no editorial standards, no legal liability—and no investment in the public interest. And yet, without urgent action, there is a danger they will replace us in the digital experience.
This is an extremely important issue, and the publishers are right to flag it up. But demanding yet more licensing agreements with AI companies is not the answer. Even if the additional monies were all spent on bolstering reporting – a big “if” – the sums involved would be too small to matter. Licensing does not address the root problem, which is that important kinds of journalism need to be supported and promoted in new ways.
One solution is that adopted by the Guardian newspaper, which is funded by its readers who want to read and sustain high-quality journalism. This could be part of a wider move to the “true fans” idea discussed in Walled Culture the book. Another approach is for more government support – at arm’s length – for journalism of the kind produced by the BBC, say, where high editorial standards ensure that fact-checking and source verification are routinely carried out – and budgeted for.
Complementing such direct support for journalism, new laws are needed to disincentivise the creation of misleading fake news stories and outright lies that increasingly drown out the truth. The Stay True to the Act document suggests “platform liability for AI-generated content”, and that could be part of the answer; but the end users who produce such material should also face consequences for their actions.
In its concluding section, “3-Pillar Model for the Future – and Why Licensing is Essential”, the document bemoans the fact that advertising revenue is “declining in a distorted market dominated by Google and Meta”. That is true, but only because publishers have lazily acquiesced in an adtech model based on real-time bidding for online ads powered by the constant surveillance of visitors to Web sites. A better approach is to use contextual advertising, where ads are shown according to the material being viewed. This not only requires no intrusive monitoring of the personal data of visitors, but has been found to be more effective than the current approach.
Moreover, in a nice irony, the new generation of LLMs makes providing contextual advertising extremely easy, since they can analyse and categorise online material rapidly for the purpose of choosing suitable ads to be displayed. Sadly, publishers’ visceral hatred of the new AI technologies means that they are unable to see these kinds of opportunities alongside the threats.