European Publishers Council stays true – to the tired old trope about “copyright theft”

A few weeks ago Walled Culture explored how the leaders in the generative AI world are trying to influence the future legal norms for this field. In the face of a powerful new form of an old technology – AI itself has been around for over 50 years – such norms are certainly needed. Governments around the world know this too: they are grappling with the new issues that large language models (LLMs), generative AI, and chatbots are raising every day, not least in the realm of copyright. For example, one EU body, EUIPO, has published a 436-page study, “The Development Of Generative Artificial Intelligence From A Copyright Perspective”. Similarly, the US Copyright Office has produced a three-part report that “analyzes copyright law and policy issues raised by artificial intelligence”. The first two parts were on Digital Replicas and Copyrightability. The last part, just released in pre-publication form, is on Generative AI Training. It is one of the best introductions to the field, and not too long – only 113 pages.

Alongside these government moves to understand this area, there are of course efforts by the copyright industry itself to shape the legal landscape of generative AI. Back in March, Walled Culture wrote about a UK campaign called “Make It Fair”, and now there is a similar attempt to reduce everything to a slogan by a European coalition of “authors, performers, publishers, producers, and cultural enterprises”. The new campaign is called “Stay True to the Act” – the Act in question being the EU Artificial Intelligence Act. The main document explaining the latest catchphrase comes from the European Publishers Council, and provides numerous insights into the industry’s thinking here. It comes as no surprise to read the following:

Let’s be clear: our content—paid for through huge editorial investments—is being ingested by AI systems without our consent and without compensation. This is not innovation; it is copyright theft.

As Walled Culture explained in March, that’s not true: material is not stolen, it is simply analysed as part of the AI training. Analysing texts or images is about knowledge acquisition, not copyright infringement.

In the Stay True to the Act document, this tired old trope of “copyright theft” leads naturally to another obsession of the copyright world: a demand for what it calls “fair licences”. Walled Culture the book (free digital versions available) noted that this is something that the industry has constantly pushed for. Back in 2013, a series of ‘Licences for Europe’ stakeholder dialogues were held, for example. They were based on the assumption that modernising copyright meant bringing in licensing for everything that occurred online. If a call for yet more licensing is old hat, the campaign’s next point is a novel one:

AI systems don’t just scrape our articles—they also capture our website layouts, our user activity, and data that is critical to our advertising models.

It’s hard to understand what the problem is here, other than the general concern about bots visiting and scraping sites – something that is indeed getting out of hand in terms of volume and impact on servers. It’s not as if generative AI cares about Web site design, and it’s hard to see what data about advertising models can be gleaned. It’s also worth noting that this is the only point where members of the general public are mentioned in the entire document, albeit only as “users”. When it comes to copyright, publishers don’t care about the rights or the opinions of ordinary citizens. Publishers do care about journalists, at least to the following extent:

AI-generated content floods the market with synthetic articles built from our journalism. Search engines like Google’s and chatbots like ChatGPT, increasingly serve AI summaries which is wiping out the traffic we rely on, especially from dominant players.

The statement that publishers “rely on” traffic from search engines is an unexpected admission. The industry’s main argument for the “link tax” that is now part of the EU Copyright Directive was that search engines were giving nothing significant back when their search results linked to the original article, and should therefore pay something. Now publishers are admitting that the traffic from search engines is something they “rely on”. Alongside that significant U-turn on the part of the publishers, there is a serious general point about journalism in the age of AI:

These [generative AI] tools don’t create journalism. They don’t do fact-checking, hold power to account, or verify sources. They operate with no editorial standards, no legal liability—and no investment in the public interest. And yet, without urgent action, there is a danger they will replace us in the digital experience.

This is an extremely important issue, and the publishers are right to flag it up. But demanding yet more licensing agreements with AI companies is not the answer. Even if the additional monies were all spent on bolstering reporting – a big “if” – the sums involved would be too small to matter. Licensing does not address the root problem, which is that important kinds of journalism need to be supported and promoted in new ways.

One solution is that adopted by the Guardian newspaper, which is funded by its readers who want to read and sustain high-quality journalism. This could be part of a wider move to the “true fans” idea discussed in Walled Culture the book. Another approach is for more government support – at arm’s length – for journalism of the kind produced by the BBC, say, where high editorial standards ensure that fact-checking and source verification are routinely carried out – and budgeted for.

Complementing such direct support for journalism, new laws are needed to disincentivise the creation of misleading fake news stories and outright lies that increasingly drown out the truth. The Stay True to the Act document suggests “platform liability for AI-generated content”, and that could be part of the answer; but the end users who produce such material should also face consequences for their actions.

In its concluding section, “3-Pillar Model for the Future – and Why Licensing is Essential”, the document bemoans the fact that advertising revenue is “declining in a distorted market dominated by Google and Meta”. That is true, but only because publishers have lazily acquiesced in an adtech model based on real-time bidding for online ads powered by the constant surveillance of visitors to Web sites. A better approach is to use contextual advertising, where ads are shown according to the material being viewed. This not only requires no intrusive monitoring of the personal data of visitors, but has been found to be more effective than the current approach.

Moreover, in a nice irony, the new generation of LLMs makes providing contextual advertising extremely easy, since models can analyse and categorise online material rapidly for the purpose of choosing suitable ads to display. Sadly, publishers’ visceral hatred of the new AI technologies means that they are unable to see these kinds of opportunities alongside the threats.
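
For the curious, a minimal sketch of what that might look like in practice, assuming a generic complete(prompt) text-generation call and an invented ad inventory; no visitor data is involved at any point:

```python
# Sketch: contextual ad selection driven by an LLM categoriser.
# `complete` is a stand-in for any text-generation call (hosted or local);
# the category list and ad inventory are illustrative only.
from typing import Callable

AD_INVENTORY = {
    "travel": "Ad: discounted city breaks",
    "personal finance": "Ad: fee-free savings account",
    "technology": "Ad: developer laptop sale",
    "sport": "Ad: football season tickets",
}

def pick_contextual_ad(page_text: str, complete: Callable[[str], str]) -> str:
    """Classify the page into one ad category and return a matching ad.

    The only input is the page itself, which is the whole point of
    contextual advertising: no surveillance of the visitor is needed.
    """
    categories = ", ".join(AD_INVENTORY)
    prompt = (
        f"Classify the following article into exactly one of these "
        f"categories: {categories}.\nReply with the category only.\n\n"
        f"{page_text[:2000]}"
    )
    category = complete(prompt).strip().lower()
    # Fall back to a generic slot if the model answers off-list.
    return AD_INVENTORY.get(category, "Ad: house advertising slot")
```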

Source: European Publishers Council stays true – to the tired old trope about “copyright theft” – Walled Culture

Regeneron to Acquire all 23andMe genetic data for $256m

23andMe Holding Co. (“23andMe” or the “Company”) (OTC: MEHCQ), a leading human genetics and biotechnology company, today announced that it has entered into a definitive agreement for the sale of 23andMe to Regeneron Pharmaceuticals, Inc. (“Regeneron”) (NASDAQ: REGN), a leading U.S.-based, NASDAQ-listed biotechnology company that invents, develops and commercializes life-transforming medicines for people with serious diseases. The agreement includes Regeneron’s commitment to comply with the Company’s privacy policies and applicable law, process all customer personal data in accordance with the consents, privacy policies and statements, terms of service, and notices currently in effect and have security controls in place designed to protect such data.

[…]

Under the terms of the agreement, Regeneron will acquire substantially all of the assets of the Company, including the Personal Genome Service (PGS), Total Health and Research Services business lines, for a purchase price of $256 million. The agreement does not include the purchase of the Company’s Lemonaid Health subsidiary, which the Company plans to wind down in an orderly manner, subject to and in accordance with the agreement.

[…]

Source: Regeneron, A Leading U.S. Biotechnology Company, to Acquire

Boeing Strikes Deal with DOJ to Avoid Criminal Charges Over 737 Max Crashes

Boeing and the Department of Justice have reached an “agreement in principle” that will keep the airplane manufacturer from facing criminal charges for allegedly misleading regulators about safety features on its 737 Max plane before two separate crashes that killed 346 people. The tentative deal, according to a court filing, will see Boeing pay out $1.1 billion in penalties and safety investments, as well as set aside an additional $444 million for the families of victims involved in the crashes.

Boeing’s payments will include $487.2 million paid as a criminal monetary penalty and $455 million to “strengthen the Company’s compliance, safety, and quality programs.” The company will also promise to “improve the effectiveness of its anti-fraud compliance and ethics program” to hopefully avoid the whole allegedly lying to the government thing. The DOJ is also requiring Boeing’s Board of Directors to meet with the families of victims to “hear directly from them about the impact of the Company’s conduct, as well as the Company’s compliance, safety, and quality programs.”

While the settlement will result in more money being made available to the surviving families of the victims, the resolution is not what some of the relatives were looking for. Paul Cassell, an attorney for some of the families, issued a statement earlier this week when word of the agreement started circulating: “Although the DOJ proposed a fine and financial restitution to the victims’ families, the families that I represent contend that it is more important for Boeing to be held accountable to the flying public.”

The families have objected to the potential of a plea deal for some time. When the DOJ first worked toward finalizing an agreement last year, Cassell said Boeing was getting “sweetheart” treatment. Mark Lindquist, another lawyer who represents victim families, said at the time that the deal “fails to acknowledge that the charged crime of Conspiracy to Defraud caused the death of 346 people. This is a sore spot for victim families who want accountability and acknowledgment.”

[…]

The case against Boeing stemmed from the company’s alleged attempts to conceal potential safety concerns with its 737 Max aircraft during the Federal Aviation Administration’s certification process. The company is accused of failing to disclose that its software system could turn the plane’s nose down without pilot input based on sensor data. Faulty readings from that sensor caused two separate flights to go nose down, and pilots were unable to override it and gain control, ultimately resulting in the planes crashing.
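
The filings describe behaviour, not code, but the underlying failure mode (an automated flight-control command that trusts a single sensor) is easy to illustrate. A hypothetical sketch, not Boeing’s actual implementation, with invented thresholds:

```python
# Illustrative only: why acting on one angle-of-attack (AoA) sensor is fragile.
# Thresholds and function names are invented for this sketch.

AOA_STALL_THRESHOLD_DEG = 15.0   # command nose-down if AoA suggests a stall
MAX_SENSOR_DISAGREEMENT_DEG = 5.0

def single_sensor_command(aoa_deg: float) -> str:
    # One faulty reading is enough to command nose-down, repeatedly.
    return "PITCH_DOWN" if aoa_deg > AOA_STALL_THRESHOLD_DEG else "NO_ACTION"

def cross_checked_command(aoa_left_deg: float, aoa_right_deg: float) -> str:
    # Disagreeing sensors mean the data cannot be trusted: disable the
    # automation and alert the crew instead of acting on bad data.
    if abs(aoa_left_deg - aoa_right_deg) > MAX_SENSOR_DISAGREEMENT_DEG:
        return "DISENGAGE_AND_ALERT"
    mean_aoa = (aoa_left_deg + aoa_right_deg) / 2
    return "PITCH_DOWN" if mean_aoa > AOA_STALL_THRESHOLD_DEG else "NO_ACTION"

# A stuck sensor reading 74.5 degrees while the other reads 2 degrees:
print(single_sensor_command(74.5))        # PITCH_DOWN (wrongly)
print(cross_checked_command(74.5, 2.0))   # DISENGAGE_AND_ALERT
```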

Boeing already reached one settlement with the Department of Justice over the 737 Max crashes, agreeing to pay $2.5 billion to avoid prosecution, but it violated the terms of that settlement, which opened it back up to potential charges.

Source: Boeing Strikes Deal with DOJ to Avoid Criminal Charges Over 737 Max Crashes

New Orleans police secretly used facial recognition on over 200 live camera feeds

New Orleans’ police force secretly used constant facial recognition to seek out suspects for two years. An investigation by The Washington Post discovered that the city’s police department was using facial recognition technology on a privately owned camera network to continually look for suspects. This appears to violate a city ordinance passed in 2022 that required the NOLA police to use facial recognition only to search for specific suspects of violent crimes, and to provide details about the scans’ use to the city council. However, WaPo found that officers did not reveal their reliance on the technology in the paperwork for several arrests where facial recognition was used, and none of those cases were included in mandatory city council reports.
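
To get a sense of how little engineering such a system needs, here is a minimal sketch of watchlist matching on a single video frame using the open-source face_recognition library; the watchlist image and matching tolerance are assumptions, and this is of course not the software New Orleans used:

```python
# Sketch of watchlist matching on a video frame with the open-source
# face_recognition library. The watchlist photo and the 0.6 tolerance
# are assumptions for illustration.
import face_recognition

# Precomputed 128-dimension encodings of "wanted" faces.
watchlist = [
    face_recognition.face_encodings(
        face_recognition.load_image_file("suspect.jpg"))[0],
]

def scan_frame(frame) -> bool:
    """Return True if any face in the frame matches the watchlist.

    `frame` is an RGB image array, e.g. one frame from a camera feed.
    """
    for encoding in face_recognition.face_encodings(frame):
        matches = face_recognition.compare_faces(
            watchlist, encoding, tolerance=0.6)
        if any(matches):
            return True
    return False
```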

“This is the facial recognition technology nightmare scenario that we have been worried about,” said Nathan Freed Wessler, an ACLU deputy director. “This is the government giving itself the power to track anyone — for that matter, everyone — as we go about our lives walking around in public.” Wessler added that this is the first known case in a major US city where police used AI-powered automated facial recognition to identify people in live camera feeds for the purpose of making immediate arrests.

Police use and misuse of surveillance technology has been thoroughly documented over the years. Although several US cities and states have placed restrictions on how law enforcement can use facial recognition, those limits won’t do anything to protect privacy if they’re routinely ignored by officers.

Read the full story on the New Orleans PD’s surveillance program at The Washington Post.

Source: New Orleans police secretly used facial recognition on over 200 live camera feeds

FBI Director Kash Patel Abruptly Closes Internal Watchdog Office Overseeing Surveillance Compliance

If there’s one thing the Federal Bureau of Investigation does well, it’s mass surveillance. Several years ago, then attorney general William Barr established an internal office to curb the FBI’s abuse of one controversial surveillance law. But recently, the FBI’s long-time hater (and, ironically, current director) Kash Patel shut down the watchdog group with no explanation.

On Tuesday, the New York Times reported that Patel suddenly closed the Office of Internal Auditing that Barr created in 2020. The office’s leader, Cindy Hall, abruptly retired. People familiar with the matter told the outlet that the closure of the watchdog group, alongside that of the Office of Integrity and Compliance, is part of an internal reorganization. Sources also reportedly said that Hall was trying to expand the office’s work, but her attempts to onboard new employees were stopped by the Trump administration’s hiring freezes.

The Office of Internal Auditing was a response to controversy surrounding the FBI’s use of Section 702 of the Foreign Intelligence Surveillance Act. The 2008 law primarily addresses surveillance of non-Americans abroad. However, Jeramie Scott, senior counselor at the Electronic Privacy Information Center, told Gizmodo via email that the FBI “has repeatedly abused its ability to search Americans’ communications ‘incidentally’ collected under Section 702” to conduct warrantless spying.

Patel has not released any official comment regarding his decision to close the office. But Elizabeth Goitein, senior director at the Brennan Center for Justice, told Gizmodo via email, “It is hard to square this move with Mr. Patel’s own stated concerns about the FBI’s use of Section 702.”

Last year, Congress reauthorized Section 702 despite mounting concerns over its misuses. Although Congress introduced some reforms, the updated legislation actually expanded the government’s surveillance capabilities. At the time, Patel slammed the law’s passage, stating that former FBI director Christopher Wray, who Patel once tried to sue, “was caught last year illegally using 702 collection methods against Americans 274,000 times.” (Per the New York Times, Patel is likely referencing a declassified 2023 opinion by the FISA court that used the Office of Internal Auditing’s findings to determine the FBI made 278,000 bad queries over several years.)

According to Goitein, the office has “played a key role in exposing FBI abuses of Section 702, including warrantless searches for the communication of members of Congress, judges, and protesters.” And ironically, Patel inadvertently drove its creation after attacking the FBI’s FISA applications to wiretap a former Trump campaign advisor in 2018 while investigating potential Russian election interference. Trump and his supporters used Patel’s attacks to push their own narrative dismissing any concerns. Last year, former representative Devin Nunes, who is now CEO of Truth Social, said Patel was “instrumental” to uncovering the “hoax and finding evidence of government malfeasance.”

Although Patel mostly peddled conspiracies, the Justice Department conducted a probe into the FBI’s investigation that raised concerns over “basic and fundamental errors” it committed. In response, Barr created the Office of Internal Auditing, stating, “What happened to the Trump presidential campaign and his subsequent Administration after the President was duly elected by the American people must never happen again.”

But since taking office, Patel has changed his tune about FISA. During his confirmation hearing, Patel referred to Section 702 as a “critical tool” and said, “I’m proud of the reforms that have been implemented and I’m proud to work with Congress moving forward to implement more.” However, reforms don’t mean much by themselves. As Goitein noted, “Without a separate office dedicated to surveillance compliance, [the FBI’s] abuses could go unreported and unchecked.”

[…]

Source: FBI Director Kash Patel Abruptly Closes Internal Watchdog Office Overseeing Surveillance Compliance

Russia to enforce location tracking app on all foreigners in Moscow

The Russian government has introduced a new law that makes installing a tracking app mandatory for all foreign nationals in the Moscow region.

The new proposal was announced by the chairman of the State Duma, Vyacheslav Volodin, who presented it as a measure to tackle migrant crimes.

“The adopted mechanism will allow us, using modern technologies, to strengthen control in the field of migration, and will also contribute to reducing the number of violations and crimes in this area,” stated Volodin.

Using a mobile application that all foreigners will have to install on their smartphones, the Russian state will receive the following information:

  • Residence location
  • Fingerprint
  • Face photograph
  • Real-time geo-location monitoring

“If migrants change their actual place of residence, they will be required to inform the Ministry of Internal Affairs (MVD) within three working days,” the high-ranking politician explained.

The measures will not apply to diplomats of foreign countries or citizens of Belarus.

Foreigners attempting to evade their obligations under the new law will be added to a registry of monitored individuals and deported from Russia.

Russian internet freedom observatory Roskomsvoboda’s reactions to this proposal reflect skepticism and concern.

Lawyer Anna Minushkina noted that the proposal violates Articles 23 and 24 of the Russian Constitution, guaranteeing the right to privacy.

President of the Uzbek Community in Moscow, Viktor Teplyankov, characterized the initiative as “ill-conceived and difficult to implement,” expressing doubts about its feasibility.

Finally, PSP Foundation’s Andrey Yakimov warned that such aggressive measures are bound to deter potential labor migrants, creating a different problem in the country.

The proposal hasn’t reached its final form yet; specifics, such as what happens if a device is lost or stolen, along with similar technical and practical obstacles, are to be worked out in upcoming meetings between the Ministry and regional authorities.

The mass-surveillance experiment will run until September 2029, and if deemed successful, the mechanism will extend to cover more parts of the country.

Source: Russia to enforce location tracking app on all foreigners in Moscow

Google found not compliant with GDPR when registering new accounts – sends the data to over 70 services without user knowledge

According to a ruling by the Berlin Regional Court, Google must disclose to its users which of its more than 70 services process their data when they register for an account. The civil chamber thus upheld a lawsuit filed by the German Association of Consumer Organizations (vzbv). The consumer advocates had complained that neither the “express personalization” nor the alternative “manual personalization” complied with the legal requirements of the European General Data Protection Regulation (GDPR).

The ruling against Google Ireland Ltd. was handed down on March 25, 2025, but was only published on Friday (case number 15 O 472/22). The decision is not yet legally binding because the internet company has appealed the ruling. Google stated that it disagrees with the Regional Court’s decision.

What does Google process data for?

The consumer advocates argued that consumers must know what Google processes their data for when registering. Users must be able to freely decide how their data is processed. The judges at the Berlin Regional Court confirmed this legal opinion. The ruling states: “In this case, transparency is lacking simply because the defendant does not provide information about the individual Google services, Google apps, Google websites, or Google partners for which the data is to be used.” For this reason, the scope of consent is completely unknown to the user.

Google: Account creation has changed

Google stated that the ruling concerned an old account creation process that has since been changed. “What hasn’t changed is our commitment to enabling our users to use Google on their terms, with clear choices and control options based on extensive research, testing, and guidelines from European data protection authorities,” it stated. In the proceedings, Google argued that listing all services would result in excessively long text and harm transparency. This argument was rejected by the court. In the court’s view, information about the scope of consent is among the minimum details required by law. The regional court was particularly concerned that with “express personalization,” users only had the option of consenting to all data usage or cancelling the process; a differentiated refusal was not possible. Even with “manual personalization,” consumers could not refuse the use of their location.
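
The substance of the ruling is granularity: consent must be scoped to individual services and purposes rather than bundled into a single yes or no. A minimal sketch of the difference, using hypothetical purpose names:

```python
# Hypothetical sketch: bundled vs. per-service consent records.
# Purpose names are illustrative, not Google's actual service list.

# What the court objected to: one opaque yes/no covering everything,
# leaving the scope of consent unknown to the user.
bundled_consent = {"personalisation": True}

# What per-service transparency looks like: the user can see, grant,
# and refuse each processing purpose individually.
granular_consent = {
    "search_history_personalisation": True,
    "video_recommendations": False,
    "ad_targeting": False,
    "location_based_results": False,
}

def may_process(consent: dict, purpose: str) -> bool:
    # Absence of a recorded choice means no consent (the GDPR default).
    return consent.get(purpose, False)
```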

Source: Landgericht Berlin: Google-Accounterstellung verletzte DSGVO | heise online

Respond to the EU on allowing corporations to shut down sections of the internet with no recourse before 28th May

After LaLiga accidentally shut down Cloudflare and Vercel in Spain (LaLiga Piracy Blocks Randomly Take Down huge innocent segments of internet with no recourse or warning, slammed as “Unaccountable Internet Censorship”) and the Italian Piracy Shield shut down Google Drive in Italy (Massive expansion of Italy’s Piracy Shield underway despite growing criticism of its flaws and EU illegality), as well as many other innocent IP addresses, all in the name of combating illegal online streaming, the EU has launched a feedback initiative. Considering how the DMCA in the US has been weaponised, leading to all kinds of invalid takedowns that are very hard to fight (see here for examples), I really don’t want to see the EU take the path of being in the pocket of big corporations with unchecked powers to censor the internet. Take the time to respond to this!

The Commission Recommendation of 4 May 2023 on combating online piracy of sports and other live events encourages Member States and relevant stakeholders to take effective, appropriate and proportionate measures to combat unauthorised retransmissions of such events.

Source: Combating online piracy of sports and other live events – assessment of the May 2023 Commission Recommendation

House of Lords shows it is in the pocket of big copyright and pushes back against government’s AI plans

The government has suffered another setback in the House of Lords over its plans to let artificial intelligence firms use copyright-protected work without permission.

An amendment to the data bill requiring AI companies to reveal which copyrighted material is used in their models was backed by peers, despite government opposition.

It is the second time parliament’s upper house has demanded tech companies make clear whether they have used copyright-protected content.

The vote came days after hundreds of artists and organisations including Paul McCartney, Jeanette Winterson, Dua Lipa and the Royal Shakespeare Company urged the prime minister not to “give our work away at the behest of a handful of powerful overseas tech companies”.

The amendment was tabled by crossbench peer Beeban Kidron and was passed by 272 votes to 125.

The bill will now return to the House of Commons. If the government removes the Kidron amendment, it will set the scene for another confrontation in the Lords next week.

Lady Kidron said: “I want to reject the notion that those of us who are against government plans are against technology. Creators do not deny the creative and economic value of AI, but we do deny the assertion that we should have to build AI for free with our work, and then rent it back from those who stole it.

“My lords, it is an assault on the British economy and it is happening at scale to a sector worth £120bn to the UK, an industry that is central to the industrial strategy and of enormous cultural import.”

The government’s copyright proposals are the subject of a consultation due to report back this year, but opponents of the plans have used the data bill as a vehicle for registering their disapproval.

The main government proposal is to let AI firms use copyright-protected work to build their models without permission, unless the copyright holders signal they do not want their work to be used in that process – a solution that critics say is impractical and unworkable.

Source: House of Lords pushes back against government’s AI plans | Artificial intelligence (AI) | The Guardian

The problem is that the actual creators never see much of the copyright income – it all goes to the giant copyright-holding behemoths, which keep it for themselves.

And given the way AI systems are trained, they do not keep a copy of the work they ingest, just as a human doesn’t keep a copy. So to say that a system can only ingest a work if permission is given is like saying a specific person can only read that work if permission is given.

So anything that is freely available is fair game. If an AI wants to read a book, it should buy that book. Once.

Moderna’s Vaccine for both Flu and Covid Works—Now Politics Could Sink It

Moderna’s mRNA-based flu and covid-19 vaccine could provide the best of both worlds—if it’s actually ever approved by the Food and Drug Administration.

This week, scientists at Moderna published data from a Phase III trial testing the company’s combination vaccine, codenamed mRNA-1083. Individuals given mRNA-1083 appeared to generate the same or even greater immune response compared to those given separate vaccines, the researchers found. But the FDA’s recent policy change on vaccine approvals, orchestrated by Health Secretary Robert F. Kennedy Jr., could imperil the development of this and other future vaccines.

The trial involved 8,000 people split into two age groups: those between the ages of 50 and 64, and those over 65. People were randomly given mRNA-1083 (plus a placebo) or two already approved flu and covid-19 vaccines.

The vaccine seemed effective across both age groups, with mRNA-1083 participants showing at least the same level of humoral immune response (antibody-based) to circulating flu and covid-19 strains as participants who were given the separate vaccines. On average, this response was actually higher to the flu strains in particular among those given mRNA-1083. The experimental vaccine also appeared to be safe and well-tolerated, as the authors explained in their paper, published Wednesday in JAMA.

The study results are certainly encouraging, and typically they would pave the way toward a surefire FDA approval. But the political situation has changed for the worse. The Department of Health and Human Services recently mandated an overhaul of the vaccine approval process, one that will require all new vaccines to undergo placebo-controlled trials to receive approval.

While many experimental vaccines today are placebo-tested (including the original covid-19 vaccines), it’s unclear whether this order will also apply to vaccines that can be compared to existing vaccines, like the combination mRNA-1083 vaccine, or to vaccines that have to be regularly updated to match fast-evolving viruses like the flu and covid-19.

Some vaccine experts have said that these changes are unnecessary and potentially unethical, since they could leave some people vulnerable to an infection that already has a vaccine. The new rule also might delay the availability of upcoming seasonal vaccines, particularly the current covid-19 shots.

A potentially important wrinkle for the mRNA-1083 vaccine is that no mRNA-based vaccine for the flu is currently approved. That reality could very well be all that the FDA needs to demand further placebo-controlled trials. RFK Jr. and other recent Trump appointees have also been highly skeptical of mRNA-based vaccines in general, despite no strong evidence that these vaccines are significantly less safe than other types. Kennedy, who has a long history of supporting the anti-vaccination movement, has even wrongly declared that the mRNA covid-19 vaccine was the “deadliest vaccine ever made.”

Moderna stated last week it doesn’t expect its mRNA-1083 vaccine to be approved before 2026, following the FDA’s request for late-stage data showing the vaccine’s effectiveness against flu specifically. But it’s worth wondering if even that timeline is now in jeopardy under the current public health regime.

Source: Moderna’s Super-Vaccine for Flu and Covid Works—Now Politics Could Sink It

Google will pay Texas $1.4B to settle claims the company collected users’ data without permission

[…] “For years, Google secretly tracked people’s movements, private searches, and even their voiceprints and facial geometry through their products and services. I fought back and won.”

The agreement settles several claims Texas made against the search giant in 2022 related to geolocation, incognito searches and biometric data. The state argued Google was “unlawfully tracking and collecting users’ private data.”

Paxton claimed, for example, that Google collected millions of biometric identifiers, including voiceprints and records of face geometry, through such products and services as Google Photos and Google Assistant.

Google spokesperson José Castañeda said the agreement settles an array of “old claims,” some of which relate to product policies the company has already changed.

[…]

Texas previously reached two other key settlements with Google within the last two years, including one in December 2023 in which the company agreed to pay $700 million and make several other concessions to settle allegations that it had been stifling competition against its Android app store.

Meta has also agreed to a $1.4 billion settlement with Texas in a privacy lawsuit over allegations that the tech giant used users’ biometric data without their permission.

Source: Google will pay Texas $1.4B to settle claims the company collected users’ data without permission | AP News

US senator introduces bill calling for location-tracking on AI chips to limit China access

A U.S. senator introduced a bill on Friday that would direct the Commerce Department to require location verification mechanisms for export-controlled AI chips, in an effort to curb China’s access to advanced semiconductor technology.

Called the “Chip Security Act,” the bill calls for AI chips under export regulations, and products containing those chips, to be fitted with location-tracking systems to help detect diversion, smuggling or other unauthorized use of the product.

“With these enhanced security measures, we can continue to expand access to U.S. technology without compromising our national security,” Republican Senator Tom Cotton of Arkansas said.

The bill also calls for companies exporting the AI chips to report to the Bureau of Industry and Security if their products have been diverted away from their intended location or subject to tampering attempts.

The move comes days after U.S. President Donald Trump said he would rescind and modify a Biden-era rule that curbed the export of sophisticated AI chips with the goal of protecting U.S. leadership in AI and blocking China’s access.

U.S. Representative Bill Foster, a Democrat from Illinois, also plans to introduce a bill on similar lines in the coming weeks, Reuters reported on Monday.

Restricting China’s access to AI technology that could enhance its military capabilities has been a key focus for U.S. lawmakers, and reports of widespread smuggling of Nvidia’s (NVDA.O) […]
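
The bill leaves the actual mechanism open, but one approach researchers have floated for chip geolocation is delay-based attestation: the chip answers cryptographic pings from known landmark servers, and because no signal travels faster than light, the measured round-trip time puts a hard upper bound on the chip’s distance from each landmark. A toy sketch of that bound, using an invented measurement:

```python
# Toy sketch of delay-based location bounding: the round-trip time to a
# landmark server puts a hard upper bound on the chip's distance from it.
SPEED_OF_LIGHT_KM_S = 299_792.458

def max_distance_km(rtt_seconds: float) -> float:
    # The signal travels out and back, so one-way time is at most rtt/2;
    # any processing delay only shrinks the true distance further.
    return SPEED_OF_LIGHT_KM_S * rtt_seconds / 2

# Hypothetical measurement: a 4 ms RTT to a landmark server means the
# chip is within ~600 km of it, so it cannot be answering from overseas.
print(f"{max_distance_km(0.004):.0f} km")  # ~600 km
```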

Source: US senator introduces bill calling for location-tracking on AI chips to limit China access | Reuters

Of course, it also adds another layer of US government spying on you if you want to buy a graphics card. I’m not sure how letting anyone track all your PCs does not compromise national security.

EU prepares to give new rights to live streaming sites, to the detriment of the Internet and its users

At the heart of Walled Culture the book (free digital versions available) lies the dispiriting saga of how the EU Copyright Directive came into being. It began in early 2013 with the usual “stakeholder dialogue”, in which the European Commission sought the views of the various constituencies affected. It generated an unprecedentedly large response that was surprising given the dry and dusty nature of copyright law. As the European Commission’s Report on the consultation noted:

The public consultation generated broad interest with more than 9,500 replies to the consultation document and a total of more than 11,000 messages, including questions and comments, sent to the Commission’s dedicated email address. A number of initiatives were also launched by organized stakeholders that nurtured the debate around the public consultation and drew attention to it.

Some 5,600 citizens took the trouble to respond, despite the lack of an easy online interface for doing so: responses required a document to be completed and then emailed. Numerous problems with the existing copyright system were raised, particularly in the light of the shift from analogue to digital technologies. Despite that welcome engagement, and the many substantive issues that were raised, the public’s comments and concerns were almost entirely ignored in the final result of the legislative process. Instead, the EU Copyright Directive gave yet more rights to copyright holders, and undermined the freedom of speech and privacy rights of ordinary people.

[…]

The standard mechanism for giving the copyright world what it wants, while pretending to respect democratic processes, has been set in motion again. The European Commission has just launched a “Call for Evidence in view of the assessment of the Recommendation on combating online piracy of sports and other live events”. The Recommendation referred to there was published two years ago. It explores the unauthorised retransmissions of live sports and other live events online, the next battleground for the copyright world, ever keen to expand its rights and powers.

[…]

Those further measures are likely to involve yet more one-sided legislation in favour of the copyright world, as with the EU Copyright Directive. Such laws are already being discussed in the US. But there is a significant difference between what happened back in 2013, and the latest call for evidence. In 2013, people were warning about the possible effects of various bad policy options that might be adopted. The copyright world naturally dismissed those concerns as fear mongering, which allowed its allies within the European Parliament to push through precisely those bad policy options in the final text of the Directive.

But when it comes to unauthorised retransmissions of live events, we already have a wealth of evidence of how disproportionate attempts to rein in such streams can be harmful. The main example of what not to do comes from Italy, whose Piracy Shield is shaping up to be the worst copyright enforcement scheme since France’s Hadopi (also discussed in detail in Walled Culture the book).

The central problem is overblocking. For example, back in March last year, Walled Culture reported that one of Cloudflare’s Internet addresses had been blocked by Piracy Shield. There were over 40 million domains associated with the blocked address. Compounding the problem is a lack of transparency about which sites are being blocked, and the failure to provide a rigorous and rapid complaint procedure for fixing such far-reaching blunders. […] the damage could easily go well beyond the inconvenience of millions of people being blocked from accessing their files on Google Drive, as happened last year.
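
The root cause is that on a CDN such as Cloudflare, a single IP address fronts vast numbers of unrelated domains, so an IP-level block takes all of them down at once. A few lines of Python make the sharing visible; the domain names below are placeholders, and real groupings depend on live DNS answers:

```python
# Demonstrates why IP-level blocking overblocks: unrelated domains behind
# a CDN often resolve to the same shared address.
import socket
from collections import defaultdict

# Placeholder domains; actual groupings depend on live DNS answers.
domains = ["example-blog.com", "example-shop.com", "example-news.com"]

by_ip = defaultdict(list)
for domain in domains:
    try:
        by_ip[socket.gethostbyname(domain)].append(domain)
    except socket.gaierror:
        pass  # domain did not resolve

for ip, shared in by_ip.items():
    if len(shared) > 1:
        print(f"Blocking {ip} would also take down: {', '.join(shared)}")
```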

[…]

Despite these serious issues, Italy seems determined to make Piracy Shield even worse by building it out in a number of ill-advised ways, including the extension of blackout orders to VPNs and public DNS providers, and the obligation for search engines to de-index sites. Worryingly, a new “Study on the Effectiveness and the Legal and Technical Means of Implementing Website-Blocking Orders” from the World Intellectual Property Organisation (WIPO) holds up Italy’s approach as an example of a “well-functioning site-blocking system”.

Nor is Italy alone in demonstrating the harms this approach to dealing with unauthorised rebroadcasts of sports events gives rise to. In Spain, attempts by La Liga, the country’s top professional football league, to tackle the problem have also led to overblocking,

[…]

German ISPs have been implementing a secret block list of allegedly infringing sites, including those offering streams, for years, and without any court oversight. The lack of transparency of this approach was underlined when the list was accidentally exposed before being hidden away once more.

As the above makes clear, the blocking of allegedly infringing streaming sites is already happening across the EU in an uncontrolled way, and with little to no effective judicial oversight. The copyright industry can present this as a kind of fait accompli, and ask the EU to bring in laws to formalise the situation. In doing so, it will gloss over the numerous and deep-seated problems with this approach, not least overblocking, which shuts down entirely innocent sites and offers little or no redress for the harm this causes.

The latest Call for Evidence on this important area is open until 28 May 2025. It would be good if companies, organisations and individuals could use this opportunity to alert the European Commission to the evident dangers of Piracy Shield and similar approaches, in the hope that existing implementations might be dismantled, or at least reined in, and new ones restricted.

[…]

French courts too are ordering Cloudflare to block streaming sites, […]

Source: EU prepares to give new rights to live streaming sites, to the detriment of the Internet and its users – Walled Culture

VMware perpetual license holders receive cease-and-desist letters from Broadcom

Broadcom has been sending cease-and-desist letters to owners of VMware perpetual licenses with expired support contracts, Ars Technica has confirmed.

Following its November 2023 acquisition of VMware, Broadcom ended VMware perpetual license sales. Users with perpetual licenses can still use the software they bought, but they are unable to renew support services unless they had a pre-existing contract enabling them to do so. The controversial move aims to push VMware users to buy subscriptions to bundled VMware products, with the result that associated costs have increased by 300 percent or, in some cases, more.

Some customers have opted to continue using VMware unsupported, often as they research alternatives, such as VMware rivals or devirtualization.

Over the past weeks, some users running VMware unsupported have reported receiving cease-and-desist letters from Broadcom informing them that their contract with VMware and, thus, their right to receive support services, has expired. The letter [PDF], reviewed by Ars Technica and signed by Broadcom managing director Michael Brown, tells users that they are to stop using any maintenance releases/updates, minor releases, major releases/upgrades, extensions, enhancements, patches, bug fixes, or security patches, save for zero-day security patches, issued since their support contract ended.

The letter tells users that the implementation of any such updates “past the Expiration Date must be immediately removed/deinstalled,” adding:

Any such use of Support past the Expiration Date constitutes a material breach of the Agreement with VMware and an infringement of VMware’s intellectual property rights, potentially resulting in claims for enhanced damages and attorneys’ fees.

[…]

The cease-and-desist letters also tell recipients that they could be subject to auditing.

Failure to comply with [post-expiration reporting] requirements may result in a breach of the Agreement by Customer[,] and VMware may exercise its right to audit Customer as well as any other available contractual or legal remedy.

[…]

Since Broadcom ended VMware’s perpetual licenses and increased pricing, numerous users and channel partners, especially small-to-medium-sized companies, have had to reduce or end business with VMware. Most of Members IT Group’s VMware customer base is now running VMware unsupported.

[…]

Source: VMware perpetual license holders receive cease-and-desist letters from Broadcom – Ars Technica

Iconic Gun-Makers Gave Sensitive Customer Information to Political Operatives in secret for decades

For years, America’s most iconic gun-makers turned over sensitive personal information on hundreds of thousands of customers to political operatives.

Those operatives, in turn, secretly employed the details to rally firearm owners to elect pro-gun politicians running for Congress and the White House, a ProPublica investigation has found.

The clandestine sharing of gun buyers’ identities — without their knowledge and consent — marked a significant departure for an industry that has long prided itself on thwarting efforts to track who owns firearms in America.

At least 10 gun industry businesses, including Glock, Smith & Wesson, Remington, Marlin and Mossberg, handed over names, addresses and other private data to the gun industry’s chief lobbying group, the National Shooting Sports Foundation. The NSSF then entered the gun owners’ details into what would become a massive database.

The data initially came from decades of warranty cards filled out by customers and returned to gun manufacturers for rebates and repair or replacement programs.

A ProPublica review of dozens of warranty cards from the 1970s through today found that some promised customers their information would be kept strictly confidential. Others said some information could be shared with third parties for marketing and sales. None of the cards informed buyers their details would be used by lobbyists and consultants to win elections.

[…]

The undisclosed collection of intimate gun owner information is in sharp contrast with the NSSF’s public image.

[…]

For two decades, the group positioned itself as an unwavering watchdog of gun owner privacy. The organization has raged against government and corporate attempts to amass information on gun buyers. As recently as this year, the NSSF pushed for laws that would prohibit credit card companies from creating special codes for firearms dealers, claiming the codes could be used to create a registry of gun purchasers.

As a group, gun owners are fiercely protective about their personal information. Many have good reasons. Their ranks include police officers, judges, domestic violence victims and others who have faced serious threats of harm.

In a statement, the NSSF defended its data collection. Any suggestion of “unethical or illegal behavior is entirely unfounded,” the statement said, adding that “these activities are, and always have been, entirely legal and within the terms and conditions of any individual manufacturer, company, data broker, or other entity.”

The gun industry companies either did not respond to ProPublica or declined to comment, noting they are under different ownership today and could not find evidence that customer information was previously shared. One ammunition maker named in the NSSF documents as a source of data said it never gave the trade group or its vendors any “personal information.”

ProPublica established the existence of the secret program after reviewing tens of thousands of internal corporate and NSSF emails, reports, invoices and contracts. We also interviewed scores of former gun executives, NSSF employees, NRA lobbyists and political consultants in the U.S. and the United Kingdom.

The insider accounts and trove of records lay bare a multidecade effort to mobilize gun owners as a political force. Confidential information from gun customers was central to what NSSF called its voter education program. The initiative involved sending letters, postcards and later emails to persuade people to vote for the firearms industry’s preferred political candidates. Because privacy laws shield the names of firearm purchasers from public view, the data NSSF obtained gave it a unique ability to identify and contact large numbers of gun owners or shooting sports enthusiasts.

It also allowed the NSSF to figure out whether a gun buyer was a registered voter. Those who weren’t would be encouraged to register and cast their ballots for industry-supported politicians.
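
Mechanically, that kind of match is simple record linkage: normalise the name and address from the warranty-card data and look them up in a state voter file. A simplified sketch, with hypothetical field names:

```python
# Simplified record-linkage sketch: match customer records against a
# voter file on normalised name + address. Field names are hypothetical.
def normalise(record: dict) -> tuple:
    return (
        record["last_name"].strip().lower(),
        record["first_name"].strip().lower()[:1],  # first initial only
        record["street"].strip().lower(),
        record["zip"][:5],
    )

def unregistered_customers(customers: list[dict],
                           voter_file: list[dict]) -> list[dict]:
    registered = {normalise(v) for v in voter_file}
    # Anyone in the customer data with no voter-file match becomes a
    # registration / get-out-the-vote target.
    return [c for c in customers if normalise(c) not in registered]
```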

From 2000 to 2016, the organization poured more than $20 million into its voter education campaign, which was initially called Vote Your Sport and today is known as GunVote. The NSSF trumpeted the success of its electioneering in reports, claiming credit for putting both George W. Bush and Donald J. Trump in the White House and firearm-friendly lawmakers in the U.S. House and Senate.

In April 2016, a contractor on NSSF’s voter education project delivered a large cache of data to Cambridge Analytica

[…]

The data given to Cambridge included 20 years of gun owners’ warranty card information as well as a separate database of customers from Cabela’s, a sporting goods retailer with approximately 70 stores in the U.S. and Canada.

Cambridge combined the NSSF data with a wide array of sensitive particulars obtained from commercial data brokers. It included people’s income, their debts, their religion, where they filled prescriptions, their children’s ages and purchases they made for their kids. For women, it revealed intimate elements such as whether the underwear and other clothes they purchased were plus size or petite.

The information was used to create psychological profiles of gun owners and assign scores to behavioral traits, such as neuroticism and agreeableness. The profiles helped Cambridge tailor the NSSF’s political messages to voters based on their personalities.

[…]

As the body count from mass shootings at schools and elsewhere in the nation has climbed, those politicians have halted proposals to resurrect the assault weapons ban and enact other gun control measures, even those popular with voters, such as raising the minimum age to buy an assault rifle from 18 to 21.

In response to questions from ProPublica, the NSSF acknowledged it had used the customer information in 2016 for “creating a data model” of potentially sympathetic voters. But the group said the “existence and proven success of that model then obviated the need to continue data acquisition via private channels and today, NSSF uses only commercial-source data to which the data model is then applied.”

[…]

Source: Iconic Gun-Makers Gave Sensitive Customer Information to Political Operatives — ProPublica

Neurotech companies are selling your brain data, senators warn

Three Democratic senators are sounding the alarm over brain-computer interface (BCI) technologies’ ability to collect — and potentially sell — our neural data. In a letter to the Federal Trade Commission (FTC), Sens. Chuck Schumer (D-NY), Maria Cantwell (D-WA), and Ed Markey (D-MA) called for an investigation into neurotechnology companies’ handling of user data, and for tighter regulations on their data-sharing policies.

“Unlike other personal data, neural data — captured directly from the human brain — can reveal mental health conditions, emotional states, and cognitive patterns, even when anonymized,” the letter reads. “This information is not only deeply personal; it is also strategically sensitive.”

While the concept of neural technologies may conjure up images of brain implants like Elon Musk’s Neuralink, there are far less invasive — and less regulated — neurotech products on the market, including headsets that help people meditate, purportedly trigger lucid dreaming, and promise to help users with online dating by helping them swipe through apps “based on your instinctive reaction.” These consumer products gobble up insights about users’ neurological data — and since they aren’t categorized as medical devices, the companies behind them aren’t barred from sharing that data with third parties.

“Neural data is the most private, personal, and powerful information we have—and no company should be allowed to harvest it without transparency, ironclad consent, and strict guardrails. Yet companies are collecting it with vague policies and zero transparency,” Schumer told The Verge via email.

The letter cites a 2024 report by the Neurorights Foundation, which found that most neurotech companies not only have few safeguards on user data but also have the ability to share sensitive information with third parties. The report looked at the data policies of 30 consumer-facing BCI companies and found that all but one “appear to have access to” users’ neural data, “and provide no meaningful limitations to this access.” The Neurorights Foundation only surveyed companies whose products are available to consumers without the help of a medical professional; implants like those made by Neuralink weren’t among them.

The companies surveyed by the Neurorights Foundation make it difficult for users to opt out of having their neurological data shared with third parties. Just over half the companies mentioned in the report explicitly let consumers revoke consent for data processing, and only 14 of the 30 give users the ability to delete their data. In some instances, user rights aren’t universal — for example, some companies only let users in the European Union delete their data but don’t grant the same rights to users elsewhere in the world.

To safeguard against potential abuses, the senators are calling on the FTC to:

  • investigate whether neurotech companies are engaging in unfair or deceptive practices that violate the FTC Act
  • compel companies to report on data handling, commercial practices, and third-party access
  • clarify how existing privacy standards apply to neural data
  • enforce the Children’s Online Privacy Protection Act as it relates to BCIs
  • begin a rulemaking process to establish safeguards for neural data, setting limits on secondary uses like AI training and behavioral profiling
  • and ensure that both invasive and noninvasive neurotechnologies are subject to baseline disclosure and transparency standards, even when the data is anonymized

Though the senators’ letter calls out Neuralink by name, Musk’s brain implant tech is already subject to more regulations than other BCI technologies. Since Neuralink’s brain implant is considered a “medical” technology, it’s required to comply with the Health Insurance Portability and Accountability Act (HIPAA), which safeguards people’s medical data.

Stephen Damianos, the executive director of the Neurorights Foundation, said that HIPAA may not have entirely caught up to existing neurotechnologies, especially with regards to “informed consent” requirements.

“There are long-established and validated models for consent from the medical world, but I think there’s work to be done around understanding the extent to which informed consent is sufficient when it comes to neurotechnology,” Damianos told The Verge. “The analogy I like to give is, if you were going through my apartment, I would know what you would and wouldn’t find in my apartment, because I have a sense of what exactly is in there. But brain scans are overbroad, meaning they collect more data than what is required for the purpose of operating a device. It’s extremely hard — if not impossible — to communicate to a consumer or a patient exactly what can today and in the future be decoded from their neural data.”

Data collection becomes even trickier for “wellness” neurotechnology products, which don’t have to comply with HIPAA, even when they advertise themselves as helping with mental health conditions like depression and anxiety.

Damianos said there’s a “very hazy gray area” between medical devices and wellness devices.

“There’s this increasingly growing class of devices that are marketed for health and wellness as distinct from medical applications, but there can be a lot of overlap between those applications,” Damianos said. The dividing line is often whether a medical intermediary is required to help someone obtain a product, or whether they can “just go online, put in your credit card, and have it show up in a box a few days later.”

There are very few regulations on neurotechnologies advertised as being for “wellness.” In April 2024, Colorado passed the first-ever legislation protecting consumers’ neural data. The state updated its existing Consumer Protection Act, which protects users’ “sensitive data.” Under the updated legislation, “sensitive data” now includes “biological data” like biological, genetic, biochemical, physiological, and neural information. And in September 2024, California amended its Consumer Privacy Act to protect neural data.

“We believe in the transformative potential of these technologies, and I think sometimes there’s a lot of doom and gloom about them,” Damianos told The Verge. “We want to get this moment right. We think it’s a really profound moment that has the potential to reshape what it means to be human. Enormous risks come from that, but we also believe in leveraging the potential to improve people’s lives.”

Source: Neurotech companies are selling your brain data, senators warn | The Verge

Can the EU’s Dual Strategy of Regulation and Investment Redefine AI Leadership?

Beyond sharing a pair of vowels, AI and the EU both present significant challenges when it comes to setting the right course. This article makes the case that reducing regulation for large general-purpose AI providers under the EU’s competitiveness agenda is not a silver bullet for catching Europe up to the US and China, and would only serve to entrench European dependencies on US tech. Instead, by combining its regulatory toolkit and ambitious investment strategy, the EU is uniquely positioned to set the global standard for trustworthy AI and pursue its tech sovereignty. It is an opportunity that Europe must take.

Recent advances in AI have drastically shortened the improvement cycle from years to months, thanks to new inference-time compute techniques that enable self-prompting, chain-of-thought reasoning in models like OpenAI’s o1 and DeepSeek’s R1. However, these rapid gains also increase risks like AI-enabled cyber offenses and biological attacks. Meanwhile, the EU and France recently committed €317 billion to AI development in Europe, joining a global race with comparably large announcements from both the US and China.
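
For readers unfamiliar with the term, “inference-time compute” means spending more computation per question rather than training a bigger model, for example by sampling several step-by-step answers and keeping the majority verdict (self-consistency). A minimal sketch, with sample(prompt) standing in for any stochastic model call:

```python
# Minimal sketch of inference-time compute via self-consistency:
# sample several chain-of-thought answers, keep the majority verdict.
# `sample` stands in for any stochastic text-generation call.
from collections import Counter
from typing import Callable

def self_consistent_answer(question: str,
                           sample: Callable[[str], str],
                           n: int = 5) -> str:
    prompt = f"{question}\nThink step by step, then end with 'Answer: <x>'."
    answers = []
    for _ in range(n):
        reply = sample(prompt)  # one independent reasoning path
        answers.append(reply.rsplit("Answer:", 1)[-1].strip())
    # More samples (more inference-time compute) give a more reliable vote.
    return Counter(answers).most_common(1)[0][0]
```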

Now turning to EU AI policy, the newly established AI Office and 13 independent experts are nearing the end of a nine-month multistakeholder drafting process for the Code of Practice (CoP): the voluntary technical details of the AI Act’s mandatory provisions for general-purpose AI providers. The vast majority of the rules will apply only to the largest model providers, ensuring proportionality: the protection of SMEs, start-ups, and other downstream industries. In the meantime, the EU has fully launched a competitiveness agenda, with the Commission’s recently published Competitiveness Compass and first omnibus simplification package outlining plans for widespread streamlining of reporting obligations, amidst mounting pushback against this simplified narrative. Add to this the recent withdrawal of the AI Liability Directive, and it’s clear to see which way the political winds are blowing.

So why must this push for simplification be replaced by a push for trustworthy market creation in the case of general-purpose AI and the Code of Practice? I’ll make three main points:

  1. Regulation is not the reason Europe lacks Big Tech companies.
  2. Sweeping deregulation creates legal uncertainty and liability risks for downstream deployers, and slows trusted adoption of new technologies – and thereby growth.
  3. Watering down the CoP for upstream model providers with systemic risk will almost exclusively benefit large US incumbents, entrenching dependency and preventing tech sovereignty.

[…]

The EU’s tech ecosystem had ample time to emerge in the years preceding and following the turn of the century, free of so-called “red tape,” but this did not happen then and will not happen now through deregulation […] One reason presented by Bradford is that the European digital single market still remains fragmented, with differing languages, cultures, consumer preferences, administrative obstacles, and tax regimes preventing large tech companies from seamlessly growing within the bloc and throughout the world. Even more fragmented are the capital markets of the EU, resulting in poor access to venture capital for tech start-ups and scale-ups. Additional points include harsh, national-level bankruptcy laws that are “creditor-oriented” in the EU, compared to more forgiving “debtor-friendly” equivalents in the US, resulting in lower risk appetite among European entrepreneurs. Finally, skilled migration is significantly more streamlined in the US, with federal-level initiatives like the H-1B visa leading to the majority of Big Tech CEOs hailing from overseas.

[…]

The downplaying of regulation as Europe’s AI hindrance has been repeated by leading industry voices such as US VC firm a16z, European VC firm Merantix Capital, and French model provider Mistral AI. To reiterate: the EU ‘lagging behind’ on trillion-dollar tech companies and the accompanying innovation was not a result of regulation before there was regulation, and is also not a result of regulation after.

[…]

Whether for planes, cars, or drugs, early use of dangerous new technologies, without accompanying rules, saw frequent preventable accidents, reducing consumer trust and slowing market growth. Now, with robust checks and balances in place from well-resourced regulatory authorities, such markets have been able to thrive, providing value and innovation to citizens. Other sectors, like nuclear energy and, more recently, crypto, have suffered from an initial lack of regulation, causing industry corner-cutting, leading to infamous disasters (from Fukushima to the collapse of FTX) from which public trust has been difficult to win back. Regulators around the world are currently risking the same fate for AI.

This point is particularly relevant for so-called ‘downstream deployers’: companies that build applications on top of underlying models provided (usually) by Big Tech. Touted by European VC leader Robert Lacher as Europe’s “huge opportunity” in AI, downstream deployers, particularly SMEs, stand to gain from the Code of Practice, which ensures that necessary regulatory checks and balances occur upstream, at the level of the model provider.

[…]

Finally, the EU’s enduring and now potentially crippling dependency on US technology companies has been addressed head-on by the new Commission, best exemplified by the title of Executive Vice President Henna Virkkunen’s file: Tech Sovereignty, Security and Democracy. With the last few months’ geopolitical developments, including all-time-low transatlantic relations and an unfolding trade war, some have gone as far as warning of the possibility of US technology being used for surveillance of Europe, and of the US sharing intelligence with Russia. Clearly, the urgency of tech sovereignty has drastically increased. A strong Code of Practice would return agency to the EU, ensuring that US upstream incumbents meet basic security, safety, and ethical standards, whilst also easing the EU’s AI adoption problem by ensuring the technology is truly trustworthy.

So, concretely, what needs to be done? Bruegel economist Mario Mariniello summed it up concisely: “On tech regulation, the European Union should be bolder.”

[…]

This article has outlined why deregulating highly capable AI models, produced by the world’s largest companies, is not a solution to Europe’s growth problem. Instead of stripping back the obligations that protect European citizens, the EU must combine its ambitious AI investment plan with boldly pursuing leadership in setting global standards, accelerating trustworthy adoption and ensuring tech sovereignty. This combination will put Europe on the right path to drive this technological revolution forward for the benefit of all.

Source: Can the EU’s Dual Strategy of Regulation and Investment Redefine AI Leadership? | TechPolicy.Press

Europe’s Tech Sovereignty Demands More Than Competitiveness

BRUSSELS – As part of his confrontational stance toward Europe, US President Donald Trump could end up weaponizing critical technologies. The European Union must appreciate the true nature of this threat instead of treating the US as merely an economic competitor that remains, otherwise, an ally. To achieve true tech sovereignty, the EU should transcend its narrow focus on competitiveness and deregulation and adopt a far more ambitious strategy.

[…]

Europe’s growing anxiety about competitiveness is fueled by its inability to challenge US-based tech giants where it counts: in the market. As the Draghi report points out, the productivity gap between the United States and the EU largely reflects the relative weakness of Europe’s tech sector. Recent remarks by European Commission President Ursula von der Leyen and Tech Commissioner Henna Virkkunen suggest that policymakers have taken Draghi’s message to heart, making competitiveness the central focus of EU tech policy. But this singular focus is both insufficient and potentially counterproductive at a time of technological and geopolitical upheaval. While pursuing competitiveness could reduce Big Tech’s influence over Europe’s economy and democratic institutions, it could just as easily entrench it. European leaders’ current fixation on deregulation – turbocharged by the Draghi report – leaves EU policymaking increasingly vulnerable to lobbying by powerful corporate interests and risks legitimizing policies that are incompatible with fundamental European values.

As a result, the European Commission’s deregulatory measures – including its recent decision to shelve draft AI and privacy rules, and its forthcoming “simplification” of tech legislation including the GDPR – are more likely to benefit entrenched tech giants than they are to support startups and small and medium-size enterprises. Meanwhile, Europe’s hasty and uncritical push for “AI competitiveness” risks reinforcing Big Tech’s tightening grip on the AI technology stack.

It should come as no surprise that the Draghi report’s deregulatory agenda was warmly received in Silicon Valley, even by Elon Musk himself. But the ambitions of some tech leaders go far beyond cutting red tape. Musk’s use of X (formerly Twitter) and Starlink to interfere in national elections and the war in Ukraine, together with the Trump administration’s brazen attacks on EU tech regulation, show that Big Tech’s quest for power poses a serious threat to European sovereignty.

Europe’s most urgent task, then, is to defend its citizens’ rights, sovereignty, and core values from increasingly hostile American tech giants and their allies in Washington. The continent’s deep dependence on US-controlled digital infrastructure – from semiconductors and cloud computing to undersea cables – not only undermines its competitiveness by shutting out homegrown alternatives but also enables the owners of that infrastructure to exploit it for profit.

[…]

Strong enforcement of competition law and the Digital Markets Act, for example, could curb Big Tech’s influence while creating space for European startups and challengers to thrive. Similarly, implementing the Digital Services Act and the AI Act will protect citizens from harmful content and dangerous AI systems, empowering Europe to offer a genuine alternative to Silicon Valley’s surveillance-driven business models. Against this backdrop, efforts to develop homegrown European alternatives to Big Tech’s digital infrastructure have been gaining momentum. A notable example is the so-called “Eurostack” initiative, which should be viewed as a key step in defending Europe’s ability to act independently.

[…]

A “competitive” economy holds little value if it comes at the expense of security, a fair and safe digital environment, civil liberties, and democratic values. Fortunately, Europe doesn’t have to choose. By tackling its technological dependencies, protecting democratic governance, and upholding fundamental rights, it can foster the kind of competitiveness it truly needs.

Source: Europe’s Tech Sovereignty Demands More Than Competitiveness by Marietje Schaake & Max von Thun – Project Syndicate

Deregulation has led to huge problems globally: the monopoly and duopoly problems we can’t seem to deal with; reliance on external markets and companies that whimsically change their minds; unsustainable hardware and software choices that leave devices brickable, poorly secured and irreparable; vendor lock-in to closed-source ecosystems; damage to innovation; privacy invasions that open the door to hacking attacks; and so on. As Europe we can make our own choices about our own values – we are not driven by the single motive of profit. European values are inclusive and also promote things like education and happiness.

Employee monitoring app exposes 21M work screens to the internet

A surveillance tool meant to keep tabs on employees is leaking millions of real-time screenshots onto the open web.

Your boss watching your screen isn’t the end of the story. Everyone else might be watching, too. Researchers at Cybernews have uncovered a major privacy breach involving WorkComposer, a workplace surveillance app used by over 200,000 people across countless companies.

The app, designed to track productivity by logging activity and snapping regular screenshots of employees’ screens, left over 21 million images exposed in an unsecured Amazon S3 bucket, broadcasting how workers go about their day frame by frame.

[…]

WorkComposer is one of many time-tracking tools that have crept into modern work culture. Marketed as a way to keep teams “accountable,” the software logs keystrokes, tracks how long you spend on each app, and snaps desktop screenshots every few minutes.

The leak shows just how dangerous this setup becomes when basic security hygiene is ignored. A leak of this magnitude turns everyday work activity into a goldmine for cybercriminals.

[…]

The leak’s real-time nature only amplifies the danger, as threat actors could monitor unfolding business operations as they happen, giving them access to otherwise locked-down environments.

[…]

Source: Employee monitoring app exposes 21M work screens | Cybernews

Can you imagine the time lost by employers going over all this data?

Microsoft Windows’ new built-in spying tool, Recall, is a really great idea too. Not.

Internet Archive Sued for $700m by Record Labels over Digitising Pre-1960 Songs. Petition to Rescue the Internet Archive

A dramatic appeal hopes to ensure the survival of the nonprofit Internet Archive. The signatories of a petition, which is now open for further signatures, are demanding that the US recording industry association RIAA and participating labels such as Universal Music Group (UMG), Capitol Records, Sony Music, and Arista drop their lawsuit against the online library. The legal dispute, pending since mid-2023 and expanded in March, centers on the “Great 78” project. This project aims to save 500,000 song recordings by digitizing 250,000 records from the period 1880 to 1960. Various institutions and collectors have donated the records, which play at 78 revolutions per minute (“shellac” discs), so that the Internet Archive can put this cultural treasure online.

The music companies originally demanded $372 million for the online publication of the songs and the associated “mass theft.” They recently increased their demand to $700 million for potential copyright infringement. The basis for the lawsuit is the Music Modernization Act, which US President Donald Trump approved in 2018. This includes the CLASSICS Act, a law that retroactively introduces federal copyright protection for sound recordings made before 1972, which until then were protected in the US by a patchwork of state laws. The monopoly rights now apply US-wide for a good 100 years (for recordings made before 1946) or until 2067 (for recordings made between 1947 and 1972).

The lawsuit ultimately threatens the existence of the entire Internet Archive, including the widely known Wayback Machine, the petitioners say. This important public service is used by millions of people every day to access historical “snapshots” of the web. Journalists, educators, researchers, lawyers, and citizens use it to verify sources, investigate disinformation, and maintain public accountability. The legal attack also puts a “critical infrastructure of the internet” at risk – and at a time when digital information is being deleted, overwritten, and destroyed: “We cannot afford to lose the tools that preserve memory and defend facts.” The Internet Archive was forced to delete 500,000 books as recently as 2024. It also continually struggles with cyberattacks.

The case is called Universal Music Group et al. v. Internet Archive. The lawsuit was originally filed in the U.S. District Court for the Southern District of New York (Case No. 1:23-cv-07133), but is now pending in the U.S. District Court for the Northern District of California (Case No. 3:23-cv-6522). The Internet Archive takes the position that the Great 78 project does not harm the music industry. Quite the opposite: Anyone who wants to enjoy music uses commercial streaming services anyway; the old 78 rpm shellac recordings are study material for researchers.

Source: Suit of record labels: Petition to rescue the Internet Archive | heise online (NB this is a Google Translate page from the original German page)

Original page here: https://www.heise.de/news/Klage-von-Plattenlabels-Petition-zur-Rettung-des-Internet-Archive-10358777.html

How can copyright law be so incredibly wrong all the time?!

A Data Breach at Yale New Haven Health Compromised 5.5 Million Patients’ Information

[…] Yale New Haven Health (YNHHS) is a massive nonprofit healthcare network in Connecticut. Hackers stole the data of more than 5.5 million individuals during an attack in March 2025.

[…]

According to a public notice on the YNHHS website, the organization discovered “unusual activity” on its system on March 8, 2025, which was later identified as unauthorized third-party access that allowed bad actors to copy certain patient data. While the information stolen varies by individual, it may include the following:

  • Name
  • Date of birth
  • Address
  • Phone number
  • Email address
  • Race
  • Ethnicity
  • Social Security number
  • Patient type
  • Medical record number

YNHHS says the breach did not include access to medical records, treatment information, or financial data (such as account and payment information).

[…]

Source: A Data Breach at Yale New Haven Health Compromised 5.5 Million Patients’ Information | Lifehacker

Wait – race and ethnicity?!

Perplexity CEO says its browser will track everything users do online to sell ‘hyper personalized’ ads

CEO Aravind Srinivas said this week on the TBPN podcast that one reason Perplexity is building its own browser is to collect data on everything users do outside of its own app. This is so it can sell premium ads.

“That’s kind of one of the other reasons we wanted to build a browser, is we want to get data even outside the app to better understand you,” Srinivas said. “Because some of the prompts that people do in these AIs is purely work-related. It’s not like that’s personal.”

And work-related queries won’t help the AI company build an accurate-enough dossier.

“On the other hand, what are the things you’re buying; which hotels are you going [to]; which restaurants are you going to; what are you spending time browsing, tells us so much more about you,” he explained.

Srinivas believes that Perplexity’s browser users will be fine with such tracking because the ads should be more relevant to them.

“We plan to use all the context to build a better user profile and, maybe you know, through our discover feed we could show some ads there,” he said.

The browser, named Comet, suffered setbacks but is on track to be launched in May, Srinivas said.

He’s not wrong, of course. Quietly following users around the internet helped Google become the roughly $2 trillion market cap company it is today.

[…]

Meta’s ad tracking technology, Pixels, which is embedded on websites across the internet, is how Meta gathers data, even on people that don’t have Facebook or Instagram accounts. Even Apple, which has marketed itself as a privacy protector, can’t resist tracking users’ locations to sell advertising in some of its apps by default.

On the other hand, this kind of thing has led people across the political spectrum in the U.S. and in Europe to distrust big tech.

The irony of Srinivas openly explaining his browser-tracking ad-selling ambitions this week also can’t be overstated.

Google is currently in court fighting the U.S. Department of Justice, which has alleged Google behaved in monopolistic ways to dominate search and online advertising. The DOJ wants the judge to order Google to divest Chrome.

Both OpenAI and Perplexity — not surprisingly, given Srinivas’ reasons — said they would buy the Chrome browser business if Google was forced to sell.

Source: Perplexity CEO says its browser will track everything users do online to sell ‘hyper personalized’ ads | TechCrunch

Yup, so even if Mozilla is making Firefox more invasive, it’s still better than these guys.

Blue Shield of California Exposed the Data of 4.7 Million People to Google for targeted advertising

Blue Shield of California shared the protected health information of 4.7 million individuals with Google over a nearly three-year period, a data breach that impacts the majority of its nearly 6 million members, according to reporting from Bleeping Computer.

This isn’t the only large data breach to affect a healthcare organization in the last year alone. Community Health Center records were hacked in October 2024, compromising more than a million individuals’ data, along with an attack on lab testing company Lab Services Cooperative, which affected the records of 1.6 million Planned Parenthood patients. UnitedHealth Group suffered a breach in February 2024, resulting in the leak of more than 100 million people’s data.

What happened with Blue Shield of California?

According to an April 9 notice posted on Blue Shield of California’s website, the company allowed certain data, including protected health information, to be shared with Google Ads through Google Analytics, which may have allowed Google to serve targeted ads back to members. While not discovered until Feb. 11, 2025, the leak occurred for several years, from April 2021 to January 2024, when the connection between Google Analytics and Google Ads was severed on Blue Shield websites.

The following Blue Shield member information may have been compromised:

  • Insurance plan name, type, and group number
  • City and zip code
  • Gender
  • Family size
  • Blue Shield assigned identifiers for online accounts
  • Medical claim service date and provider
  • Patient name
  • Patient financial responsibility
  • “Find a Doctor” search criteria and results

According to the notice, no additional personal data—Social Security numbers, driver’s license numbers, and banking and credit card information—were disclosed. Blue Shield also states that no bad actor was involved, nor have they confirmed that the information has been used maliciously.

[…]

Source: Blue Shield of California Exposed the Data of 4.7 Million People to Google | Lifehacker

Tesla now seems to be remotely hacking odometers to weasel out of warranty repairs. Time to stop DMCA-type laws globally.

A lawsuit filed in February accuses Tesla of remotely altering odometer values on failure-prone cars, in a bid to push these lemons beyond the 50,000 mile warranty limit:

https://www.thestreet.com/automotive/tesla-accused-of-using-sneaky-tactic-to-dodge-car-repairs

The suit was filed by a California driver who bought a used Tesla with 36,772 miles on it. The car’s suspension kept failing, necessitating multiple servicings, and that was when the plaintiff noticed that the odometer readings for his identical daily drive were going up by ever-larger increments. This wasn’t exactly subtle: he was driving 20 miles per day, but the odometer was clocking 72.35 miles/day. Still, how many of us monitor our daily odometer readings?

In short order, his car’s odometer had rolled over the 50k mark and Tesla informed him that they would no longer perform warranty service on his lemon. Right after this happened, the new mileage clocked by his odometer returned to normal. This isn’t the only Tesla owner who’s noticed this behavior: Tesla subreddits are full of similar complaints:

https://www.reddit.com/r/RealTesla/comments/1ca92nk/is_tesla_inflating_odometer_to_show_more_range/

This isn’t Tesla’s first dieselgate scandal. In the summer of 2023, the company was caught lying to drivers about its cars’ range:

https://pluralistic.net/2023/07/28/edison-not-tesla/#demon-haunted-world

Drivers noticed that they were getting far fewer miles out of their batteries than Tesla had advertised. Naturally, they contacted the company for service on their faulty cars. Tesla then set up an entire fake service operation in Nevada that these calls would be diverted to, called the “diversion team.” Drivers with range complaints were put through to the “diverters” who would claim to run “remote diagnostics” on their cars and then assure them the cars were fine. They even installed a special xylophone in the diversion team office that diverters would ring every time they successfully deceived a driver.

These customers were then put in an invisible Tesla service jail. Their Tesla apps were silently altered so that they could no longer book service for their cars for any reason – instead, they’d have to leave a message and wait several days for a callback. The diversion center racked up 2,000 calls/week and diverters were under strict instructions to keep calls under five minutes. Eventually, these diverters were told that they should stop actually performing remote diagnostics on the cars of callers – instead, they’d just pretend to have run the diagnostics and claim no problems were found (so if your car had a potentially dangerous fault, they would falsely claim that it was safe to drive).

Most modern cars have some kind of internet connection, but Tesla goes much further. By design, its cars receive “over-the-air” updates, including updates that are adverse to drivers’ interests. For example, if you stop paying the monthly subscription fee that entitles you to use your battery’s whole charge, Tesla will send a wireless internet command to your car to restrict your driving to only half of your battery’s charge.

This means that your Tesla is designed to follow instructions that you don’t want it to follow, and, by design, those instructions can fundamentally alter your car’s operating characteristics. For example, if you miss a payment on your Tesla, it can lock its doors and immobilize itself, then, when the repo man arrives, it will honk its horn, flash its lights, back out of its parking spot, and unlock itself so that it can be driven away:

https://tiremeetsroad.com/2021/03/18/tesla-allegedly-remotely-unlocks-model-3-owners-car-uses-smart-summon-to-help-repo-agent/

Some of the ways that your Tesla can be wirelessly downgraded (like disabling your battery) are disclosed at the time of purchase. Others (like locking you out and summoning a repo man) are secret. But whether disclosed or secret, both kinds of downgrade depend on the genuinely bizarre idea that a computer that you own, that is in your possession, can be relied upon to follow orders from the internet even when you don’t want it to. This is weird enough when we’re talking about a set-top box that won’t let you record a TV show – but when we’re talking about a computer that you put your body into and race down the road at 80mph inside of, it’s frankly terrifying.

[…]

Laws that ban reverse-engineering are a devastating weapon that corporations get to use in their bid to subjugate and devour the human race.

The US isn’t the only country with a law like Section 1201 of the DMCA. Over the past 25 years, the US Trade Representative has arm-twisted nearly every country in the world into passing laws that are nearly identical to America’s own disastrous DMCA. Why did countries agree to pass these laws? Well, because they had to, or the US would impose tariffs on them:

https://pluralistic.net/2025/03/03/friedmanite/#oil-crisis-two-point-oh

The Trump tariffs change everything, including this thing. There is no reason for America’s (former) trading partners to continue to enforce the laws it passed to protect Big Tech’s right to twiddle their citizens. That goes double for Tesla: rather than merely complaining about Musk’s Nazi salutes, countries targeted by the regime he serves could retaliate against him, in devastating fashion. By abolishing their anticircumvention laws, countries around the world would legalize jailbreaking Teslas, allowing mechanics to unlock all the subscription features and software upgrades for every Tesla driver, as well as offering their own software mods. Not only would this tank Tesla stock and force Musk to pay back the loans he collateralized with his shares (loans he used to buy Twitter and the US presidency), it would also abolish sleazy gimmicks like hacking drivers’ odometers to get out of paying for warranty service:

https://pluralistic.net/2025/03/08/turnabout/#is-fair-play

Source: Pluralistic: Tesla accused of hacking odometers to weasel out of warranty repairs (15 Apr 2025) – Pluralistic: Daily links from Cory Doctorow

Discord Wants Your Face: Begins Testing Facial Scans for Age Verification

Discord has begun requiring some users in the United Kingdom and Australia to verify their age through a facial scan before being permitted to access sensitive content. The chat app’s new process has been described as an “experiment,” and comes in response to laws passed in those countries that place guardrails on youth access to online platforms. Discord has also been the target of concerns that it does not sufficiently protect minors from sexual content.

Users may be asked to verify their age when encountering content that has been flagged by Discord’s systems as being sensitive in nature, or when they change their settings to enable access to sensitive content. The app will ask users to scan their face through a computer or smartphone webcam; alternatively, they can scan a driver’s license or other form of ID.

[…]

Source: Discord Begins Testing Facial Scans for Age Verification

Age verification is impossible to do correctly, incredibly privacy-invasive, and a very tempting target for hackers. The UK, Australia, and every other country considering age verification are seriously endangering their citizens.

Fortunately you can always hold up a picture from a magazine in front of the webcam.