The Linkielist

Linking ideas with the world

NY Post: Fact Checking Is Now Censorship

This was inevitable, ever since Donald Trump and the MAGA world freaked out over social media’s attempts to fact-check the President, deeming them “censorship.” The reaction was both swift and entirely predictable. After all, how dare anyone question Dear Leader’s proclamations, even if they are demonstrably false? It wasn’t long before we started to see opinion pieces from MAGA folks breathlessly declaring that “fact-checking private speech is outrageous.” There were even politicians proposing laws to ban fact-checking.

In their view, the best way to protect free speech is apparently (?!?) to outlaw speech you don’t like.

This trend has only accelerated in recent years. Last year, Congress got in on the game, arguing that fact-checking is a form of censorship that needs to be investigated. Not to be outdone, incoming FCC chair Brendan Carr has made the same argument.

With last week’s announcement by Mark Zuckerberg that Meta was ending its fact-checking program, the anti-fact-checking rhetoric hasn’t slowed down one bit.

The NY Post now has an article with the hilarious headline: “The incredible, blind arrogance of the ‘fact-checking’ censors.”

So let’s be clear here: fact-checking is speech. Fact-checking is not censorship. It is protected by the First Amendment. Indeed, in olden times, when free speech supporters would talk about the “marketplace of ideas” and the “best response to bad speech is more speech,” they meant things like fact-checking. They meant that if someone were blathering on about utter nonsense, then a regime that enabled more speech could come along and fact-check folks.

There is no “censorship” involved in fact-checking. There is only a question of how others respond to the fact checks.

[…]

There’s a really fun game that the Post Editorial Board is playing here, pretending that they’re just fine with fact-checking, unless it leads to “silencing.”

The real issue, that is, isn’t the checking, it’s the silencing.

But what “silencing” ever actually happened due to fact-checking? And when was it caused by the government (which would be necessary for it to violate the First Amendment)? The answer is none.

The piece whines about a few NY Post articles that had limited reach on Facebook, but that’s Facebook’s own free speech as well, not censorship.

[…]

The Post goes on with this fun set of words:

Yes, the internet is packed with lies, misrepresentations and half-truths: So is all human conversation.

The only practical answer to false speech is, and always has been, true speech; it doesn’t stop the liars or protect all the suckers, but most people figure it out well enough.

Shutting down debate in the name of “countering disinformation” only serves the liars with power or prestige or at least the right connections.

First off, the standard saying is that the response to false speech should be “more speech,” not necessarily “true speech.” But more to the point, uh, how do you get that “true speech”? Isn’t it… fact-checking? And if, as the NY Post suggests, the problem here is false speech in the fact checks, then shouldn’t the response be more speech in response, rather than silencing the fact-checkers?

I mean, their own argument isn’t even internally consistent.

They’re literally saying that we need more “truthful speech” and less “silencing of speech” while cheering on the silencing of organizations who try to provide more truthful speech.

[…]

Source: NY Post: Fact Checking Is Now Censorship | Techdirt

Hello Fascism in the 4th Reich!

Why has Zuckerberg stopped Meta fact-checking? Trump lifetime prison threats and FCC Section 230 removal threats?

If you only remember two things about the government pressure campaign to influence Mark Zuckerberg’s content moderation decisions, make it these: Donald Trump directly threatened to throw Zuck in prison for the rest of his life, and just a couple months ago FCC Commissioner (soon to be FCC chair) Brendan Carr threatened Meta that if it kept on fact-checking stories in a way Carr didn’t like, he would try to remove Meta’s Section 230 protections in response.

Two months later — what do you know? — Zuckerberg ended all fact-checking on Meta. But when he went on Joe Rogan, rather than blaming those actual obvious threats, he instead blamed the Biden administration, because some admin officials sent angry emails… which Zuck repeatedly admits had zero impact on Meta’s actual policies.

[…]

This is a simplified version of what happened, which can be summarized as: the actual threats came from the GOP, to which Zuckerberg quickly caved. The supposed threats from the Biden admin were overhyped, exaggerated, and misrepresented, and Zuck directly admits he was able to easily refuse those requests.

All the rest is noise.

[Here follows a long detailed unpacking of the Rogan interview]

As mentioned in my opening, Donald Trump directly threatened to throw Zuck in prison for the rest of his life if Facebook didn’t moderate the way he wanted. And just a couple months ago, FCC Commissioner (soon to be FCC chair) Brendan Carr threatened Meta that if it kept on fact-checking stories in a way Carr didn’t like, he would try to remove Meta’s Section 230 protections in response.

None of that came up in this discussion. The only “government pressure” that Zuck talks about is from the Biden admin with “cursing,” which he readily admits they weren’t intimidated by.

So we have Biden officials who were, perhaps, mean, but not so threatening that Meta felt the need to bow down to them. And then we have Trump himself and leading members of his incoming administration who sent direct and obvious threats, which Zuck almost immediately bowed down to and caved.

And yet Rogan (and much of the media covering this podcast) claims he “revealed” how the Biden admin violated the First Amendment. Hell, the NY Post even ran an editorial pretending that Zuck didn’t go far enough because he didn’t reveal all of this in time for the Murthy case. And that’s only because the author doesn’t realize he literally is talking about the documents in the Murthy case.

The real story here is that Zuckerberg caved to Trump’s threats while feeling free to push back on the Biden admin. Rogan at one point rants about how Trump will now protect Zuck because Trump “uniquely has felt the impact of not being able to have free speech.” Given what actually happened, that claim is particularly ironic.

[…]

Strip away all the spin and misdirection, and the truth is inescapable: Zuckerberg folded like a cheap suit in the face of direct threats from Trump and his lackeys, while barely batting an eye at some sternly worded emails from Biden officials.

[…]

Source: Rogan Misses The Mark: How Zuck’s Misdirection On Gov’t Pressure Goes Unchallenged | Techdirt

Google won’t add fact-checks despite new EU law

Google has told the EU it will not add fact checks to search results and YouTube videos or use them in ranking or removing content, despite the requirements of a new EU law, according to a copy of a letter obtained by Axios.

The big picture: Google has never included fact-checking as part of its content moderation practices. The company had signaled privately to EU lawmakers that it didn’t plan to change its practices, but it’s reaffirming its stance ahead of a voluntary code becoming law in the near future.

Zoom in: In a letter written to Renate Nikolay, the deputy director general under the content and technology arm at the European Commission, Google’s global affairs president Kent Walker said the fact-checking integration required by the Commission’s new Disinformation Code of Practice “simply isn’t appropriate or effective for our services” and said Google won’t commit to it.

  • The code would require Google to incorporate fact-check results alongside Google’s search results and YouTube videos. It would also force Google to build fact-checking into its ranking systems and algorithms.
  • Walker said Google’s current approach to content moderation works and pointed to successful content moderation during last year’s “unprecedented cycle of global elections” as proof.
  • He said a new feature added to YouTube last year that enables some users to add contextual notes to videos “has significant potential.” (That program is similar to X’s Community Notes feature, as well as a new program announced by Meta last week.)

Catch up quick: The EU’s Code of Practice on Disinformation, introduced in 2022, includes several voluntary commitments that tech firms and private companies, including fact-checking organizations, are expected to deliver on.

  • The Code, originally created in 2018, predates the EU’s new content moderation law, the Digital Services Act (DSA), which went into effect in 2022.

State of play: The Commission has held private discussions over the past year with tech companies, urging them to convert the voluntary measures into an official code of conduct under the DSA.

  • Walker said in his letter Thursday that Google had already told the Commission that it didn’t plan to comply.
  • Google will “pull out of all fact-checking commitments in the Code before it becomes a DSA Code of Conduct,” he wrote.
  • He said Google will continue to invest in improvements to its current content moderation practices, which focus on providing people with more information about their search results through features like SynthID watermarking and AI disclosures on YouTube.

Zoom out: The news comes amid a global reckoning about the role tech platforms should play in fact-checking and policing speech.

Source: Google won’t add fact-checks despite new EU law

You don’t need to make yourself up like a clown to defeat AI face detection

In a pre-print paper titled “Novel AI Camera Camouflage: Face Cloaking Without Full Disguise,” David Noever, chief scientist, and Forrest McKee, data scientist, describe their efforts to baffle face recognition systems through the minimal application of makeup and manipulation of image files.

Noever and McKee recount various defenses that have been proposed against facial recognition systems, including CV Dazzle, which creates asymmetries using high-contrast makeup, adversarial attack graphics that confuse algorithms, and Juggalo makeup, which can be used to obscure jaw and cheek detection.

And of course, there are masks, which have the advantage of simplicity and tend to be reasonably effective regardless of the facial recognition algorithm being used.

But as the authors observe, these techniques draw attention.

“While previous efforts, such as CV Dazzle, adversarial patches, and Juggalo makeup, relied on bold, high-contrast modifications to disrupt facial detection, these approaches often suffer from two critical limitations: their theatrical prominence makes them easily recognizable to human observers, and they fail to address modern face detectors trained on robust key-point models,” they write.

“In contrast, this study demonstrates that effective disruption of facial recognition can be achieved through subtle darkening of high-density key-point regions (e.g., brow lines, nose bridge, and jaw contours) without triggering the visibility issues inherent to overt disguises.”

[Image from the pre-print (arXiv:2412.13507) depicting a man’s face with Darth Maul-style makeup]

The research focuses on two areas: applying minimal makeup to fool Haar cascade classifiers (used for object detection in machine learning), and hiding faces in image files by manipulating the alpha transparency layer in a way that keeps faces visible to human observers but conceals them from specific reverse image search systems like BetaFaceAPI and Microsoft Bing Visual Search.
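
For context on the target being attacked: a Haar cascade is the classic fast face detector that ships with OpenCV. The sketch below is illustrative only (it is not the paper’s experimental code, and the image filename is a placeholder); it shows the detection step that the key-point darkening is meant to defeat.

```python
# Minimal Haar cascade face detection with OpenCV (pip install opencv-python).
import cv2

# OpenCV bundles pre-trained cascade XML files; this is the stock frontal-face one.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("face.jpg")  # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# The cascade slides over the image at multiple scales, looking for
# light/dark contrast patterns around the eyes, nose bridge and jaw --
# exactly the regions the paper darkens to break detection.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s)")  # a successful cloak yields 0
```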

[…]

“Despite a lot of research, masks remain one of the few surefire ways of evading these systems [for now],” she said. “However, gait recognition is becoming quite powerful, and it’s also unclear if this will supplant face recognition. It is harder to imagine practical and effective evasion strategies against this technology.”

Source: Subtle makeup tweaks can outsmart facial recognition • The Register

Meta says it isn’t ending fact-checks outside the US yet

Social media platform Meta has confirmed that its fact-checking feature on Facebook, Instagram and Threads will only be removed in the US for now, according to a Jan. 13 letter sent to Brazil’s government.

“Meta has already clarified that, at this time, it is terminating its independent Fact-Checking Program only in the United States, where we will test and refine the community notes [feature] before expanding to other countries,” Meta told Brazil’s Attorney General of the Union (AGU) in a Portuguese-translated letter.

Meta’s letter followed a 72-hour deadline Brazil’s AGU set for Meta to clarify to whom the removal of the third-party fact verification feature would apply.

It comes after Meta announced on Jan. 7 that it would remove the feature to ensure more “freedom of expression” on its platforms — as part of a broader effort to comply with corporate human rights policies.

Meta’s fact-checking program will be replaced with a community notes feature — similar to the one on Elon Musk’s X — in the US to strike a better balance between freedom of expression and security, Mark Zuckerberg’s company explained to Brazil’s AGU.

It acknowledged that abusive forms of freedom of expression might ensue and cause harm and already has automated systems in place that will identify and handle high-severity violations on its platforms — from terrorism and child sexual exploitation to fraud, scams and drug matters.


However, Brazil has expressed dissatisfaction with Meta’s removal of its fact-checking feature, Attorney-General Jorge Messias said on Jan. 10.

“Brazil has rigorous legislation to protect children and adolescents, vulnerable populations, and the business environment, and we will not allow these networks to transform the environment into digital carnage or barbarity.”


It comes as Meta’s Zuckerberg said he would work with the incoming Trump administration to push back against foreign governments that pressure US companies to censor more.

Zuckerberg is expected to attend Republican Donald Trump’s inauguration on Jan. 20.

Source: Meta says it isn’t ending fact-checks outside the US yet

Does anyone actually believe the shit Zuckerberg is pushing? It’s a great way to save money. Lots of money. And kowtow to the incoming Oligarch in chief.

Venezuela’s Internet Censorship Sparks Surge in VPN Demand

What’s Important to Know:

  • Venezuela’s Supreme Court fined TikTok US$10 million for failing to prevent viral video challenges that resulted in the deaths of three Venezuelan children.
  • TikTok faced temporary blockades by Internet Service Providers (ISPs) in Venezuela for not paying the fine.
  • ISPs used IP, HTTP, and DNS blocks to restrict access to TikTok and other platforms in early January 2025.
  • While this latest round of blockades was taking place, protests against Nicolás Maduro’s attempt to retain the presidency of Venezuela were happening across the country. Riot police were deployed in all major cities to quell protesters.
  • A significant surge in demand for VPN services has been observed in Venezuela since the beginning of 2025. Access to some VPN providers’ websites has also been restricted in the country.

In November 2024, Nicolás Maduro announced that two children had died after participating in challenges on TikTok. After a third death was announced by Education Minister Héctor Rodriguez, Venezuela’s Supreme Court issued a $10 million fine against the social media platform for failing to implement measures to prevent such incidents.

The court also ordered TikTok to open an office in Venezuela to oversee content compliance with local laws, giving the platform eight days to comply and pay the fine. TikTok failed to meet the court’s deadline to pay the fine or open an office in the country. As a result, ISPs in Venezuela, including CANTV — the state’s internet provider — temporarily blocked access to TikTok.

The blockades happened on January 7 and again on January 8, lasting several hours each. According to Netblocks.org, various methods were used to restrict access to TikTok, including IP, HTTP, and DNS blocks.

[Screenshot: Netblocks.org report indicating zero reachability for TikTok across different Venezuelan ISPs]
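
How do observers detect these blocks? For DNS-level blocking specifically, a common check is to compare answers from the ISP’s default resolver with those from an out-of-country public resolver. A minimal sketch using the dnspython library (the domain and resolver choice are illustrative):

```python
# Compare the system resolver's answer with a public resolver's answer
# to spot DNS-level tampering. Requires: pip install dnspython
import dns.resolver

def resolve(domain, nameserver=None):
    res = dns.resolver.Resolver()
    if nameserver:
        res.nameservers = [nameserver]  # override the ISP resolver
    try:
        return {rr.address for rr in res.resolve(domain, "A")}
    except Exception as exc:  # NXDOMAIN, timeout, refused, etc.
        return {f"error: {type(exc).__name__}"}

domain = "tiktok.com"
local = resolve(domain)              # whatever the ISP's resolver says
public = resolve(domain, "1.1.1.1")  # Cloudflare's public resolver
if local != public:
    print(f"{domain}: answers differ -> possible DNS block")
    print(f"  ISP resolver:    {local}")
    print(f"  public resolver: {public}")
```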

On January 9, under orders of CONATEL (Venezuela’s telecommunications regulator), CANTV and other private ISPs in the country implemented further blockades to restrict access to TikTok. They also blocked 21 VPN providers along with 33 public DNS services, as reported by VeSinFiltro.org.

[…]

vpnMentor’s Research Team first observed a significant surge in the demand for VPN services in the country back in 2024, when X was first blocked. Since then, VPN usage has continued to rise in Venezuela, reaching another remarkable surge at the beginning of 2025. VPN demand grew by over 200% from January 7th to the 8th alone, for a total growth of 328% from January 1st to January 8th. This upward trend shows signs of continuing, according to partial data from January 9th.

The increased demand for VPN services indicates a growing interest in circumventing censorship and accessing restricted content online. This trend suggests that Venezuelan citizens are actively seeking ways to bypass government-imposed restrictions on social media platforms and maintain access to a free flow of information.

[…]

Other Recent VPN Demand Surges

Online platforms are no strangers to geoblocks in different parts of the world. In fact, there have been cases where platforms themselves impose location-based access restrictions on users. For instance, Aylo/Pornhub previously geo-blocked 17 US states in response to age-verification laws that the adult site deemed unjust.

vpnMentor’s Research Team recently published a report about a staggering 1,150% VPN demand surge in Florida following the IP-block of Pornhub in the state.

Source: Venezuela’s Internet Censorship Sparks Surge in VPN Demand

VPN Demand Surge in Florida after Adult Sites Age Restriction Kicks In

What’s important to know:

  • On March 25, 2024, Florida Gov. Ron DeSantis signed a law requiring age verification for accessing pornographic sites. This law, known as House Bill 3 (HB3), passed with bipartisan support and has caused quite a stir in the online community.
  • HB3 was set to come into effect on January 1, 2025. It allows hefty fines of up to $50,000 for websites that fail to comply with the regulations.
  • In response to this new legislation, Aylo, the parent company of Pornhub, confirmed on December 18, 2024 that it would deny access to all users geo-located in the state as a form of protest against the new age-verification requirements imposed by state law.
  • Pornhub, which registered 3 billion visits from the United States in January 2024, had previously imposed access restrictions in Kentucky, Indiana, Idaho, Kansas, Nebraska, Texas, North Carolina, Montana, Mississippi, Virginia, Arkansas, and Utah. This makes Florida the 13th state without access to its website.

The interesting development following Aylo’s geo-block on Florida IP addresses is the dramatic increase in the demand for Virtual Private Network (VPN) services in the state. A VPN allows users to mask their IP addresses and encrypt their internet traffic, providing an added layer of privacy and security while browsing online.

The vpnMentor Research Team observed a significant surge in VPN usage across the state of Florida: demand began climbing in the last minutes of 2024, rose consistently through the first hours of January 1st, and peaked at a staggering 1,150% just four hours after the HB3 law came into effect.
Additionally, there was a noteworthy 51% spike in demand for VPN services in the state on December 19, 2024, the day after Aylo announced it would geo-block Florida IP addresses from accessing its website.

Florida’s new law on pornographic websites and the consequent rise of VPN usage emphasize the intricate interplay between technology, privacy, and regulatory frameworks. With laws pertaining to online activities constantly changing, it is imperative for users and website operators alike to remain knowledgeable about regulations and ensure compliance.

Past VPN Demand Surges

Aylo/Pornhub has previously geo-blocked 12 states, all of which have enforced age-verification laws that the adult site deemed unjust.

In May 2023, Pornhub’s banning of Utah-based users caused a 967% spike in VPN demand in the state, and last year the passage of adult-site age-restriction laws in Texas caused a 234.8% surge in demand there.

Source: VPN Demand Surge in Florida after Adult Sites Age Restriction Kicks In

Google brings back digital fingerprinting to track users for advertising

Google is tracking your online behavior in the name of advertising, reintroducing a data collection process that ingests all of your online signals (from IP address to complex browser information) and pinpoints unique users or devices, also known as “digital fingerprinting.”
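
At its core, a fingerprint is nothing more than a stable hash over signals a device reveals anyway. This toy sketch is not Google’s method (the signal set is invented for illustration; real systems use many more signals, from installed fonts to canvas rendering), but it shows why no cookie, and no consent prompt, is needed:

```python
# Toy fingerprint: hash a set of slowly-changing device signals into
# one stable identifier. Illustrative only.
import hashlib

def fingerprint(signals):
    # Sort keys so the same signals always yield the same digest.
    canonical = "|".join(f"{k}={v}" for k, v in sorted(signals.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visitor = {
    "ip": "203.0.113.7",
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "accept_language": "en-US,en;q=0.9",
    "screen": "2560x1440x24",
    "timezone": "Europe/Amsterdam",
}
# Same device -> same ID on every visit, with nothing stored client-side
# and therefore nothing for the user to delete or block.
print(fingerprint(visitor))
```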

The company’s updated platform program policies include relaxed restrictions on advertisers and personalized ad targeting across a range of devices, an outcome of a larger “advertising ecosystem shift” and the advancement of privacy-enhancing technologies (PETs) like on-device processing and trusted execution environments, in the words of the company.

In a departure from its longstanding pledge to user choice and privacy, Google argues these technologies offer enough protection for users while also creating “new ways for brands to manage and activate their data safely and securely.” The new feature will be available to advertisers beginning Feb. 16, 2025.

[…]

Contrary to other data collection tools like cookies, digital fingerprinting is difficult to spot, and thus hard for even privacy-conscious users to erase or block. On Dec. 19, the UK’s Information Commissioner’s Office (ICO) — a data protection and privacy regulator — labeled Google “irresponsible” for the policy change, saying the shift to fingerprinting is an unfair means of tracking users that reduces their choice and control over their personal information. The watchdog also warned that the move could encourage riskier advertiser behavior.

“Google itself has previously said that fingerprinting does not meet users’ expectations for privacy, as users cannot easily consent to it as they would cookies. This in turn means they cannot control how their information is collected. To quote Google’s own position on fingerprinting from 2019: ‘We think this subverts user choice and is wrong,'” wrote ICO executive director of regulatory risk Stephen Almond.

The ICO warned that it will intervene if Google cannot demonstrate existing legal requirements for such tech, including options to secure freely-given consent, ensure fair processing, and uphold the right to erasure: “Businesses should not consider fingerprinting a simple solution to the loss of third-party cookies and other cross-site tracking signals.”

Source: Google brings back digital fingerprinting to track users for advertising | Mashable

Telegram hands over data on 2,253 users last year (up from 108 in 2023) to US law enforcement alone after arrest of boss

Telegram reveals that the communications platform has fulfilled 900 U.S. government requests, sharing the phone number or IP address information of 2,253 users with law enforcement.

This number is a steep increase from previous years, with most requests processed after the platform’s policy shift on sharing user data, announced in September 2024.

While Telegram has long been a platform used to communicate with friends and family, talk with like-minded peers, and as a way to bypass government censorship, it is also heavily used for cybercrime.

Threat actors commonly utilize the platform to sell illegal services, conduct attacks, sell stolen data, or as command-and-control infrastructure for their malware.

As first reported by 404 Media, the new information on fulfilled law enforcement requests comes from the Telegram Transparency Report for the period between 1/1/24 and 12/13/24.

Previously, Telegram would only share users’ IP addresses and phone numbers in cases of terrorism and had only fulfilled 14 requests affecting 108 users until September 30, 2024.

[Image: Current numbers (left) and previous period figures (right). Source: BleepingComputer]

Following the change in its privacy policy, Telegram will now share user data with law enforcement in other cases of crime, including cybercrime, the selling of illegal goods, and online fraud.

[…]

This change came in response to mounting pressure from authorities, which culminated in the arrest of Telegram’s founder and CEO, Pavel Durov, in France in late August.

Durov subsequently faced a long list of charges, including complicity in cybercrime, organized fraud, and distribution of illegal material, as well as refusal to facilitate lawful interceptions aimed at aiding crime investigations.

[…]

To access Telegram transparency reports for your country, use the platform’s dedicated transparency bot.

Source: Telegram hands over data on thousands of users to US law enforcement

That’s one way to get what you want – make up spurious charges, arrest someone, and hold them for as long as it takes to get what you want, without ever having to prove you can legally get at it. If it wasn’t the government doing it, this would be called kidnapping and extortion.

Google goes to court for collecting data on users who opted out… again…

A federal judge this week rejected Google’s motion to throw out a class-action lawsuit alleging that it invaded the privacy of users who opted out of functionality that records users’ web and app activities. A jury trial is scheduled for August 2025 in US District Court in San Francisco.

The lawsuit concerns Google’s Web & App Activity (WAA) settings, with the lead plaintiff representing two subclasses of people with Android and non-Android phones who opted out of tracking. “The WAA button is a Google account setting that purports to give users privacy control of Google’s data logging of the user’s web app and activity, such as a user’s searches and activity from other Google services, information associated with the user’s activity, and information about the user’s location and device,” wrote US District Judge Richard Seeborg, the chief judge in the Northern District Of California.

Google says that Web & App Activity “saves your activity on Google sites and apps, including associated info like location, to give you faster searches, better recommendations, and more personalized experiences in Maps, Search, and other Google services.” Google also has a supplemental Web App and Activity setting that the judge’s ruling refers to as “(s)WAA.”

“The (s)WAA button, which can only be switched on if WAA is also switched on, governs information regarding a user’s ‘[Google] Chrome history and activity from sites, apps, and devices that use Google services.’ Disabling WAA also disables the (s)WAA button,” Seeborg wrote.

Google sends data to developers

But data is still sent to third-party app developers through Google Analytics for Firebase (GA4F), “a free analytical tool that takes user data from the Firebase kit and provides app developers with insight on app usage and user engagement,” the ruling said. GA4F “is integrated in 60 percent of the top apps” and “works by automatically sending to Google a user’s ad interactions and certain identifiers regardless of a user’s (s)WAA settings, and Google will, in turn, provide analysis of that data back to the app developer.”

Plaintiffs have brought claims of privacy invasion under California law. Plaintiffs “present evidence that their data has economic value,” and “a reasonable juror could find that Plaintiffs suffered damage or loss because Google profited from the misappropriation of their data,” Seeborg wrote.

[…]

In a proposed settlement of a different lawsuit, Google last year agreed to delete records reflecting users’ private browsing activities in Chrome’s Incognito mode.

[…]

Google contends that its system is harmless to users. “Google argues that its sole purpose for collecting (s)WAA-off data is to provide these analytic services to app developers. This data, per Google, consists only of non-personally identifiable information and is unrelated (or, at least, not directly related) to any profit-making objectives,” Seeborg wrote.

On the other side, plaintiffs say that Google’s tracking contradicts its “representations to users because it gathers exactly the data Google denies saving and collecting about (s)WAA-off users,” Seeborg wrote. “Moreover, Plaintiffs insist that Google’s practices allow it to personalize ads by linking user ad interactions to any later related behavior—information advertisers are likely to find valuable—leading to Google’s lucrative advertising enterprise built, in part, on (s)WAA-off data unlawfully retrieved.”

[…]

Google, as the judge writes, purports to treat user data as pseudonymous by creating a randomly generated identifier that “permits Google to recognize the particular device and its later ad-related behavior… Google insists that it has created technical barriers to ensure, for (s)WAA-off users, that pseudonymous data is delinked to a user’s identity by first performing a ‘consent check’ to determine a user’s (s)WAA settings.”

Whether this counts as personal information under the law is a question for a jury, the judge wrote. Seeborg pointed to California law that defines personal information to include data that “is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.” Given the legal definition, “a reasonable juror could view the (s)WAA-off data Google collected via GA4F, including a user’s unique device identifiers, as comprising a user’s personal information,” he wrote.
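
To make the dispute concrete, here is a hypothetical sketch of the consent-gated flow the ruling describes; every name in it is invented, and this is in no way Google’s actual code. Note that even on the “consent off” path, the same randomly generated device identifier recurs across events, which is precisely what the plaintiffs argue makes the data linkable:

```python
# Hypothetical sketch of consent-gated, pseudonymous event logging.
import uuid

events = []

def log_event(device_id, event_name, swaa_on):
    record = {"device_id": device_id, "event": event_name}
    # "Consent check": identity is attached only if (s)WAA is on.
    if swaa_on:
        record["account"] = f"account-for-{device_id[:8]}"  # stand-in join
    events.append(record)

device_id = uuid.uuid4().hex  # random, "pseudonymous" identifier

log_event(device_id, "ad_click", swaa_on=False)
log_event(device_id, "app_open", swaa_on=False)

# No account field is present, yet both records share one device_id --
# enough, plaintiffs argue, to be "reasonably capable of being
# associated" with a particular consumer.
for e in events:
    print(e)
```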

[…]

Source: Google loses in court, faces trial for collecting data on users who opted out – Ars Technica

Siri “unintentionally” recorded private convos on phone and watch, then sold them to advertisers; yes, those ads are very targeted. Apple agrees to pay $95M, laughs all the way to the bank

Apple has agreed to pay $95 million to settle a lawsuit alleging that its voice assistant Siri routinely recorded private conversations that were then shared with third parties and used for targeted ads.

In the proposed class-action settlement—which comes after five years of litigation—Apple admitted to no wrongdoing. Instead, the settlement refers to “unintentional” Siri activations that occurred after the “Hey, Siri” feature was introduced in 2014, where recordings were apparently prompted without users ever saying the trigger words, “Hey, Siri.”

Sometimes Siri would be inadvertently activated, a whistleblower told The Guardian, when an Apple Watch was raised and speech was detected. The only clue that users seemingly had of Siri’s alleged spying was eerily accurate targeted ads that appeared after they had just been talking about specific items like Air Jordans or brands like Olive Garden, Reuters noted (claims which remain disputed).

[…]

It’s currently unknown how many customers were affected, but if the settlement is approved, the tech giant has offered up to $20 per Siri-enabled device for any customers who made purchases between September 17, 2014, and December 31, 2024. That includes iPhones, iPads, Apple Watches, MacBooks, HomePods, iPod touches, and Apple TVs, the settlement agreement noted. Each customer can submit claims for up to five devices.

A hearing at which the settlement could be approved is currently scheduled for February 14. If the settlement is certified, Apple will send notices to all affected customers. Through the settlement, customers can not only get monetary relief but also ensure that their private phone calls are permanently deleted.

While the settlement appears to be a victory for Apple users after months of mediation, it potentially lets Apple off the hook pretty cheaply. If the court had certified the class action and Apple users had won, Apple could’ve been fined more than $1.5 billion under the Wiretap Act alone, court filings showed.

But lawyers representing Apple users decided to settle, partly because data privacy law is still a “developing area of law imposing inherent risks that a new decision could shift the legal landscape as to the certifiability of a class, liability, and damages,” the motion to approve the settlement agreement said. It was also possible that the class size could be significantly narrowed through ongoing litigation, if the court determined that Apple users had to prove their calls had been recorded through an incidental Siri activation—potentially reducing recoverable damages for everyone.

“The percentage of those who experienced an unintended Siri activation is not known,” the motion said. “Although it is difficult to estimate what a jury would award, and what claims or class(es) would proceed to trial, the Settlement reflects approximately 10–15 percent of Plaintiffs expected recoverable damages.”

Siri’s unintentional recordings were initially exposed by The Guardian in 2019, plaintiffs’ complaint said. That’s when a whistleblower alleged that “there have been countless instances of recordings featuring private discussions between doctors and patients, business deals, seemingly criminal dealings, sexual encounters and so on. These recordings are accompanied by user data showing location, contact details, and app data.”

[…]

Meanwhile, Google faces a similar lawsuit in the same district from plaintiffs represented by the same firms over its voice assistant, Reuters noted. A win in that suit could affect anyone who purchased “Google’s own smart home speakers, Google Home, Home Mini, and Home Max; smart displays, Google Nest Hub, and Nest Hub Max; and its Pixel smartphones” from approximately May 18, 2016 to today, a December court filing noted. That litigation likely won’t be settled until this fall.

Source: Siri “unintentionally” recorded private convos; Apple agrees to pay $95M – Ars Technica

PayPal Honey extension, meant to find deals, instead hides discounts and reroutes commissions from promoters

PayPal-owned browser extension Honey manipulates affiliate marketing systems and withholds discount information from users, according to an investigation by YouTube channel MegaLag.

The extension — which rose in popularity after promising consumers it would find them the best online deals — replaces existing affiliate cookies with its own during checkout, diverting commission payments away from the content creators who promoted the products and toward PayPal, MegaLag reported in a 23-minute video [YouTube link].
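
The mechanism being described is ordinary last-click affiliate attribution: the merchant pays a commission to whichever party wrote the affiliate cookie most recently. A toy simulation of the behavior the video alleges (names are invented for illustration):

```python
# Last-click affiliate attribution in miniature: the last writer of the
# affiliate cookie collects the commission.
cookie = {}  # stands in for the browser's affiliate cookie

def click_affiliate_link(tag):
    cookie["affiliate"] = tag  # every click overwrites the previous tag

# A creator's review link brings the shopper to the store...
click_affiliate_link("creator-who-reviewed-the-product")

# ...but at checkout the extension pops up, and interacting with it
# routes through the extension's own affiliate link:
click_affiliate_link("honey")

print(cookie["affiliate"])  # "honey" -- the creator's referral is erased
```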

The investigation revealed that Honey, which PayPal acquired in 2019 for $4 billion, allows merchants in its cashback program to control which coupons appear to users, hiding better publicly available discounts.

Source: PayPal’s Honey Accused of Misleading Users, Hiding Discounts

Hundreds of websites to shut down under UK’s ‘chilling’ internet laws

Hundreds of websites will be shut down on the day that Britain’s Online Safety Act comes into effect, in what are believed to be the first casualties of the new internet laws.

Microcosm, a web forum hosting service that runs 300 sites including cycling forums and local community hubs, said that the sites would go offline on March 16, the day that Ofcom starts enforcing the Act.

Its owner said they were unable to comply with the lengthy requirements of the Act, which created a “disproportionately high personal liability”.

The new laws, which were designed to crack down on illegal content and protect children, threaten fines of up to £18m or 10pc of revenue for sites that fail to comply with the laws.

On Monday, Ofcom set out more than 40 measures that it expects online services to follow by March, such as carrying out risk assessments about their sites and naming senior people accountable for ensuring safety.

Microcosm, which has hosted websites including cycling forum LFGSS since 2007, is run as a non-profit funded by donations and largely relies on users to follow community guidelines. Its sites attract a combined 250,000 users.

Dee Kitchen, who operates the service and moderates its 300 sites, said: “What this is, is a chilling effect [on small sites].

“For the really small sites and the charitable sites and the local sports club there’s no carve-out for anything.

“It feels like a huge risk, and it feels like it can be so easily weaponised by angry people who are the subject of moderation.

“It’s too vague and too broad and I don’t want to take that personal risk.”

Announcing the shutdown on the LFGSS forum, they said: “It’s devastating to just … turn it off … but this is what the Act forces a sole individual running so many social websites for a public good to do.”

[…]

Source: Hundreds of websites to shut down under UK’s ‘chilling’ internet laws

Android will let you find unknown Bluetooth trackers instead of just warning you about them

The advent of Bluetooth trackers has made it a lot easier to find your bag or keys when they’re lost, but it has also put inconspicuous tracking tools in the hands of people who might misuse them. Apple and Google have both implemented tracker alerts to let you know if there’s an unknown Bluetooth tracker nearby, and now as part of a new update, Google is letting Android users actually locate those trackers, too.

The feature is one of two new tools Google is adding to Find My Device-compatible trackers. The first, “Temporarily Pause Location” is what you’re supposed to enable when you first receive an unknown tracker notification. It blocks your phone from updating its location with trackers for 24 hours. The second, “Find Nearby,” helps you pinpoint where the tracker is if you can’t see it or easily hear it.

By clicking on an unknown tracker notification you’ll be able to see a map of where the tracker was last spotted moving with you. From there, you can play a sound to see if you can locate it (Google says the owner won’t be notified). If you can’t find it, Find Nearby will connect your phone to the tracker over Bluetooth and display a shape that fills in the closer you get to it.

[Image: The Find Nearby button and interface from Google’s Find My Device network. Source: Google / Engadget]
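
Engadget doesn’t say how the fill level is computed. A plausible sketch: estimate distance from the tracker’s received signal strength (RSSI) using the standard log-distance path-loss model, then map that distance onto a 0-to-1 fill value. The constants below are typical textbook values, not anything from Google’s implementation:

```python
# Turn a BLE RSSI reading into a rough "how full is the shape" value.
def rssi_to_distance(rssi, tx_power=-59, n=2.0):
    """Log-distance path-loss model.
    tx_power: expected RSSI at 1 m; n: path-loss exponent (~2 in free space)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def fill_fraction(rssi, max_range_m=10.0):
    """Closer tracker -> larger fill, capped at the assumed max range."""
    d = min(rssi_to_distance(rssi), max_range_m)
    return 1.0 - d / max_range_m

for rssi in (-90, -75, -60, -50):
    print(f"RSSI {rssi:4d} dBm -> fill {fill_fraction(rssi):.0%}")
```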

The tool is identical to what Google offers for locating trackers and devices you actually own, but importantly, you don’t need to use Find My Device or have your own tracker to benefit. Like Google’s original notifications feature, any device running Android 6.0 and up can deal with unknown Bluetooth trackers safely.

Expanding Find Nearby seems like the final step Google needed to take to tamp down Bluetooth tracker misuse, something Apple already does with its Precision Finding tool for AirTags. The companies released a shared standard for spotting unknown Bluetooth trackers regardless of whether you use Android or iOS in May 2024, following the launch of Google’s Find My Device network in April. Both Google and Apple offered their own methods of dealing with unknown trackers before then to prevent trackers from being used for everything from robbery to stalking.

Source: Android will let you find unknown Bluetooth trackers instead of just warning you about them

300 Artists Back Internet Archive in $621 Million Copyright Attack from Record Labels – over music from the early 1950s and before

[…] 300-plus musicians who have signed an open letter supporting the Internet Archive as it faces a $621 million copyright infringement lawsuit over its efforts to preserve 78 rpm records.

The letter, spearheaded by the digital advocacy group Fight for the Future, states that the signatories “wholeheartedly oppose” the lawsuit, which they suggest benefits “shareholder profits” more than actual artists. It continues: “We don’t believe that the Internet Archive should be destroyed in our name. The biggest players of our industry clearly need better ideas for supporting us, the artists, and in this letter we are offering them.”

[…]

(The full letter, and a list of signatories, is here.)

The lawsuit was brought last year by several major music rights holders, led by Universal Music Group and Sony Music. They claimed the Internet Archive’s Great 78 Project — an unprecedented effort to digitize hundreds of thousands of obsolete shellac discs produced between the 1890s and early 1950s — constituted the “wholesale theft of generations of music,” with “preservation and research” used as a “smokescreen.” (The Archive has denied the claims.)

While more than 400,000 recordings have been digitized and made available to listen to on the Great 78 Project, the lawsuit focuses on about 4,000, most by recognizable legacy acts like Billie Holiday, Frank Sinatra, Elvis Presley, and Ella Fitzgerald. With the maximum penalty for statutory damages at $150,000 per infringing incident, the lawsuit has a potential price tag of over $621 million. A broad enough judgement could end the Internet Archive.

Supporters of the suit — including the estates of many of the legacy artists whose recordings are involved — claim the Archive is doing nothing more than reproducing and distributing copyrighted works, making it a clear-cut case of infringement. The Archive, meanwhile, has always billed itself as a research library (albeit a digital one), and its supporters see the suit (as well as a similar one brought by book publishers) as an attack on preservation efforts, as well as public access to the cultural record.

[…]

“Musicians are struggling, but libraries like the Internet Archive are not our problem! Corporations like Spotify, Apple, Live Nation and Ticketmaster are our problem. If labels really wanted to help musicians, they would be working to raise streaming rates. This lawsuit is just another profit-grab.”

Tommy Cappel, who co-founded the group Beats Antique, says the Archive is “hugely valued in the music community” for its preservation of everything from rare recordings to live sets. “This is important work that deserves to continue for generations to come, and we don’t want to see everything they’ve already done for musicians and our legacy erased,” he added. “Major labels could see all musicians, past and present, as partners — instead of being the bad guy in this dynamic. They should drop their suit. Archives keep us alive.”

Rather than suing the Archive, Fight for the Future’s letter calls on labels, streaming services, ticketing outlets, and venues to align on different goals. At the top of the list is boosting preservation efforts by partnering with “valuable cultural stewards like the Internet Archive.” They also call for greater investment in working musicians through more transparency in ticketing practices, an end to venue merch cuts, and fair streaming compensation.

[…]

Source: Kathleen Hanna, Tegan and Sara, More Back Internet Archive in $621 Million Copyright Fight

How is it possible that something released in the 1950s still generates income for people who had absolutely nothing to do with its creation and put in no effort whatsoever to put out the content?

Why Italy’s Piracy Shield destroys huge internet companies and small businesses with no recourse (unless you are rich) and can take out the entire internet in Italy to… protect against football streaming?!

Walled Culture has been following the sorry saga of Italy’s automated blocking system Piracy Shield for a year now. Blocklists are drawn up by copyright companies, without any review or the possibility of objection, and those blocks must be enforced within 30 minutes. Needless to say, such a ham-fisted and biased approach to copyright infringement is already producing some horrendous blunders.

For example, back in March Walled Culture reported that one of Cloudflare’s Internet addresses had been blocked by Piracy Shield. There were over 40 million domains associated with the blocked address – which shows how this crude approach can cause significant collateral damage to millions of sites not involved in any alleged copyright infringement.
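
The scale of that collateral damage follows directly from how CDNs like Cloudflare work: huge numbers of unrelated domains share a single IP address, so an IP-level block takes all of them down at once. A quick sketch of checking for shared addresses (the domains are placeholders):

```python
# Group domains by resolved IP to show how many share one address --
# and would therefore share the fate of an IP-level block.
import socket
from collections import defaultdict

domains = ["example.com", "example.net", "example.org"]  # placeholders

by_ip = defaultdict(list)
for d in domains:
    try:
        by_ip[socket.gethostbyname(d)].append(d)
    except socket.gaierror:
        pass  # unresolvable domain; skip

for ip, hosted in by_ip.items():
    if len(hosted) > 1:
        print(f"Blocking {ip} would also take down: {hosted}")
```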

Every new system has teething troubles, although not normally on this scale. But any hope that Italy’s national telecoms regulator, Autorità per le Garanzie nelle Comunicazioni (Authority for Communications Guarantees, AGCOM), the body running Piracy Shield, would have learned from the Cloudflare fiasco in order to stop it happening again was dispelled by what took place in October. TorrentFreak explains:

After blocking Cloudflare to prevent IPTV piracy just a few months ago, on Saturday the rightsholders behind Piracy Shield ordered Italy’s ISPs to block Google Drive. The subsequent nationwide blackout, affecting millions of Italians, wasn’t just a hapless IP address blunder. This was the reckless blocking of a Google.com subdomain that many 10-year-olds could identify as being important. Reckless people and internet infrastructure, what could possibly go wrong next?

The following day, there was a public discussion online involving the current and former AGCOM Commissioners, as well as various experts in relevant areas. The current AGCOM Commissioner Capitanio showed no sense of remorse for what happened. According to TorrentFreak’s report on the discussion:

Capitanio’s own focus on blocking to protect football was absolute. There was no concern expressed towards Google or the millions of users affected by the extended blackout, only defense of the Piracy Shield system.

Moreover:

AGCOM’s chief then went on to complain about Google’s refusal to delete Android apps already installed on users’ devices and other measures AGCOM regularly demands, none of which are required by law.

It seems that Capitanio regards even the current, one-sided and extreme Piracy Shield as too weak, and was trying to persuade Google to go even further than the law required – a typical copyright maximalist attitude. But worse was to come. Another participant in the discussion, former member of the Italian parliament, IT expert, and founder of Rialto Venture Capital, Stefano Quintarelli, pointed out a deeply worrying possibility:

the inherent insecurity of the Piracy Shield platform introduces a “huge systemic vulnerability” that eclipses the fight against piracy. Italy now has a system in place designed to dramatically disrupt internet communications and since no system is entirely secure, what happens if a bad actor somehow gains control?

Quintarelli says that if the Piracy Shield platform were to be infiltrated and maliciously exploited, essential services like hospitals, transportation systems, government functions, and critical infrastructure would be exposed to catastrophic blocking.

In other words, by placing the sanctity of copyright above all else, the Piracy Shield system could be turned against any aspect of Italian society with just a few keyboard commands. A malicious actor that managed to gain access to a system that has twice demonstrated a complete lack of even the most basic controls and checks could wreak havoc on computers and networks throughout Italy in a few seconds. Moreover, the damage could easily go well beyond the inconvenience of millions of people being blocked from accessing their files on Google Drive. A skilled intruder could carry out widespread sabotage of vital services and infrastructure that would cost billions of euros to rectify, and could even lead to the loss of lives.

No wonder, then, that an AGCOM board member, Elisa Giomi, has gone public with her concerns about the system. Giomi’s detailed rundown of Piracy Shield’s long-standing problems was posted in Italian on LinkedIn; TorrentFreak has a translation, and summarises the current situation as follows:

Despite a series of failures concerning Italy’s IPTV blocking platform Piracy Shield and the revelation that the ‘free’ platform will cost €2m per year, telecoms regulator AGCOM insists that all is going to plan. After breaking ranks, AGCOM board member Elisa Giomi called for the suspension of Piracy Shield while decrying its toll on public resources. When she was warned for her criticism, coupled with a threat of financial implications, Giomi came out fighting.

It’s clear that the Piracy Shield tragedy is far from over. It’s good to see courageous figures like Giomi joining the chorus of disapproval.

Source: Why Italy’s Piracy Shield risks moving from tiresome digital farce to serious national tragedy – Walled Culture

Italy, copyright – retarded doesn’t even begin to describe it.

Police bust pirate streaming service making €250 million per month: doesn’t this show the TV market is hugely broken?

An international law enforcement operation has dismantled a pirate streaming service that served over 22 million users worldwide and made €250 million ($263M) per month.

Italy’s Postal and Cybersecurity Police Service announced the action, codenamed “Taken Down,” stating they worked with Eurojust, Europol, and many other European countries, making this the largest takedown of its kind in Italy and internationally.

“More than 270 Postal Police officers, in collaboration with foreign law enforcement, carried out 89 searches in 15 Italian regions and 14 additional searches in the United Kingdom, the Netherlands, Sweden, Switzerland, Romania, Croatia, and China, involving 102 individuals,” reads the announcement.

“As part of the investigative framework initiated by the Catania Prosecutor’s Office and the Italian Postal Police, and with international cooperation, the Croatian police executed 11 arrest warrants against suspects.”

“Additionally, three high-ranking administrators of the IT network were identified in England and the Netherlands, along with 80 streaming control panels for IPTV channels managed by suspects throughout Italy,” mentions the police in the same announcement.

The pirated TV and content streaming service was operated by a hierarchical, transnational organization that illegally captured and resold the content of popular content platforms.

The copyrighted content included redistributed IPTV, live broadcasts, and on-demand content from major broadcasters like Sky, Dazn, Mediaset, Amazon Prime, Netflix, Disney+, and Paramount.

The police say that these illegal streams were made accessible through numerous live-streaming websites but have not published any domains.

The financial damages suffered annually from the illegal service are estimated at a massive €10 billion ($10.5B).

These broadcasts were resold to 22 million subscribed members via multiple distribution channels and an extensive seller network.

As a result of operation “Taken Down,” the authorities seized over 2,500 illegal channels and their servers, including nine servers in Romania and Hong Kong.

[…]

Source: Police bust pirate streaming service making €250 million per month

Bad licensing decisions by TV stations and broadcasters have given these streamers a product that people apparently really really want and are willing to pay for.

Don’t shut down the streamers, shut down the system that makes this kind of product impossible to get.

BBC Gives Away huge Sound Effects Library, with readable and sensible terms of use

[Screenshot: top of the BBC Sound Effects website]

Terms for using our content

A few rules to stop you (and us) getting in trouble.

a) Don’t mess with our content
What do we mean by that? This sort of thing:

  • Removing or altering BBC logos and copyright notices from the content (if there are any)
  • Not removing content from your device or systems when we ask you to. This might happen when we take down content either temporarily or permanently, which we can do at any time, without notice.
b) Don’t use our content for harmful or offensive purposes
Here’s a list of things that may harm or offend:

  • Insulting, misleading, discriminating or defaming (damaging people’s reputations)
  • Promoting pornography, tobacco or weapons
  • Putting children at risk
  • Anything illegal. Like using hate speech, inciting terrorism or breaking privacy law
  • Anything that would harm the BBC’s reputation
  • Using our content for political or social campaigning purposes or for fundraising.
c) Don’t make it look like our content costs money

If you put our content on a site that charges for content, you have to say it is free-to-view.

d) Don’t make our content more prominent than non-BBC content

Otherwise it might look like we’re endorsing you. Which we’re not allowed to do.

Also, use our content alongside other stuff (e.g. your own editorial text). You can’t make a service of your own that contains only our content.

Speaking of which…

e) Don’t exaggerate your relationship with the BBC

You can’t say we endorse, promote, supply or approve of you.

And you can’t say you have exclusive access to our content.

f) Don’t associate our content with advertising or sponsorship
That means you can’t:

  • Put any other content between the link to our content and the content itself. So no ads or short videos people have to sit through
  • Put ads next to or over it
  • Put any ads in a web page or app that contain mostly our content
  • Put ads related to their subject alongside our content. So no trainer ads with an image of shoes
  • Add extra content that means you’d earn money from our content.
g) Don’t be misleading about where our content came from

You can’t remove or alter the copyright notice, or imply that someone else made it.

h) Don’t pretend to be the BBC
That includes:

  • Using our brands, trade marks or logos without our permission
  • Using or mentioning our content in press releases and other marketing materials
  • Making money from our content. You can’t charge people to view our images, for example
  • Sharing our content. For example, no uploading to social media sites. Sharing links is OK.

Source: Licensing | BBC Sound Effects

This is how licenses should be written. Well done, BBC.

Epic Allows Internet Archive To Distribute For Free ‘Unreal’ & ‘Unreal Tournament’ Forever

One of the most frustrating aspects in the ongoing conversation around the preservation of older video games, also known as cultural output, is the collision of IP rights with some publishers’ unwillingness either to continue to support and make available these older games or to release those same games into the public domain so that others can do so. It creates this crazy situation in which a company insists on retaining its copyright over a video game that it has effectively disappeared, leaving no good or legitimate way for the public to preserve it. As I’ve argued for some time now, this breaks the copyright contract with the public and should come with repercussions. The whole bargain that is copyright law is that creative works are granted a limited monopoly on their production, with the work eventually arriving in the public domain. If that arrival is not allowed to occur, the bargain is broken, and not by anyone who would supposedly “infringe” on the copyright of that work.

[…]

But it just doesn’t have to be like this. Companies could be willing to give up their iron-fisted control over their IP for these older games they aren’t willing to support or preserve themselves and let others do it for them. And if you need a real world example of that, you need look only at how Epic is working with The Internet Archive to do exactly that.

Epic, now primarily known for Fortnite and the Unreal Engine, has given permission for two of the most significant video games ever made, Unreal and Unreal Tournament, to be freely accessed via the Internet Archive. As spotted by RPS, via ResetEra, the OldUnreal group announced the move on their Discord, along with instructions for how to easily download and play them on modern machines.

Huge kudos to Epic for being cool with this, because while it shouldn’t be unusual to happily let people freely share a three-decade-old game you don’t sell any more, it’s vanishingly rare. And if you remain in any doubt, we just got word back from Epic confirming they’re on board.

“We can confirm that Unreal 1 and Unreal Tournament are available on archive.org,” a spokesperson told us by email, “and people are free to independently link to and play these versions.”

Importantly, OldUnreal and The Internet Archive very much know what they’re doing here. The ZIP file for each game pulls the ISO directly from The Internet Archive and installs it, and there are instructions for getting the games up and running on modern hardware. This is obviously a labor of love from fans dedicated to keeping these two excellent games alive.
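For the curious, the download flow is easy to replicate: Internet Archive items are served from predictable /download/&lt;identifier&gt;/&lt;filename&gt; URLs. Here’s a minimal Python sketch of fetching a disc image that way; the identifier and filename are placeholders, not the actual OldUnreal item.

```python
import requests

# Archive.org serves files from /download/<identifier>/<filename>.
# Both values below are placeholders, not the real OldUnreal item.
ITEM = "example-unreal-item"
FILENAME = "unreal_gold.iso"
url = f"https://archive.org/download/{ITEM}/{FILENAME}"

# Stream the (large) ISO to disk in 1 MiB chunks.
with requests.get(url, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open(FILENAME, "wb") as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)
print(f"Saved {FILENAME}")
```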

[…]

But this is just two games. What would be really nice to see is this become a trend, or, better yet, a program run by The Internet Archive. Don’t want to bother to preserve your old game? No problem, let the IA do it for you!

Source: Epic Allows Internet Archive To Distribute For Free ‘Unreal’ & ‘Unreal Tournament’ Forever | Techdirt

HarperCollins Confirms It Has a Deal to Bleed Authors, Allowing Their Work to Be Used to Train an AI Company’s Model

HarperCollins, one of the biggest publishers in the world, made a deal with an “artificial intelligence technology company” and is giving authors the option to opt in to the agreement or pass, 404 Media can confirm.

[…]

On Friday, author Daniel Kibblesmith, who wrote the children’s book Santa’s Husband and published it with HarperCollins, posted screenshots on Bluesky of an email he received, seemingly from his agent, informing him that the agency was approached by the publisher about the AI deal. “Let me know what you think, positive or negative, and we can handle the rest of this for you,” the screenshotted text in an email to Kibblesmith says. The screenshots show the agent telling Kibblesmith that HarperCollins was offering $2,500 (non-negotiable).

[…]

“You are receiving this memo because we have been informed by HarperCollins that they would like permission to include your book in an overall deal that they are making with a large tech company to use a broad swath of nonfiction books for the purpose of providing content for the training of an AI language learning model,” the screenshots say. “You are likely aware, as we all are, that there are controversies surrounding the use of copyrighted material in the training of AI models. Much of the controversy comes from the fact that many companies seem to be doing so without acknowledging or compensating the original creators. And of course there is concern that these AI models may one day make us all obsolete.”

“It seems like they think they’re cooked, and they’re chasing short money while they can. I disagree,” Kibblesmith told the AV Club. “The fear of robots replacing authors is a false binary. I see it as the beginning of two diverging markets, readers who want to connect with other humans across time and space, or readers who are satisfied with a customized on-demand content pellet fed to them by the big computer so they never have to be challenged again.”

Source: HarperCollins Confirms It Has a Deal to Sell Authors’ Work to AI Company

Now the copyright industry wants to apply deep, automated blocking to the Internet’s core routers

A central theme of Walled Culture the book (free digital versions available) and this blog is that the copyright industry is never satisfied. No matter how long the term of copyright, publishers and recording companies want more. No matter how harsh the punishments for infringement, the copyright intermediaries want them to be even more severe.

Another manifestation of this insatiability is seen in the ever-widening use of Internet site blocking. What began as a highly-targeted one-off in the UK, when a court ordered the Newzbin2 site to be blocked, has become a favoured method of the copyright industry for cutting off access to thousands of sites around the world, including many blocked by mistake. Even more worryingly, the approach has led to blocks being implemented in some key parts of the Internet’s infrastructure that have no involvement with the material that flows through them: they are just a pipe. For example, last year we wrote about courts ordering the content delivery network Cloudflare to block sites. But even that isn’t enough, it seems. A post on TorrentFreak reports on a move to embed site blocking at the very heart of the Internet. This emerges from an interview with an outgoing board member of the Brazilian telecoms regulator Anatel:

In an interview with Tele.Sintese, outgoing Anatel board member Artur Coimbra recalls the lack of internet infrastructure in Brazil as recently as 2010. As head of the National Broadband Plan under the Ministry of Communications, that’s something he personally addressed. For Anatel today, blocking access to pirate websites and preventing unauthorized devices from communicating online is all in a day’s work.

Here’s the key revelation spotted by TorrentFreak:

“The second step, which we still need to evaluate because some companies want it, and others are more hesitant, is to allow Anatel to have access to the core routers to place a direct order on the router,” Coimbra reveals, referencing IPTV [Internet Protocol television] blocking.

“In these cases, these companies do not need to have someone on call to receive the [blocking] order and then implement it.”

Later on, Coimbra clarifies how far along this plan is:

“Participation is voluntary. We are still testing with some companies. So, it will take some time until it actually happens,” Coimbra says. “I can’t say [how long]. Our inspection team is carrying out tests with some operators, I can’t say which ones.”

Even if this is still in the testing phase, and only with “some” companies, it’s a terrible precedent. It means that blocking – and thus censorship – can be applied automatically, possibly without judicial oversight, to some of the most fundamental parts of the Internet’s plumbing. Once that happens, it will spread, just as the original single site block in the UK has spread worldwide. There’s even a hint that this might already be happening. Asked if such blocking is being applied anywhere else, Coimbra replies:

“I don’t know. Maybe in Spain and Portugal, which are more advanced countries in this fight. But I don’t have that information,” Coimbra responds, randomly naming two countries with which Brazil has consulted extensively on blocking matters.

Although it’s not clear from that whether Spain and Portugal are indeed taking this route, the fact that Coimbra suggests that they might be is deeply troubling. And even if they aren’t, we can be sure that the copyright industry will keep demanding Internet blocks and censorship at the deepest level until they get them.
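To make concrete why a “direct order on the router” is so alarming, here’s a purely illustrative Python sketch of that kind of automation, written with the netmiko library. Everything in it is invented (the platform, hostname, credentials, and a blocklist using reserved TEST-NET addresses); it is not Anatel’s system, just a picture of how a regulator-supplied list could become null routes on a carrier’s core router with no human or court in the loop.

```python
from netmiko import ConnectHandler

# Illustrative only: hypothetical regulator-supplied blocklist (TEST-NET
# addresses, not real sites). This is not Anatel's actual system.
BLOCKLIST = ["203.0.113.10", "198.51.100.25"]

# Hypothetical core router; platform and credentials are placeholders.
router = ConnectHandler(
    device_type="cisco_ios",
    host="core1.example.net",
    username="automation",
    password="not-a-real-secret",
)

# Null-route each address: all traffic to it is silently dropped,
# network-wide, the moment the config is pushed -- no review step.
commands = [f"ip route {ip} 255.255.255.255 Null0" for ip in BLOCKLIST]
print(router.send_config_set(commands))
router.disconnect()
```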

Source: Now the copyright industry wants to apply deep, automated blocking to the Internet’s core routers – Walled Culture

Judge: Just Because AI Trains On Your Publication, Doesn’t Mean It Infringes On Your Copyright. Another case thrown out.

I get that a lot of people don’t like the big AI companies and how they scrape the web. But these copyright lawsuits being filed against them are absolute garbage. And you want that to be the case, because if it goes the other way, it will do real damage to the open web by further entrenching the largest companies. If you don’t like the AI companies, find another path, because copyright is not the answer.

So far, we’ve seen that these cases aren’t doing all that well, though many are still ongoing.

Last week, a judge tossed out one of the early ones against OpenAI, brought by Raw Story and Alternet.

Part of the problem is that these lawsuits assume, incorrectly, that these AI services really are, as some people falsely call them, “plagiarism machines.” The assumption is that they’re just copying everything and then handing out snippets of it.

But that’s not how it works. It is much more akin to reading all these works and then being able to make suggestions based on an understanding of how similar things kinda look, though from memory, not from having access to the originals.
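A toy example makes that distinction concrete. The bigram “model” below is absurdly simpler than anything OpenAI builds, but it illustrates the same point the judge relies on: training reduces text to aggregate statistics, and generation draws on those statistics from memory rather than looking up stored copies of the originals.

```python
from collections import Counter, defaultdict

# Toy illustration: "training" reduces a corpus to aggregate statistics.
# What gets stored is co-occurrence counts, not a copy of the text.
corpus = "the cat sat on the mat the dog sat on the rug".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def suggest(word):
    """Return the statistically most likely next word seen in training."""
    following = counts.get(word)
    return following.most_common(1)[0][0] if following else None

print(suggest("sat"))  # 'on' -- a learned statistic, not a document lookup
```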

Some of this case focused on whether or not OpenAI removed copyright management information (CMI) from the works it was trained on. This always felt like an extreme long shot, and the court finds Raw Story’s arguments wholly unconvincing, in part because they don’t show any work that OpenAI distributed without its copyright management info.

For one thing, Plaintiffs are wrong that Section 1202 “grant[s] the copyright owner the sole prerogative to decide how future iterations of the work may differ from the version the owner published.” Other provisions of the Copyright Act afford such protections, see 17 U.S.C. § 106, but not Section 1202. Section 1202 protects copyright owners from specified interferences with the integrity of a work’s CMI. In other words, Defendants may, absent permission, reproduce or even create derivatives of Plaintiffs’ works – without incurring liability under Section 1202 – as long as Defendants keep Plaintiffs’ CMI intact. Indeed, the legislative history of the DMCA indicates that the Act’s purpose was not to guard against property-based injury. Rather, it was to “ensure the integrity of the electronic marketplace by preventing fraud and misinformation,” and to bring the United States into compliance with its obligations to do so under the World Intellectual Property Organization (WIPO) Copyright Treaty, art. 12(1) (“Obligations concerning Rights Management Information”) and WIPO Performances and Phonograms Treaty….

Moreover, I am not convinced that the mere removal of identifying information from a copyrighted work-absent dissemination-has any historical or common-law analogue.
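To make “CMI” concrete: the EXIF Copyright field baked into an image file is classic copyright management information. Under the court’s reading, you can reproduce or derive from a work without Section 1202 liability so long as that information travels with it. A small sketch using Pillow (the file names and notice are invented):

```python
from PIL import Image

# EXIF tag 0x8298 is the standard Copyright field -- a classic example of
# copyright management information (CMI). File names here are placeholders.
COPYRIGHT_TAG = 0x8298

img = Image.open("article_photo.jpg")
exif = img.getexif()
print("CMI found:", exif.get(COPYRIGHT_TAG))  # e.g. "(c) 2024 Example News"

# Make a derivative (a thumbnail) but carry the original CMI forward --
# under the court's reading, keeping this intact avoids Section 1202 issues.
thumb = img.copy()
thumb.thumbnail((320, 320))
thumb.save("article_photo_thumb.jpg", exif=exif)
```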

Then there’s the bigger point, which is that the judge, Colleen McMahon, has a better understanding of how ChatGPT works than the plaintiffs and notes that just because ChatGPT was trained on pretty much the entire internet, that doesn’t mean it’s going to infringe on Raw Story’s copyright:

Plaintiffs allege that ChatGPT has been trained on “a scrape of most of the internet,” Compl. ¶ 29, which includes massive amounts of information from innumerable sources on almost any given subject. Plaintiffs have nowhere alleged that the information in their articles is copyrighted, nor could they do so. When a user inputs a question into ChatGPT, ChatGPT synthesizes the relevant information in its repository into an answer. Given the quantity of information contained in the repository, the likelihood that ChatGPT would output plagiarized content from one of Plaintiffs’ articles seems remote.

Finally, the judge basically says, “Look, I get it, you’re upset that ChatGPT read your stuff, but you don’t have an actual legal claim here.”

Let us be clear about what is really at stake here. The alleged injury for which Plaintiffs truly seek redress is not the exclusion of CMI from Defendants’ training sets, but rather Defendants’ use of Plaintiffs’ articles to develop ChatGPT without compensation to Plaintiffs. See Compl. ¶ 57 (“The OpenAI Defendants have acknowledged that use of copyright-protected works to train ChatGPT requires a license to that content, and in some instances, have entered licensing agreements with large copyright owners … They are also in licensing talks with other copyright owners in the news industry, but have offered no compensation to Plaintiffs.”). Whether or not that type of injury satisfies the injury-in-fact requirement, it is not the type of harm that has been “elevated” by Section 1202(b)(i) of the DMCA. See Spokeo, 578 U.S. at 341 (Congress may “elevate to the status of legally cognizable injuries, de facto injuries that were previously inadequate in law.”). Whether there is another statute or legal theory that does elevate this type of harm remains to be seen. But that question is not before the Court today.

While the judge dismisses the case, she leaves the door open for the plaintiffs to try again, though it would appear she is skeptical they could do so with any reasonable chance of success:

In the event of dismissal Plaintiffs seek leave to file an amended complaint. I cannot ascertain whether amendment would be futile without seeing a proposed amended pleading. I am skeptical about Plaintiffs’ ability to allege a cognizable injury but, at least as to injunctive relief, I am prepared to consider an amended pleading.

I totally get why publishers are annoyed and why they keep suing. But copyright is the wrong tool for the job. Hopefully, more courts will make this clear and we can get past all of these lawsuits.

Source: Judge: Just Because AI Trains On Your Publication, Doesn’t Mean It Infringes On Your Copyright | Techdirt

The Open Source Project DeFlock Is Mapping License Plate Surveillance Cameras All Over the World

[…] Flock is one of the largest vendors of automated license plate readers (ALPRs) in the country. The company markets itself as having the goal to fully “eliminate crime” with the use of ALPRs and other connected surveillance cameras, a target experts say is impossible.

In Huntsville, Freeman noticed that license plate reader cameras were positioned in a circle at major intersections, forming a perimeter that could track any car going into or out of the city’s downtown. He started to look for cameras all over Huntsville and the surrounding areas, and soon found that Flock was not the only game in town: he found cameras owned by Motorola, and others owned by Avigilon, a Motorola subsidiary. Flock and automated license plate reader cameras owned by other companies are now in thousands of neighborhoods around the country. Many of these systems talk to each other and plug into other surveillance systems, making it possible to track people all over the country.

[…]

And so he made a map, and called it DeFlock. DeFlock is built on OpenStreetMap, the open source, collaboratively edited map of the world. He began putting up signs for DeFlock on the posts holding Huntsville’s ALPR cameras, and made a post about the project to the Huntsville subreddit, which got good attention from people who lived there.
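Building on OpenStreetMap also means the data is queryable with standard OSM tooling. The sketch below (not DeFlock’s actual code) asks OSM’s public Overpass API for camera nodes tagged as ALPRs inside a rough bounding box around Huntsville; the tag values follow common OSM conventions for surveillance mapping, but treat them as assumptions.

```python
import requests

# Query OpenStreetMap's Overpass API for surveillance nodes tagged as ALPRs.
# Tag names follow common OSM conventions; the bounding box roughly covers
# Huntsville, AL. Illustrative sketch, not DeFlock's actual implementation.
OVERPASS_URL = "https://overpass-api.de/api/interpreter"
query = """
[out:json][timeout:25];
node["man_made"="surveillance"]["surveillance:type"="ALPR"]
  (34.60, -86.80, 34.85, -86.40);
out body;
"""

resp = requests.post(OVERPASS_URL, data={"data": query}, timeout=30)
resp.raise_for_status()
for node in resp.json().get("elements", []):
    tags = node.get("tags", {})
    print(node["lat"], node["lon"], tags.get("direction", "unknown"))
```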

[…]

When I first talked to Freeman, DeFlock had a few dozen cameras mapped in Huntsville and a handful mapped in Southern California and in the Seattle suburbs. A week later, as I write this, DeFlock has crowdsourced the locations of thousands of cameras in dozens of cities across the United States and the world.

“It still just scratches the surface,” Freeman said. “I added another page to the site that tracks cities and counties who have transparency reports on Flock’s site, and many of those don’t have any reported ALPRs though, so it’ll help people focus on where to look for them.”

[…]

He said so far more than 1,700 cameras have been reported in the United States and more than 5,600 have been reported around the world. He has also begun scraping parts of Flock’s website to give people a better idea of where to look to map them. For example, Flock says that Colton, California, a city with just over 50,000 people outside of San Bernardino, has 677 cameras.

[Image: A ring of Flock cameras in Huntsville’s downtown, pointing outward.]

People who submit cameras to DeFlock have the ability to note the direction that they are pointing in, which can help people understand how these cameras are being positioned and the strategies that companies and police departments are using when deploying them.

[…]

Freeman also said he eventually wants to find a way to offer navigation directions that allow people to avoid known ALPR cameras. The fact that it is impossible to drive in some cities without passing ALPR cameras that track and catalog your car’s movements is one of the core arguments in a Fourth Amendment challenge to Flock’s existence in Norfolk, Virginia; this project will likely show how infeasible traveling without being tracked actually is in America. Knowing where the cameras are is the first step toward resisting them.
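Camera-avoiding directions are, at bottom, just weighted shortest-path routing. Here’s a toy sketch of the idea (the road graph, camera placements, and penalty are all made up): penalize any segment that passes a mapped ALPR, then let an ordinary graph search pick the quieter route.

```python
import networkx as nx

# Toy road network: edge lengths in km, plus a flag for mapped ALPR coverage.
# All names, distances and camera placements are invented for illustration.
G = nx.Graph()
G.add_edge("home", "main_st", length=1.0, has_alpr=True)
G.add_edge("main_st", "downtown", length=1.0, has_alpr=True)
G.add_edge("home", "side_st", length=1.6, has_alpr=False)
G.add_edge("side_st", "downtown", length=1.4, has_alpr=False)

PENALTY = 10.0  # arbitrary extra cost per camera-covered segment

def cost(u, v, data):
    """Edge weight: real distance plus a penalty for passing an ALPR."""
    return data["length"] + (PENALTY if data["has_alpr"] else 0.0)

print(nx.shortest_path(G, "home", "downtown", weight=cost))
# ['home', 'side_st', 'downtown'] -- slightly longer, but camera-free
```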

Source: The Open Source Project DeFlock Is Mapping License Plate Surveillance Cameras All Over the World

Singapore to increase road capacity by GPS tracking all vehicles. Because location data is not sensitive and will never be hacked *cough*

Singapore’s Land Transport Authority (LTA) estimated last week that by tracking all vehicles with GPS it will be able to increase road capacity by 20,000 vehicles over the next few years.

The densely populated island state is moving from what it calls Electronic Road Pricing (ERP) 1.0 to ERP 2.0. The first version used gantries – or automatic tolls – to charge drivers a fee through an in-car device when they used specific roadways during certain hours.

ERP 2.0 instead tracks the vehicle through GPS, which can tell where it is at all times while it is operating.

“ERP 2.0 will provide more comprehensive aggregated traffic information and will be able to operate without physical gantries. We will be able to introduce new ‘virtual gantries,’ which allow for more flexible and responsive congestion management,” explained the LTA.
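A “virtual gantry” is conceptually just a geofence plus a clock: if a GPS fix lands inside a priced zone during a priced window, a charge is levied, no physical arch required. A minimal sketch of the idea (the zone, peak window and fee are invented, not LTA figures):

```python
from shapely.geometry import Point, Polygon

# Hypothetical priced zone (lon/lat corners roughly in central Singapore)
# with an invented peak window and fee -- not actual LTA parameters.
CBD_ZONE = Polygon([
    (103.845, 1.276), (103.855, 1.276),
    (103.855, 1.288), (103.845, 1.288),
])
PEAK_FEE_SGD = 3.0

def charge_for_fix(lon: float, lat: float, hour: int) -> float:
    """Return the toll due for a single GPS fix, if any."""
    in_zone = CBD_ZONE.contains(Point(lon, lat))
    in_peak = 8 <= hour < 9  # hypothetical morning peak
    return PEAK_FEE_SGD if (in_zone and in_peak) else 0.0

print(charge_for_fix(103.850, 1.282, 8))  # 3.0 -- inside the zone at peak
```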

But the island’s government doesn’t just control inflow into urban areas through toll-like charging – it also aggressively controls the total number of cars operating within its borders.

Singapore requires vehicle owners to bid for a set number of Certificates of Entitlement (COEs), costly operating permits valid for only ten years. Depending on that year’s COE price, this adds around SG$100,000 ($75,500) per decade on top of a car’s usual price (roughly SG$10,000 a year just for the right to drive). The high total cost disincentivizes mass car ownership, which helps the government manage traffic and emissions.

[…]

Source: Singapore to increase road capacity by GPS tracking vehicles • The Register

Washington Post and LA Times Suppressed by Fascist Trump Through Billionaire Cowardice

Newspaper presidential endorsements may not actually matter that much, but billionaire media owners blocking editorial teams from publishing their endorsements out of concern over potential retaliation from a future Donald Trump presidency should matter a lot.

If people were legitimately worried about the “weaponization of government” and the idea that companies might silence speech over threats from the White House, what has happened over the past few days should raise alarm bells. But somehow I doubt we’ll be seeing the folks who were screaming bloody murder over the nothingburger that was the Murthy lawsuit saying a word of concern about billionaire media owners stifling the speech of their editorial boards to curry favor with Donald Trump.

In 2017, the Washington Post changed its official slogan to “Democracy Dies in Darkness.”

The phrase was apparently a favorite of Bob Woodward, who was one of the main reporters who broke the Watergate story decades ago. Lots of people criticized the slogan at the time (and have continued to do so since then), but no more so than today, as Jeff Bezos apparently stepped in to block the newspaper from endorsing Kamala Harris for President.

An endorsement of Harris had been drafted by Post editorial page staffers but had yet to be published, according to two people who were briefed on the sequence of events and who spoke on the condition of anonymity because they were not authorized to speak publicly. The decision to no longer publish presidential endorsements was made by The Post’s owner, Amazon founder Jeff Bezos, according to the same two people.

This comes just days after a similar situation with the LA Times, whose billionaire owner, Patrick Soon-Shiong, similarly blocked the editorial board from publishing its planned endorsement of Harris. Soon-Shiong tried to “clarify” by claiming he had asked the team to instead publish something looking at the pros and cons of each candidate. However, as members of the editorial board noted in response, that’s what you’d expect the newsroom to do. The editorial board is literally supposed to express its opinion.

In the wake of that decision, at least three members of the LA Times editorial board have resigned. Mariel Garza quit almost immediately, and Robert Greene and Karin Klein followed a day later. As of this writing, it appears at least one person, editor-at-large Robert Kagan, has resigned from the Washington Post.

Or, as the Missing The Point account on Bluesky noted, perhaps the Washington Post is changing its slogan to “Hello Darkness My Old Friend.”

Marty Baron, who had been the Executive Editor of the Washington Post when it chose “Democracy Dies in Darkness” as a slogan, called out Bezos’ decision as “cowardice” and warned that Trump would see this as a victory for his intimidation techniques, and that it would embolden him.

The thing is, for all the talk over the past decade or so about “free speech” and “the weaponization of government,” this sure looks like these two billionaires suppressing speech from their organizations over fear of how Trump will react, should he be elected.

During his last term, Donald Trump famously targeted Amazon in retaliation for coverage he didn’t like from the Washington Post. His anger at WaPo coverage caused him to ask the Postmaster General to double Amazon’s postage rates. Trump also told his Secretary of Defense James Mattis to “screw Amazon” and to kill a $10 billion cloud computing deal the Pentagon had lined up.

For all the (misleading) talk about the Biden administration putting pressure on tech companies, what Trump did there seemed like legitimate First Amendment violations. He punished Amazon for speech he didn’t like. It’s funny how all the “weaponization of the government” people never made a peep about any of that.

As for Soon-Shiong, it’s been said that he angled for a cabinet-level “health care czar” position in the last Trump administration, so perhaps he’s hoping to increase his chances this time around.

In both cases, though, this sure looks like Trump’s past retaliations and direct promises of future retaliation against all who have challenged him are having a very clear censorial impact. In the last few months Trump has been pretty explicit that, should he win, he intends to punish media properties that reported on him in ways he dislikes. These are all reasons why anyone who believes in free speech should be speaking out about the danger Donald Trump poses to our most cherished First Amendment rights.

Especially those in the media.

Bezos and Soon-Shiong are acting like cowards. Rather than standing up and doing what’s right, they’re pre-caving before the election has even happened. It’s weak and pathetic, and Trump will take it (accurately) to mean that he can keep walking all over them and keep getting the media to pull its punches by threatening retaliation.

If democracy dies in darkness, it’s because Bezos and Soon-Shiong helped turn off the light they were carrying.

Source: Democracy Dies In Darkness… Helped Along By Billionaire Cowardice | Techdirt