EU Commission’s nameless experts behind its “spy on all EU citizens” *cough* “child sexual abuse” law

The EU Ombudsman has found maladministration in the European Commission’s refusal to provide the list of experts it worked with in drafting the regulation to detect and remove online child sexual abuse material, a list whose existence the Commission initially denied.

Last December, the Irish Council for Civil Liberties (ICCL) filed complaints with the European Ombudsman against the European Commission for refusing to provide the list of external experts involved in drafting the regulation to detect and remove online child sexual abuse material (CSAM).

Consequently, the Ombudsman concluded that “the Commission’s failure to identify the list of experts as falling within the scope of the complainant’s public access request constitutes maladministration”.

The EU watchdog also slammed the Commission for not respecting the deadlines for handling access to document requests, delays that have become somewhat systematic.

The Commission told the Ombudsman inquiry team during a meeting that the requests by the ICCL “seemed to be requests to justify a political decision rather than requests for public access to a specific set of documents”.

The request sought access to the list of experts the Commission consulted and who also participated in meetings with the EU Internet Forum in 2020, according to an impact assessment report dated 11 May 2022.

The main political groups of the EU Parliament reached an agreement on the draft law to prevent the dissemination of online child sexual abuse material (CSAM) on Tuesday (24 October).

The list of experts was of public interest because independent experts have stated on several occasions that detecting CSAM in private communications without violating encryption would be impossible.

The Commission, however, suggested otherwise in its previous texts, which has sparked controversy ever since the file was introduced last year.

During the meetings, “academics, experts and companies were invited to share their perspectives on the matter as well as any documents that could be valuable for the discussion.”

Based on these discussions, and both oral and written inputs, an “outcome document” was produced, the Commission said.

According to a report about the meeting between the Commission and the Ombudsman, this “was the only document that was produced in relation to these workshops.”

The phantom list

While a list of participants does exist, it was not disclosed “for data protection and public security reasons, given the nature of the issues discussed”, the Commission said, according to the EU Ombudsman.

Besides security reasons, participants were also concerned about their public image, the Commission told the EU Ombudsman, adding that “disclosure could be exploited by malicious actors to circumvent detection mechanisms and moderation efforts by companies”.

Moreover, “revealing some of the strategies and tactics of companies, or specific technical approaches also carries a risk of informing offenders on ways to avoid detection”.

However, the existence of this list was at first denied by the Commission.

Kris Shrishak, senior fellow at the Irish Council for Civil Liberties, told Euractiv that the Commission had told him no such list existed. Only later did the EU Ombudsman inform him that this was not correct, as its inquiry had found a list of experts.

The only reason the ICCL learned that a list exists at all is the Ombudsman’s inquiry, Shrishak emphasised.

Previously, the Commission said there were email exchanges about the meetings, which contained only the links to the online meetings.

“Following the meeting with the Ombudsman inquiry team, the Commission tried to retrieve these emails” but since they were more than two years old at the time, “they had already been deleted in line with the Commission’s retention policy” and were “not kept on file”.

Euractiv reached out to the European Commission for a comment but did not get a response by the time of publication.

Source: EU Commission’s nameless experts behind its child sexual abuse law – EURACTIV.com

This law is an absolute travesty – it talks about the poor children (how can we not protect them!) whilst being a wholesale surveillance law pushed through by nameless faces and unelected officials.

See also: EU Tries to Implement Client-Side Scanning, death to encryption By Personalised Targeting of EU Residents With Misleading Ads

They basically want to spy on all electronic signals. All of them. Without a judge.

Researchers posed as foreign actors, and data brokers sold them information on military servicemembers anyway – for pennies

[…]

Researchers at Duke University released a study on Monday tracking what measures data brokers have in place to prevent unidentified or potentially malign actors from buying personal data on members of the military. As it turns out, the answer is often few to none — even when the purchaser is actively posing as a foreign agent.

A 2021 Duke study by the same lead researcher revealed that data brokers advertised that they had access to — and were more than happy to sell — information on US military personnel. In this more recent study, researchers used wiped computers, VPNs, burner phones bought with cash and other means of identity obfuscation to go undercover. They scraped the websites of data brokers to see which were likely to have available data on servicemembers. Then they attempted to make those purchases, posing as two entities: datamarketresearch.org and dataanalytics.asia. With little or no vetting, several of the brokers transferred the requested data not only to the presumptively Chicago-based datamarketresearch, but also to the server of the .asia domain, which was located in Singapore. The records cost only between 12 and 32 cents apiece.

The sensitive information included health records and financial information. Location data was also available, although the team at Duke decided not to purchase that — though it’s not clear if this was for financial or ethical reasons. “Access to this data could be used by foreign and malicious actors to target active-duty military personnel, veterans, and their families and acquaintances for profiling, blackmail, targeting with information campaigns, and more,” the report cautions. At an individual level, this could also include identity theft or fraud.

This gaping hole in our national security apparatus is due in large part to the absence of comprehensive federal regulations governing either individual data privacy or many of the business practices engaged in by data brokers. Senators Elizabeth Warren, Bill Cassidy and Marco Rubio introduced the Protecting Military Service Members’ Data Act in 2022 to give the Federal Trade Commission the power to prevent data brokers from selling military personnel information to adversarial nations. They reintroduced the bill in March 2023 after it stalled out. Despite bipartisan support, it still hasn’t made it past the introduction phase.

Source: Researchers posed as foreign actors, and data brokers sold them information on military servicemembers anyway

YouTube cares less for your privacy than its revenues

YouTube wants its pound of flesh. Disable your ad blocker or pay for Premium, warns a new message being shown to an unsuspecting test audience, with the barely hidden subtext of “you freeloading scum.” Trouble is, its ad blocker detection mechanism doesn’t exactly comply with EU law, say privacy activists. Ask for user permission or taste regulatory boot. All good clean fun.

Only it isn’t. It’s profoundly depressing. The battleground between ad tech and ad blockers has been around so long that in the internet’s time span it’s practically medieval. In 2010, Ars Technica started blocking ad blockers; in under a day, the ad blocker blocker was itself blocked by the ad blockers. The editor then wrote an impassioned plea saying that ad blockers were killing online journalism. As the editor ruefully noted, people weren’t using blockers because they didn’t care about the good sites; it was because so much else of the internet was filled with ad tech horrors.

Nothing much has changed. If your search hit ends up with an “ERROR: Ad blocker detected. Disable it to access this content” then it’s browser back button and next hit down, all day, every day. It’s like running an app that asks you to disable your firewall; that app is never run again. Please disable my ad blocker? Sure, if you stop pushing turds through my digital letterbox.

The reason YouTube has been dabbling with its own “Unblock Or Eff Off” strategy instead of bringing down the universal banhammer is that it knows how much it will upset the balance of the ecosystem. That it’s had to pry deep enough into viewers’ browsers to trigger privacy laws shows just how delicate that balance is. It’s unstable because it’s built on bad ideas.

In that ecosystem of advertisers, content consumers, ad networks, and content distributors, ad blockers aren’t the disease, they’re the symptom. Trying to neutralize a symptom alone leaves the disease thriving while the host just gets sicker. In this case, the disease isn’t cynical freeloading by users, it’s the basic dishonesty of online advertising. It promises things to advertisers that it cannot deliver, while blocking better ways of working. It promises revenue to content providers while keeping them teetering on the brink of unviability, while maximizing its own returns. Google has revenues in the hundreds of billions of dollars, while publishers struggle to survive, and users have to wear a metaphorical hazmat suit to stay sane. None of this is healthy.

Content providers have to be paid. We get that. Advertising is a valid way of doing that. We get that too. Advertisers need to reach audiences. Of course they do. But like this? YouTube needs its free, ad-supported model, or it would just force Premium on everyone, but forcing people to watch adverts will not force them to pony up for what’s being advertised.

The pre-internet days saw advertising directly support publishers who knew how to attract the right audiences who would respond well to the right adverts. Buy a computer magazine and it would be full of adverts for computer stuff – much of which you’d actually want to look at. The publisher didn’t demand you have to see ads for butter or cars or some dodgy crypto. That model has gone away, which is why we need ad blockers.

YouTube’s business model is a microcosm of the bigger ad tech world, where it basically needs to spam millions to generate enough results for its advertisers. It cannot stomach ad blockers, but it can’t neutralize them technically or legally. So it should treat them like the cognitive firewalls they are. If YouTube put control of what adverts appeared, and how, back into the hands of its content providers and viewers, perhaps we’d tell our ad blockers to leave YouTube alone – punch that hole through the firewall for the service you trust. We’d get to keep blocking things that needed to be blocked, content makers could build their revenues by making better content, and advertisers would get a much better return on their ad spend.

Of course, this wouldn’t provide the revenues to YouTube or the ad tech business obtainable by being spammy counterfeits of responsible companies with a lock on the market. That a harmful business model makes a shipload of money does not make it good, in fact quite the reverse.

So, to YouTube we say: you appear to be using a bad lock-in. Disable it, or pay the price.

Source: YouTube cares less for your privacy than its revenues • The Register

EU Parliament Fails To Understand That The Right To Read Is The Right To Train. Understands the copyright lobby has money though.

Walled Culture recently wrote about an unrealistic French legislative proposal that would require the listing of all the authors of material used for training generative AI systems. Unfortunately, the European Parliament has inserted a similarly impossible idea in its text for the upcoming Artificial Intelligence (AI) Act. The DisCo blog explains that MEPs added new copyright requirements to the Commission’s original proposal:

These requirements would oblige AI developers to disclose a summary of all copyrighted material used to train their AI systems. Burdensome and impractical are the right words to describe the proposed rules.

In some cases it would basically come down to providing a summary of half the internet.

Leaving aside the impossibly large volume of material that might need to be summarized, another issue is that it is by no means clear when something is under copyright, making compliance even more infeasible. In any case, as the DisCo post rightly points out, the EU Copyright Directive already provides a legal framework that addresses the issue of training AI systems:

The existing European copyright rules are very simple: developers can copy and analyse vast quantities of data from the internet, as long as the data is publicly available and rights holders do not object to this kind of use. So, rights holders already have the power to decide whether AI developers can use their content or not.

This is a classic case of the copyright industry always wanting more, no matter how much it gets. When the EU Copyright Directive was under discussion, many argued that an EU-wide copyright exception for text and data mining (TDM) and AI in the form of machine learning would be hugely beneficial for the economy and society. But as usual, the copyright world insisted on its right to double dip, and to be paid again if copyright materials were used for mining or machine learning, even if a license had already been obtained to access the material.

As I wrote in a column five years ago, that’s ridiculous, because the right to read is the right to mine. Updated for our AI world, that can be rephrased as “the right to read is the right to train”. By failing to recognize that, the European Parliament has sabotaged its own AI Act. Its amendment to the text will make it far harder for AI companies to thrive in the EU, which will inevitably encourage them to set up shop elsewhere.

If the final text of the AI Act still has this requirement to provide a summary of all copyright material that is used for training, I predict that the EU will become a backwater for AI. That would be a huge loss for the region, because generative AI is widely expected to be one of the most dynamic and important new tech sectors. If that happens, backward-looking copyright dogma will once again have throttled a promising digital future, just as it has done so often in the recent past.

Source: EU Parliament Fails To Understand That The Right To Read Is The Right To Train | Techdirt

EU Tries to Implement Client-Side Scanning, death to encryption By Personalised Targeting of EU Residents With Misleading Ads

The EU Commission has been pushing client-side scanning for well over a year. This new intrusion into private communications has been pitched as perhaps the only way to prevent the sharing of child sexual abuse material (CSAM).

Mandates proposed by the EU government would have forced communication services to engage in client-side scanning of content. This would apply to every communication or service provider. But it would only negatively affect providers incapable of snooping on private communications because their services are encrypted.

Encryption — especially end-to-end encryption — protects the privacy and security of users. The EU’s pitch said protecting the children mattered more than anything, even if it meant sacrificing the privacy and security of millions of EU residents.

Encrypted services would have been unable to comply with the mandate without stripping the client-side end from their end-to-end encryption. So, while it may have been referred to with the legislative euphemism “chat control” by EU lawmakers, the reality of the situation was that this bill — if passed intact — basically would have outlawed E2EE.

Fortunately, there was a lot of pushback. Some of it came from service providers who informed the EU they would no longer offer their services in EU member countries if they were required to undermine the security they provided for their users.

The more unexpected resistance came from EU member countries who similarly saw the gaping security hole this law would create and wanted nothing to do with it. On top of that, the EU government’s own lawyers told the Commission passing this law would mean violating other laws passed by this same governing body.

This pushback was greeted by increasingly nonsensical assertions by the bill’s supporters. In op-eds and public statements, backers insisted everyone else was wrong and/or didn’t care enough about the well-being of children to subject every user of any communication service to additional government surveillance.

That’s what happened on the front end of this push to create a client-side scanning mandate. On the back end, however, the EU government was trying to dupe people into supporting their own surveillance with misleading ads that targeted people most likely to believe any sacrifice of their own was worth making when children were on the (proverbial) line.

That’s the unsettling news being delivered to us by Vas Panagiotopoulos for Wired. A security researcher based in Amsterdam took a long look at apparently misleading ads that began appearing on Twitter as the EU government amped up its push to outlaw encryption.

Danny Mekić was digging into the EU’s “chat control” law when he began seeing disturbing ads on Twitter. These ads featured young women being (apparently) menaced by sinister men, backed by a similarly dark background and soundtrack. The ads displayed some supposed “facts” about the sexual abuse of children and ended with the notice that the ads had been paid for by the EU Commission.

The ads also cited survey results that supposedly said most European citizens supported client-side scanning of content and communications, apparently willing to sacrifice their own privacy and security for the common good.

But Mekić dug deeper and discovered the cited survey wasn’t on the level.

Following closer inspection, he discovered that these findings appeared biased and otherwise flawed. The survey results were gathered by misleading the participants, he claims, which in turn may have misled the recipients of the ads; the conclusion that EU citizens were fine with greater surveillance couldn’t be drawn from the survey, and the findings clashed with those of independent polls.

This discovery prompted Mekić to dig even deeper. What Mekić found was that the ads were very tightly targeted — so tightly targeted, in fact, that they could not have been deployed in this manner without violating European laws aimed at preventing exactly this sort of targeting, i.e. by using “sensitive data” like religious beliefs and political affiliations.

The ads were extremely targeted, meant to find people most likely to be swayed towards the EU Commission’s side, either because the targets never appeared to distrust their respective governments or because their governments had yet to tell the EU Commission to drop its anti-encryption proposal.

Mekić found that the ads were meant to be seen by select targets, such as top ministry officials, while they were concealed from people interested in Julian Assange, Brexit, EU corruption, Eurosceptic politicians (Marine Le Pen, Nigel Farage, Viktor Orban, Giorgia Meloni), the German right-wing populist party AfD, and “anti-Christians.”

Mekić then found out that the ads, which have garnered at least 4 million views, were only displayed in seven EU countries: the Netherlands, Sweden, Belgium, Finland, Slovenia, Portugal, and the Czech Republic.

A document leaked earlier this year exposed which EU members were in favor of client-side scanning and its attendant encryption backdoors, as well as those who thought the proposed mandate was completely untenable.

The countries targeted by the EU Commission ad campaign are, for the most part, supportive of/indifferent to broken encryption, client-side scanning, and expanded surveillance powers. Slovenia (along with Spain, Cyprus, Lithuania, Croatia, and Hungary) was firmly in favor of bringing an end to end-to-end encryption.

[…]

While we’re accustomed to politicians airing misleading ads during election runs, this is something different. This is the representative government of several nations deliberately targeting countries and residents it apparently thinks might be receptive to its skewed version of the facts, which comes in the form of the presentation of misleading survey results against a backdrop of heavily-implied menace. And that’s on top of seeming violations of privacy laws regarding targeted ads that this same government body created and ratified.

It’s a tacit admission EU proposal backers think they can’t win this thing on its merits. And they can’t. The EU Commission has finally ditched its anti-encryption mandates after months of backlash. For the moment, E2EE survives in Europe. But it’s definitely still under fire. The next exploitable tragedy will bring with it calls to reinstate this part of the “chat control” proposal. It will never go away because far too many governments believe their citizens are obligated to let these governments shoulder-surf whenever they deem it necessary. And about the only thing standing between citizens and that unceasing government desire is end-to-end encryption.

Source: EU Pitched Client-Side Scanning By Targeting Certain EU Residents With Misleading Ads | Techdirt

As soon as you read that legislation is ‘for the kids’, be very, very wary – it’s usually for something completely beyond that remit. And this kind of legislation is the installation of Big Brother on every single communications line you use.

YouTube is cracking down on ad blockers globally. Time to go to the next video site. Vimeo, are you listening?

YouTube is no longer preventing just a small subset of its userbase from accessing its videos if they have an ad blocker. The platform has gone all out in its fight against the use of add-ons, extensions and programs that prevent it from serving ads to viewers around the world, it confirmed to Engadget. “The use of ad blockers violate YouTube’s Terms of Service,” a spokesperson told us. “We’ve launched a global effort to urge viewers with ad blockers enabled to allow ads on YouTube or try YouTube Premium for an ad free experience. Ads support a diverse ecosystem of creators globally and allow billions to access their favorite content on YouTube.”

YouTube started cracking down on the use of ad blockers earlier this year. It initially showed pop-ups to users telling them that it’s against the website’s TOS, and then it put a timer on those notifications to make sure people read them. By June, it took on a more aggressive approach and warned viewers that they wouldn’t be able to play more than three videos unless they disabled their ad blockers. That was a “small experiment” meant to urge users to enable ads or to try YouTube Premium, which the website has now expanded to its entire userbase. Some people can’t even play videos on Microsoft Edge and Firefox browsers even if they don’t have ad blockers, according to Android Police, but we weren’t able to replicate that behavior. [Note – I was!]

People are unsurprisingly unhappy about the development and have taken to social networks like Reddit to air their grievances. If they don’t want to enable ads, after all, the only way they can watch videos with no interruptions is to pay for a YouTube Premium subscription. Indeed, the notification viewers get heavily promotes the subscription service. “Ads allow YouTube to stay free for billions of users worldwide,” it says. But with YouTube Premium, viewers can go ad-free, and “creators can still get paid from [their] subscription.”

[…]

Source: YouTube is cracking down on ad blockers globally

It doesn’t help YouTube much that its method of detecting your ad blocker basically comes down to using spyware. Source: Privacy advocate challenges YouTube’s ad blocking detection (which isn’t spyware)

Researchers devise method using mirrors to monitor nuclear stockpiles offsite

Researchers say they have developed a method to remotely track the movement of objects in a room using mirrors and radio waves, in the hope it could one day help monitor nuclear weapons stockpiles.

According to the non-profit org International Campaign to Abolish Nuclear Weapons, nine countries – Russia, the United States, China, France, the United Kingdom, Pakistan, India, Israel and North Korea – collectively own about 12,700 nuclear warheads.

Meanwhile, over 100 nations have signed the United Nations’ Treaty on the Prohibition of Nuclear Weapons, promising to not “develop, test, produce, acquire, possess, stockpile, use or threaten to use” the tools of mass destruction. Tracking signs of secret nuclear weapons development, or changes in existing warhead caches, can help governments identify entities breaking the rules.

A new technique devised by a team of researchers led by the Max Planck Institute for Security and Privacy (MPI-SP) aims to remotely monitor the removal of warheads stored in military bunkers. The scientists installed 20 adjustable mirrors and two antennae to monitor the movement of a blue barrel stored in a shipping container. One antenna emits radio waves that bounce off each mirror to create a unique reflection pattern detected by the other antenna.

The signals provide information on the location of objects in the room. Moving the objects or mirrors will produce a different reflection pattern. Experiments showed that the system was sensitive enough to detect whether the blue barrel had shifted by just a few millimetres. Now, the team reckons that it could be applied to monitor whether nuclear warheads have been removed from stockpiles.

At this point, readers may wonder why this tech is proposed for the job when CCTV, or Wi-Fi location, or any number of other observation techniques could do the same job.

The paper explains that the antenna-and-mirror technique doesn’t require secure communication channels or tamper-resistant sensor hardware. The paper’s authors argue it is also “robust against major physical and computational attacks.”

“Seventy percent of the world’s nuclear weapons are kept in storage for military reserve or awaiting dismantlement,” explained Sebastien Philippe, co-author of a research paper published in Nature Communications and an associate research scholar at the School of Public and International Affairs at Princeton University.

“The presence and number of such weapons at any given site cannot be verified easily via satellite imagery or other means that are unable to see into the storage vaults. Because of the difficulties to monitor them, these 9,000 nuclear weapons are not accounted for under existing nuclear arms control agreements. This new verification technology addresses this long-standing challenge and contributes to future diplomatic efforts that would seek to limit all nuclear weapon types,” he said in a statement.

In practice, officials from an organisation such as the UN-led International Atomic Energy Agency, which promotes peaceful uses of nuclear energy, could install the system in a nuclear bunker and measure the radio waves reflecting off its mirrors. The unique fingerprint signal can then be stored in a database.

They could later ask the government controlling the nuclear stockpile to measure the radio wave signal recorded by its detector antenna and compare it to the initial result to check whether any warheads have been moved.

If both measurements are the same, the nuclear weapon stockpile has not been tampered with. But if they’re different, it shows something is afoot. The method is only effective if the initial radio fingerprint detailing the original configuration of the warheads is kept secret, however.
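The comparison step itself is easy to picture. Below is a minimal TypeScript sketch of the check described above, assuming a fingerprint is simply a vector of measured signal values and that a small noise tolerance is acceptable; the function names and the threshold are invented for illustration and are not taken from the paper.

```typescript
// Toy sketch of the fingerprint check described above (not the authors' code).
// Assumption: a fingerprint is a vector of received signal strengths, one per
// mirror configuration / frequency bin, recorded when the stockpile is sealed.

type Fingerprint = number[];

// Mean squared difference between the enrolled and the freshly measured pattern.
function distance(enrolled: Fingerprint, measured: Fingerprint): number {
  if (enrolled.length !== measured.length) {
    throw new Error("fingerprints must have the same length");
  }
  const sum = enrolled.reduce((acc, v, i) => acc + (v - measured[i]) ** 2, 0);
  return sum / enrolled.length;
}

// If the difference exceeds a (hypothetical) noise threshold, something in the
// room - a warhead, a mirror, the antenna - has been moved.
function stockpileUntouched(
  enrolled: Fingerprint,
  measured: Fingerprint,
  noiseThreshold = 0.01,
): boolean {
  return distance(enrolled, measured) <= noiseThreshold;
}

// Usage: the inspector keeps `enrolled` secret and asks the host state for a
// fresh measurement, which must arrive within a short time window.
const enrolled: Fingerprint = [0.42, 0.17, 0.88, 0.63];
const fresh: Fingerprint = [0.43, 0.16, 0.88, 0.62];
console.log(stockpileUntouched(enrolled, fresh)); // true -> nothing moved
```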

Unfortunately, it’s not quite foolproof, considering adversaries could technically use machine learning algorithms to predict how the positions of the mirrors generate the corresponding radio wave signal detected by the antenna.

“With 20 mirrors, it would take eight weeks for an attacker to decode the underlying mathematical function,” said Johannes Tobisch, co-author of the study and a researcher at the MPI-SP. “Because of the scalability of the system, it’s possible to increase the security factor even more.”

To prevent this, the researchers said that the verifier and prover should agree to send back a radio wave measurement within a short time frame, such as within a minute or so. “Beyond nuclear arms control verification, our inspection system could find application in the financial, information technology, energy, and art sectors,” they concluded in their paper.

“The ability to remotely and securely monitor activities and assets is likely to become more important in a world that is increasingly networked and where physical travel and on-site access may be unnecessary or even discouraged.”

Source: Researchers devise new method to monitor nuclear stockpiles • The Register

Drugmakers Are Set To Pay 23andMe Millions To Access Your DNA – which is also your family’s DNA

GSK will pay 23andMe $20 million for access to the genetic-testing company’s vast trove of consumer DNA data, extending a five-year collaboration that’s allowed the drugmaker to mine genetic data as it researches new medications.

Under the new agreement, 23andMe will provide GSK with one year of access to anonymized DNA data from the approximately 80% of gene-testing customers who have agreed to share their information for research, 23andMe said in a statement Monday. The genetic-testing company will also provide data-analysis services to GSK.

23andMe is best known for its DNA-testing kits that give customers ancestry and health information. But the DNA it collects is also valuable, including for scientific research. With information from more than 14 million customers, the only data sets that rival the size of the 23andMe library belong to Ancestry.com and the Chinese government. The idea for drugmakers is to comb the data for hints about genetic pathways that might be at the root of disease, which could significantly speed up the long, slow process of drug development. GSK and 23andMe have already taken one potential medication to clinical trials: a cancer drug that works to block CD96, a protein that helps modulate the body’s immune responses. It entered that testing phase in four years, compared to an industry average of about seven years. Overall, the partnership between GSK and 23andMe has produced more than 50 new drug targets, according to the statement.

The new agreement changes some components of the collaboration. Any discoveries GSK makes with the 23andMe data will now be solely owned by the British pharmaceutical giant, while the genetic-testing company will be eligible for royalties on some projects. In the past, the two companies pursued new drug targets jointly. GSK’s new deal with 23andMe is also non-exclusive, leaving the genetic-testing company free to license its database to other drugmakers.

Source: Drugmakers Are Set To Pay 23andMe Millions To Access Consumer DNA – Slashdot

So – you paid for a DNA test and it turns out you didn’t think of the privacy aspect at all. Neither did you realise that you gave up your family’s DNA. Or that you can’t actually change your DNA either. Well done. It’s being spread all over the place. And no, the data is not anonymous – DNA is the most personal information you can ever give up.

Privacy advocate challenges YouTube’s ad blocking detection (which isn’t spyware)

Last week, privacy advocate (and very occasional Reg columnist) Alexander Hanff filed a complaint with the Irish Data Protection Commission (DPC) decrying YouTube’s deployment of JavaScript code to detect the use of ad blocking extensions by website visitors.

On October 16, according to the Internet Archive’s Wayback Machine, Google published a support page declaring that “When you block YouTube ads, you violate YouTube’s Terms of Service.”

“If you use ad blockers,” it continues, “we’ll ask you to allow ads on YouTube or sign up for YouTube Premium. If you continue to use ad blockers, we may block your video playback.”

YouTube’s Terms of Service do not explicitly disallow ad blocking extensions, which remain legal in the US [PDF], in Germany, and elsewhere. But the language says users may not “circumvent, disable, fraudulently engage with, or otherwise interfere with any part of the Service” – which probably includes the ads.

Image of ‘Ad blockers are not allowed’ popup

YouTube’s open hostility to ad blockers coincides with the recent trial deployment of a popup notice presented to web users who visit the site with an ad-blocking extension in their browser – messaging tested on a limited audience at least as far back as May.

In order to present that popup YouTube needs to run a script, changed at least twice a day, to detect blocking efforts. And that script, Hanff believes, violates the EU’s ePrivacy Directive – because YouTube did not first ask for explicit consent to conduct such browser interrogation.
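For context on what such a detection script can look like, here is a minimal sketch of one common technique, the “bait element” check, written as browser-side TypeScript. This illustrates the general approach only; YouTube’s actual script is obfuscated, changes frequently and is not public, so the class names and timing below are assumptions.

```typescript
// Illustrative "bait element" ad-blocker check - not YouTube's actual script.
// The page inserts an element that looks like an ad; most blockers hide or
// remove it, and the script then infers that a blocker is active.
function adBlockerDetected(): Promise<boolean> {
  return new Promise((resolve) => {
    const bait = document.createElement("div");
    bait.className = "ad adsbox ad-banner"; // class names commonly filtered
    bait.style.cssText = "position:absolute;left:-9999px;height:1px;width:1px;";
    document.body.appendChild(bait);

    // Give filter lists a moment to act before inspecting the bait.
    setTimeout(() => {
      const blocked =
        bait.offsetHeight === 0 ||
        window.getComputedStyle(bait).display === "none";
      bait.remove();
      resolve(blocked);
    }, 100);
  });
}

// Hypothetical usage: show an interstitial when a blocker is detected.
adBlockerDetected().then((blocked) => {
  if (blocked) {
    console.log("Ad blocker detected - show popup / pause playback");
  }
});
```

Hanff’s point is that running this kind of interrogation of the user’s browser, whatever form it takes, is an access to terminal equipment that in his view requires prior consent.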

[…]

Asked how he hopes the Irish DPC will respond, Hanff replied via email, “I would expect the DPC to investigate and issue an enforcement notice to YouTube requiring them to cease and desist these activities without first obtaining consent (as per [Europe’s General Data Protection Regulation (GDPR)] standard) for the deployment of their spyware detection scripts; and further to order YouTube to unban any accounts which have been banned as a result of these detections and to delete any personal data processed unlawfully (see Article 5(1) of GDPR) since they first started to deploy their spyware detection scripts.”

Hanff’s use of strikethrough formatting acknowledges the legal difficulty of using the term “spyware” to refer to YouTube’s ad block detection code. The security industry’s standard defamation defense terminology for such stuff is PUPs, or potentially unwanted programs.

[…]

Hanff’s contention that ad-blocker detection without consent is unlawful in the EU was challenged back in 2016 by the maker of a detection tool called BlockAdblock. The software maker argued that JavaScript code is not stored in the way considered in Article 5(3) of the ePrivacy Directive, which the firm suggests was intended for cookies.

Hanff disagrees, and maintains that “The Commission and the legislators have been very clear that any access to a user’s terminal equipment which is not strictly necessary for the provision of a requested service, requires consent.

“This is also bound by CJEU Case C-673/17 (Planet49) from October 2019 which *all* Member States are legally obligated to comply with, under the [Treaty on the Functioning of the European Union] – there is no room for deviation on this issue,” he elaborated.

“If a script or other digital technology is strictly necessary (technically required to deliver the requested service) then it is exempt from the consent requirements and as such would pose no issue to publishers engaging in legitimate activities which respect fundamental rights under the Charter.

“It is long past time that companies meet their legal obligations for their online services,” insisted Hanff. “This has been law since 2002 and was further clarified in 2009, 2012, and again in 2019 – enough is enough.”

Google did not respond to a request for comment.

Source: Privacy advocate challenges YouTube’s ad blocking detection • The Register

Apple’s MAC Address Privacy Feature Has Never Worked

Ever since Apple re-branded as the “Privacy” company several years back, it’s been rolling out features designed to show its commitment to protecting users. Yet while customers might feel safer using an iPhone, there’s already plenty of evidence that Apple’s branding efforts don’t always match the reality of its products. In fact, a lot of its privacy features don’t actually seem to work.

Case in point: new research shows that one of Apple’s proffered privacy tools—a feature that was supposed to anonymize mobile users’ connections to WiFi—is effectively “useless.” In 2020, Apple debuted a feature that, when switched on, was supposed to hide an iPhone user’s media access control—or MAC—address. When a device connects to a WiFi network, it must first send out its MAC address so the network can identify it; when the same MAC address pops up in network after network, it can be used by network observers to identify and track a specific mobile user’s movements.

Apple’s feature was supposed to provide randomized MAC addresses for users as a way of stopping this kind of tracking from happening. But, apparently, a bug persisted for years that made the feature effectively useless.
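To see why per-network randomization defeats this kind of tracking, here is a toy sketch of the general idea: derive a stable but network-specific, locally administered address from a device secret and the network name, so each network sees a different MAC but the same network always sees the same one. This is purely illustrative and is not Apple’s implementation.

```typescript
// Toy illustration of per-network MAC randomization - not Apple's code.
import { createHmac, randomBytes } from "node:crypto";

function privateMacForNetwork(deviceSecret: Buffer, ssid: string): string {
  // Derive six pseudorandom octets from the device secret and the SSID.
  const digest = createHmac("sha256", deviceSecret).update(ssid).digest();
  const octets = Array.from(digest.subarray(0, 6));
  // Set the "locally administered" bit and clear the multicast bit so the
  // result is a valid unicast MAC that is visibly not a factory address.
  octets[0] = (octets[0] & 0b11111100) | 0b00000010;
  return octets.map((b) => b.toString(16).padStart(2, "0")).join(":");
}

const secret = randomBytes(32); // generated once per device in this sketch
console.log(privateMacForNetwork(secret, "CoffeeShopWiFi"));
console.log(privateMacForNetwork(secret, "AirportFreeWiFi")); // different address
```

The bug the researchers describe is that, whatever the advertised address, the device still leaked its real MAC in other traffic on the network, so the randomization bought users nothing.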

According to a new report from Ars Technica, researchers recently tested the feature to see if it actually concealed their MAC addresses, only to find that it didn’t do that at all. Ars writes:

Despite promises that this never-changing address would be hidden and replaced with a private one that was unique to each SSID, Apple devices have continued to display the real one, which in turn got broadcast to every other connected device on the network.

One of the researchers behind the discovery of the vulnerability, Tommy Mysk, told Ars that, from the jump, “this feature was useless because of this bug,” and that, try as they might, he “couldn’t stop the devices from sending these discovery requests, even with a VPN. Even in the Lockdown Mode.”

What Apple’s justification is for advertising a feature that just plainly does not work, I’m not sure. Gizmodo reached out to the company for comment and will update this story if they respond. A recent update, iOS 17.1, apparently patches the problem and ensures that the feature actually works.

Source: Apple’s MAC Address Privacy Feature Has Never Worked

Air Canada Sues Website That Helps People Book More Flights, Simultaneously Calls Own Website Team Incompetent Beyond Belief

I am so frequently confused by companies that sue other companies for making their own sites and services more useful. It happens quite often. And quite often, the lawsuits are questionable CFAA claims against websites that scrape data to provide a better consumer experience, but one that still ultimately benefits the originating site.

Over the last few years various airlines have really been leading the way on this, with Southwest being particularly aggressive in suing companies that help people find Southwest flights to purchase. Unfortunately, many of these lawsuits are succeeding, to the point that a court has literally said that a travel company can’t tell others how much Southwest flights cost.

But the latest lawsuit of this nature doesn’t involve Southwest, and is quite possibly the dumbest one. Air Canada has sued the site Seats.aero that helps users figure out the best flights for their frequent flyer miles. Seats.aero is a small operation run by the company with the best name ever: Localhost, meaning that the lawsuit is technically “Air Canada v. Localhost” which sounds almost as dumb as this lawsuit is.

The Air Canada Group brings this action because Mr. Ian Carroll—through Defendant Localhost LLC—created a for-profit website and computer application (or “app”)— both called Seats.aero—that use substantial amounts of data unlawfully scraped from the Air Canada Group’s website and computer systems. In direct violation of the Air Canada Group’s web terms and conditions, Carroll uses automated digital robots (or “bots”) to continuously search for and harvest data from the Air Canada Group’s website and database. His intrusions are frequent and rapacious, causing multiple levels of harm, e.g., placing an immense strain on the Air Canada Group’s computer infrastructure, impairing the integrity and availability of the Air Canada Group’s data, soiling the customer experience with the Air Canada Group, interfering with the Air Canada Group’s business relations with its partners and customers, and diverting the Air Canada Group’s resources to repair the damage. Making matters worse, Carroll uses the Air Canada Group’s federally registered trademarks and logo to mislead people into believing that his site, app, and activities are connected with and/or approved by the real Air Canada Group and lending an air of legitimacy to his site and app. The Air Canada Group has tried to stop Carroll’s activities via a number of technological blocking measures. But each time, he employs subterfuge to fraudulently access and take the data—all the while boasting about his exploits and circumvention online.

Almost nothing in this makes any sense. Having third parties scrape sites for data about prices is… how the internet works. Whining about it is stupid beyond belief. And here, it’s doubly stupid, because anyone who finds a flight via seats.aero is then sent to Air Canada’s own website to book that flight. Air Canada is making money because Carroll’s company is helping people find Air Canada flights they can take.

Why are they mad?

Air Canada’s lawyers also seem technically incompetent. I mean, what the fuck is this?

Through screen scraping, Carroll extracts all of the data displayed on the website, including the text and images.

Carroll also employs the more intrusive API scraping to further feed Defendant’s website.

If the “API scraping” is “more intrusive” than screen scraping, you’re doing your APIs wrong. Is Air Canada saying that its tech team is so incompetent that its API puts more load on the site than scraping? Because, if so, Air Canada should fire its tech team. The whole point of an API is to make it easier for those accessing data from your website without needing to do the more cumbersome process of scraping.

And, yes, this lawsuit really calls into question Air Canada’s tech team and their ability to run a modern website. If your website can’t handle having its flights and prices scraped a few times every day, then you shouldn’t have a website. Get some modern technology, Air Canada:

Defendant’s avaricious data scraping generates frequent and myriad requests to the Air Canada Group’s database—far in excess of what the Air Canada Group’s infrastructure was designed to handle. Its scraping collects a large volume of data, including flight data within a wide date range and across extensive flight origins and destinations—multiple times per day.

Maybe… invest in better infrastructure like basically every other website that can handle some basic scraping? Or, set up your API so it doesn’t fall over when used for normal API things? Because this is embarrassing:

At times, Defendant’s voluminous requests have placed such immense burdens on the Air Canada Group’s infrastructure that it has caused “brownouts.” During a brownout, a website is unresponsive for a period of time because the capacity of requests exceeds the capacity the website was designed to accommodate. During brownouts caused by Defendant’s data scraping, legitimate customers are unable to use [the website] or the Air Canada + Aeroplan mobile app, including to search for available rewards, redeem Aeroplan points for the rewards, search for and view reward travel availability, book reward flights, contact Aeroplan customer support, and/or obtain service through the Aeroplan contact center due to the high volume of calls during brownouts.

Air Canada’s lawyers also seem wholly unfamiliar with the concept of nominative fair use for trademarks. If you’re displaying someone’s trademarks for the sake of accurately talking about them, there’s no likelihood of confusion and no concern about the source of the information. Air Canada claiming that this is trademark infringement is ridiculous.

I guarantee that no one using Seats.aero thinks that they’re on Air Canada’s website.

The whole thing is so stupid that it makes me never want to fly Air Canada again. I don’t trust an airline that can’t set up its website/API to handle someone making its flights more attractive to buyers.

But, of course, in these crazy times with the way the CFAA has been interpreted, there’s a decent chance Air Canada could win.

For its part, Carroll says that he and his lawyers have reached out to Air Canada “repeatedly” to try to work with them on how they “retrieve availability information,” and that “Air Canada has ignored these offers.” He also notes that tons of other websites are scraping the very same information, and he has no idea why he’s been singled out. He further notes that he’s always been open to adjusting the frequency of searches and working with the airlines to make sure that his activities don’t burden the website.

But, really, the whole thing is stupid. The only thing that Carroll’s website does is help people buy more flights. It points people to the Air Canada site to buy tickets. It makes people want to fly more on Air Canada.

Why would Air Canada want to stop that, other than that it can’t admit that its website operations should all be replaced by a more competent team?

Source: Air Canada Would Rather Sue A Website That Helps People Book More Flights Than Hire Competent Web Engineers | Techdirt

New French AI Copyright Law Would Effectively Tax AI Companies, Enrich French taxman

This blog has written a number of times about the reaction of creators to generative AI. Legal academic and copyright expert Andres Guadamuz has spotted what may be the first attempt to draw up a new law to regulate generative AI. It comes from French politicians, who have developed something of a habit of bringing in new laws attempting to control digital technology that they rarely understand but definitely dislike.

There are only four articles in the text of the proposal, which are intended to be added as amendments to existing French laws. Despite being short, the proposal contains some impressively bad ideas. The first of these is found in Article 2, which, as Guadamuz summarises, “assigns ownership of the [AI-generated] work (now protected by copyright) to the authors or assignees of the works that enabled the creation of the said artificial work.” Here’s the huge problem with that idea:

How can one determine the author of the works that facilitated the conception of the AI-generated piece? While it might seem straightforward if AI works are viewed as collages or summaries of existing copyrighted works, this is far from the reality. As of now, I’m unaware of any method to extract specific text from ChatGPT or an image from Midjourney and enumerate all the works that contributed to its creation. That’s not how these models operate.

Since there is no way to find out exactly which creators’ work helped generate a new piece of AI material produced from aggregated statistics, Guadamuz suggests that the French lawmakers might want creators to be paid according to their contribution to the training material that went into creating the generative AI system itself. Using his own writings as an example, he calculates what fraction of any given payout he would receive with this approach. For ChatGPT’s output, Guadamuz estimates he might receive 0.00001% of any payout that was made. To give an example, even if the licensing fee for some hugely popular work generated using AI were €1,000,000, Guadamuz would only receive 10 cents. Most real-life payouts to creators would be vanishingly small.
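A quick back-of-the-envelope check of that arithmetic, using the figures quoted above (the fee is hypothetical, the share is Guadamuz’s own estimate):

```typescript
// Back-of-the-envelope check of the payout example quoted above.
const licensingFee = 1_000_000; // euros, hypothetical fee for a popular AI work
const contributionShare = 0.00001 / 100; // Guadamuz's estimated 0.00001% share

const payout = licensingFee * contributionShare;
console.log(`Payout: €${payout.toFixed(2)}`); // Payout: €0.10, i.e. 10 cents
```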

Article 3 of the French proposal builds on this ridiculous approach by requiring the names of all the creators who contributed to some AI-generated output to be included in that work. But as Guadamuz has already noted, there’s no way to find out exactly whose works have contributed to an output, leaving the only option to include the names of every single creator whose work is present in the training set – potentially millions of names.

Interestingly, Article 4 seems to recognize the payment problem raised above, and offers a way to deal with it. Guadamuz explains:

As it will be not possible to find the author of an AI work (which remember, has copyright and therefore isn’t in the public domain), the law will place a tax on the company that operates the service. So it’s sort of in the public domain, but it’s taxed, and the tax will be paid by OpenAI, Google, Midjourney, StabilityAI, etc. But also by any open source operator and other AI providers (Huggingface, etc). And the tax will be used to fund the collective societies in France… so unless people are willing to join these societies from abroad, they will get nothing, and these bodies will reap the rewards.

In other words, the net effect of the French proposal seems to be to tax the emerging AI giants (mostly US companies) and pay the money to French collecting societies. Guadamuz goes so far as to say: “in my view, this is the real intention of the legislation”. Anyone who thinks this is a good solution might want to read Chapter 7 of Walled Culture the book (free digital versions available), which quotes from a report revealing “a long history of corruption, mismanagement, confiscation of funds, and lack of transparency [by collecting societies] that has deprived artists of the revenues they earned”. Trying to fit generative AI into the straitjacket of an outdated copyright system designed for books is clearly unwise; using it as a pretext for funneling yet more money away from creators and towards collecting societies is just ridiculous.

Source: New French AI Copyright Law Would Effectively Tax AI Companies, Enrich Collection Societies | Techdirt

Empowering Responsible and Compliant Practices: Bridging the Gap for US Citizens and Corporations with the New EU-US Data Privacy Framework

The Data Privacy Framework (DPF) presents new legal guidance to facilitate personal data sharing between US companies and their counterparts in the EU and the UK. This framework empowers individuals with greater control over their personal data and streamlines business operations by creating common rules around interoperable dataflows. Moreover, the DPF will help enable clear contract terms and business codes of conduct for corporations that collect, use, and transfer personal data across borders.

Any business that collects data related to people in the EU must comply with the EU’s General Data Protection Regulation (GDPR), which is the toughest privacy and security law across the globe. Thus, the DPF helps US corporations avoid potentially hefty fines and penalties by ensuring their data transfers align with GDPR regulations.

Data transfer procedures, which were historically time-consuming and riddled with legal complications, are now faster and more straightforward with the DPF, which allows for more transatlantic dataflows agreed on by US companies and their EU and UK counterparts. On July 10, 2023, the European Commission finalized an adequacy decision that assures the US offers data protection levels similar to the EU’s.

[…]

US companies can register with the DPF through the Department of Commerce DPF website. Companies that previously self-certified compliance with the EU-US Privacy Shield can transition to the DPF by recertifying their adherence to DPF principles, including updating privacy policies to reflect any change in procedures and data subject rights that are crucial for this transition. Businesses should develop privacy policies that identify an independent recourse mechanism that can address data protection concerns. To qualify for the DPF, the company must fall under the jurisdiction of either the Federal Trade Commission or the US Department of Transportation, though this reach may broaden in the future.

Source: Empowering Responsible and Compliant Practices: Bridging the Gap for US Citizens and Corporations with the New EU-US Data Privacy Framework | American Enterprise Institute – AEI

The whole self-certification thing seems leaky as a sieve to me… And once data has gone into the US intelligence services you can assume it will go everywhere and there will be no stopping it from the EU side.

Clearview Gets $10 Million UK Fine Reversed, Now Owes Slightly Less To Governments Around The World

Here’s how things went for the world’s most infamous purveyor of facial recognition tech when it came to its dealings with the United Kingdom. In a word: not great.

In addition to supplying its scraped data to known human rights abusers, Clearview was found to have supplied access to a multitude of UK and US entities. At that point (early 2020), it was also making its software available to a number of retailers, suggesting the tool its CEO claimed was instrumental in fighting serious crime (CSAM, terrorism) was just as great at fighting retail theft. For some reason, an anti-human-trafficking charity headed up by author J.K. Rowling was also on the customer list obtained by Buzzfeed.

Clearview’s relationship with the UK government soon soured. In December 2021, the UK government’s Information Commissioner’s Office (ICO) said the company had violated UK privacy laws with its non-consensual scraping of UK residents’ photos and data. That initial declaration from the ICO came with a $23 million fine attached, one that was reduced to a little less than $10 million ($9.4 million) roughly six months later, accompanied by demands Clearview immediately delete all UK resident data in its possession.

This fine was one of several the company managed to obtain from foreign governments. The Italian government — citing EU privacy law violations — levied a $21 million fine. The French government came to the same conclusions and the same penalty, adding another $21 million to Clearview’s European tab.

The facial recognition tech company never bothered to proclaim its innocence after being fined by the UK government. Instead, it simply stated the UK government had no power to enforce this fine because Clearview was a United States company with no physical presence in the United Kingdom.

In addition to engaging in reputational rehab on the UK front, Clearview went to court to challenge the fine levied by the UK government. And it appears to have won this round for the moment, reducing its accounts payable ledger by about $10 million, as Natasha Lomas reports for TechCrunch.

[I]n a ruling issued yesterday its legal challenge to the ICO prevailed on jurisdiction grounds after the tribunal ruled the company’s activities fall outside the jurisdiction of U.K. data protection law owing to an exemption related to foreign law enforcement.

Which is pretty much the argument Clearview made months ago, albeit less elegantly after it was first informed of the fine. The base argument is that Clearview is a US entity providing services to foreign entities and that it’s up to its foreign customers to comply with local laws, rather than Clearview itself.

That argument worked. And it worked because it appears the ICO chose the wrong law to wield against Clearview. The UK’s GDPR does not protect UK residents from actions taken by “competent authorities for law enforcement purposes.” (lol at that entire phrase.) Government customers of Clearview are only subject to the adopted parts of the EU’s Data Protection Act post-Brexit, which means the company’s (alleged) pivot to the public sector puts both its actions — and the actions of its UK law enforcement clients — outside of the reach of the GDPR.

Per the ruling, Clearview argued it’s a foreign company providing its service to “foreign clients, using foreign IP addresses, and in support of the public interest activities of foreign governments and government agencies, in particular in relation to their national security and criminal law enforcement functions.”

That’s enough to get Clearview off the hook. While the GDPR and EU privacy laws have extraterritorial provisions, they also make exceptions for law enforcement and national security interests. GDPR has more exceptions, which made it that much easier for Clearview to walk away from this penalty by claiming it only sold to entities subject to this exception.

Whether or not that’s actually true has yet to be determined. And it might have made more sense for the ICO to prosecute this under the parts of EU law the UK government decided to adopt after deciding it no longer wanted to be part of this particular union.

Even if the charges had stuck, it’s unlikely Clearview would ever have paid the fine. According to its CEO and spokespeople, Clearview owes nothing to anyone. Whatever anyone posts publicly is fair game. And if the company wants to hoover up everything on the web that isn’t nailed down, well, that’s a problem for other people to be subjected to, possibly at gunpoint. Until someone can actually make something stick, all they’ve got is bills they can’t collect and a collective GFY from one of the least ethical companies to ever get into the facial recognition business.

Source: Clearview Gets $10 Million UK Fine Reversed, Now Owes Slightly Less To Governments Around The World | Techdirt

Google Decides To Pull Up The Ladder On The Open Internet, Pushes For Unconstitutional Regulatory Proposals

It’s pretty much the way of the world: beyond the basic enshittification story that has been so well told over the past year or so about how companies get worse and worse as they get more and more powerful, there’s also the well-known concept of successful innovative companies “pulling up the ladder” behind them, using the regulatory process to make it impossible for other companies to follow their own path to success. We’ve talked about this in the sense of political entrepreneurship, which is when the main entrepreneurial effort is not to innovate in newer and better products for customers, but rather to use the political system for personal gain and to prevent competitors from having the same opportunities.

It happens all too frequently. And it’s been happening lately with the big internet companies, which relied on the open internet to become successful, but which, under massive pressure from regulators (and the media), keep shooting the open internet in the back whenever they can present themselves as “supportive” of some dumb regulatory regime. Facebook did it six years ago by supporting FOSTA wholeheartedly, which was the key tide shift that made the law viable in Congress.

And, now, it appears that Google is going down that same path. There have been hints here and there, such as when it mostly gave up the fight on net neutrality six years ago. However, Google had still appeared to be active in various fights to protect an open internet.

But, last week, Google took a big step towards pulling up the open internet ladder behind it, which got almost no coverage (and what coverage it got was misleading). And, for the life of me, I don’t understand why it chose to do this now. It’s one of the dumbest policy moves I’ve seen Google make in ages, and seems like a complete unforced error.

Last Monday, Google announced “a policy framework to protect children and teens online,” which was echoed by subsidiary YouTube, which posted basically the same thing, talking about its “principled approach for children and teenagers.” Both of these pushed not just a “principled approach” for companies to take, but a legislative model (and I hear that they’re out pushing “model bills” across legislatures as well).

The “legislative” model is, effectively, California’s Age Appropriate Design Code. Yes, the very law that was declared unconstitutional just a few weeks before Google basically threw its weight behind the approach. What’s funny is that many, many people have (incorrectly) believed that Google was some sort of legal mastermind behind the NetChoice lawsuits challenging California’s law and other similar laws, when the reality appears to be that Google knows full well that it can handle the requirements of the law, but smaller competitors cannot. Google likes the law. It wants more of them, apparently.

The model includes “age assurance” (which is effectively age verification, though everyone pretends it’s not), greater parental surveillance, and the compliance nightmare of “impact assessments” (we talked about this nonsense in relation to the California law). Again, for many companies this is a good idea. But just because something is a good idea for companies to do does not mean that it should be mandated by law.

But that’s exactly what Google is pushing for here, even as a law that more or less mimics its framework was just found to be unconstitutional. While cynical people will say that maybe Google is supporting these policies hoping that they will continue to be found unconstitutional, I see little evidence to support that. Instead, it really sounds like Google is fully onboard with these kinds of duty of care regulations that will harm smaller competitors, but which Google can handle just fine.

It’s pulling up the ladder behind it.

And yet, the press coverage of this focused on the fact that this was being presented as an “alternative” to a full on ban for kids under 18 to be on social media. The Verge framed this as “Google asks Congress not to ban teens from social media,” leaving out that it was Google asking Congress to basically make it impossible for any site other than the largest, richest companies to be able to allow teens on social media. Same thing with TechCrunch, which framed it as Google lobbying against age verification.

But… it’s not? It’s basically lobbying for age verification, just in the guise of “age assurance,” which is effectively “age verification, but if you’re a smaller company you can get it wrong for some undefined amount of time, until someone sues you.” I mean, what’s here is not “lobbying against age verification,” it’s basically saying “here’s how to require age verification.”

A good understanding of user age can help online services offer age-appropriate experiences. That said, any method to determine the age of users across services comes with tradeoffs, such as intruding on privacy interests, requiring more data collection and use, or restricting adult users’ access to important information and services. Where required, age assurance – which can range from declaration to inference and verification – should be risk-based, preserving users’ access to information and services, and respecting their privacy. Where legislation mandates age assurance, it should do so through a workable, interoperable standard that preserves the potential for anonymous or pseudonymous experiences. It should avoid requiring collection or processing of additional personal information, treating all users like children, or impinging on the ability of adults to access information. More data-intrusive methods (such as verification with “hard identifiers” like government IDs) should be limited to high-risk services (e.g., alcohol, gambling, or pornography) or age correction. Moreover, age assurance requirements should permit online services to explore and adapt to improved technological approaches. In particular, requirements should enable new, privacy-protective ways to ensure users are at least the required age before engaging in certain activities. Finally, because age assurance technologies are novel, imperfect, and evolving, requirements should provide reasonable protection from liability for good-faith efforts to develop and implement improved solutions in this space.
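To make the tiering in that framework a bit more concrete, here is a toy sketch (entirely hypothetical, not drawn from Google’s documents or any real service) of what a “risk-based” mapping from service type to age-assurance method might look like:

```python
# Hypothetical illustration only: a toy "risk-based age assurance" decision.
# The risk tiers, method names, and examples are assumptions, not Google's framework.
from enum import Enum

class Risk(Enum):
    LOW = "general-interest content"
    MEDIUM = "social features such as messaging"
    HIGH = "alcohol, gambling or pornography"

def assurance_method(risk: Risk) -> str:
    """Pick the least intrusive method that the (assumed) risk tier allows."""
    if risk is Risk.LOW:
        return "self-declaration"                    # user states an age, no extra data collected
    if risk is Risk.MEDIUM:
        return "age inference"                       # estimate age from signals already held
    return "verification with a hard identifier"     # e.g. a government ID, reserved for high risk

for tier in Risk:
    print(f"{tier.name:6} ({tier.value}): {assurance_method(tier)}")
```

The point of writing it out this way is that every branch is a policy choice: who defines the tiers, and who audits the mapping, is exactly what any law built on this framework would end up litigating.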

Much like Facebook caving on FOSTA, this is Google caving on age verification and other “duty of care” approaches to regulating the way kids have access to the internet. It’s pulling up the ladder behind itself, knowing that it was able to grow without having to take these steps, and making sure that none of the up-and-coming challengers to Google’s position will have the same freedom to do so.

And, for what? So that Google can go to regulators and say “look, we’re not against regulations, here’s our framework”? But Google has smart policy people. They have to know how this plays out in reality. Just as with FOSTA, it completely backfired on Facebook (and the open internet). This approach will do the same.

Not only will these laws inevitably be used against the companies themselves, they’ll also be weaponized and modified by policymakers who will make them even worse and even more dangerous, all while pointing to Google’s “blessing” of this approach as an endorsement.

For years, Google had been somewhat unique in continuing to fight for the open internet long after many other companies were switching over to ladder pulling. There were hints that Google was going down this path in the past, but with this policy framework, the company has now made it clear that it has no intention of being a friend to the open internet any more.

Source: Google Decides To Pull Up The Ladder On The Open Internet, Pushes For Unconstitutional Regulatory Proposals | Techdirt

Well, with Chrome-only support, DNS over HTTPS and browser privacy sandboxing, Google has been off the “don’t be evil” path for some time, closing off the openness of the web by rebuilding or crushing competition.

Universal Music sues AI start-up Anthropic for scraping song lyrics – will they come after you for having read the lyrics or memorised the song next?

Universal Music has filed a copyright infringement lawsuit against artificial intelligence start-up Anthropic, as the world’s largest music group battles against chatbots that churn out its artists’ lyrics.

Universal and two other music companies allege that Anthropic scrapes their songs without permission and uses them to generate “identical or nearly identical copies of those lyrics” via Claude, its rival to ChatGPT.

When Claude is asked for lyrics to the song “I Will Survive” by Gloria Gaynor, for example, it responds with “a nearly word-for-word copy of those lyrics,” Universal, Concord, and ABKCO said in a filing with a US court in Nashville, Tennessee.

“This copyrighted material is not free for the taking simply because it can be found on the Internet,” the music companies said, while claiming that Anthropic had “never even attempted” to license their copyrighted work.

[…]

Universal earlier this year asked Spotify and other streaming services to cut off access to its music catalogue for developers using it to train AI technology.

Source: Universal Music sues AI start-up Anthropic for scraping song lyrics | Ars Technica

So don’t think about memorising or even listening to copyrighted material from them because apparently they will come after you with the mighty and crazy arm of the law!

Equifax poked with paltry $13.4 million following 147m customer data breach in 2017

Credit bureau Equifax has been fined US$13.4 million by the Financial Conduct Authority (FCA), a UK financial watchdog, following its involvement in “one of the largest” data breaches ever.

This cyber security incident took place in 2017, when Equifax’s US-based parent company, Equifax Inc., suffered a data breach in which malicious actors accessed the personal data of up to 147.9 million customers. Because this data was stored on company servers in the US, the FCA revealed, the hack also exposed the personal data of 13.8 million UK customers.

The data accessed during the hack included Equifax membership login details, customer names, dates of birth, partial credit card details and addresses.

According to the FCA, the cyber attack and subsequent data breach was “entirely preventable” and exposed UK customers to financial crime.

“There were known weaknesses in Equifax Inc’s data security systems and Equifax failed to take appropriate action in response to protect UK customer data,” the FCA explained.

The authority also noted that the UK arm of Equifax was not made aware that customer data had been accessed by malicious actors until six weeks after the cyber security incident was discovered by Equifax Inc.

The company was also fined $60,727 in 2018 by the British Information Commissioner’s Office (ICO) in relation to the data breach.

On October 13th, Equifax stated that it had fully cooperated with the FCA during the investigation, which has been extensive. The FCA also said that the fine levelled at Equifax Inc had been reduced following the company’s agreement to cooperate with the watchdog and resolve the cyber attack.

Patricio Remon, president for Europe at Equifax, said that since the cyber attack against Equifax in 2017, the company has “invested over $1.5 billion in a security and technology transformation”. Remon also said that “few companies have invested more time and resources than Equifax to ensure that consumers’ information is protected”.

Source: Equifax fined $13.4 million following data breach

Museum Collection Of Historical TV Culture At Risk Due To Copyright Takedowns

[…]

the informal nature of their collections means that they are exposed to serious threats from copyright, as the recent experience of The Museum of Classic Chicago Television makes clear. The Museum explains why it exists:

The Museum of Classic Chicago Television (FuzzyMemoriesTV) is constantly searching out vintage material on old videotapes saved in basements or attics, or sold at flea markets, garage sales, estate sales and everywhere in between. Some of it would be completely lost to history if it were not for our efforts. The local TV stations have, for the most part, regrettably done a poor job at preserving their history. Tapes were very expensive 25-30 years ago and there also was a lack of vision on the importance of preserving this material back then. If the material does not exist on a studio master tape, what is to be done? Do we simply disregard the thousands of off-air recordings that still exist holding precious “lost” material? We believe this would be a tragic mistake.

Dozens of TV professionals and private individuals have donated to the museum their personal copies of old TV programmes made in the 1970s and 1980s, many of which include rare and otherwise unavailable TV advertisements that were shown as part of the broadcasts. In addition to the main Museum of Classic Chicago Television site, there is also a YouTube channel with videos. However, as TorrentFreak recounts, the entire channel was under threat because of copyright takedown requests:

In a series of emails starting Friday and continuing over the weekend, [the museum’s president and lead curator] Klein began by explaining his team’s predicament, one that TorrentFreak has heard time and again over the past few years. Acting on behalf of a copyright owner, in this case Sony, India-based anti-piracy company Markscan hit the MCCTv channel with a flurry of copyright claims. If these cannot be resolved, the entire project may disappear.

One issue is that Klein was unable to contact Markscan to resolve the problem directly. He is quoted by TorrentFreak as saying: “I just need to reach a live human being to try to resolve this without copyright strikes. I am willing to remove the material manually to get the strikes reversed.”

Once the copyright enforcement machine is engaged, it can be hard to stop. As Walled Culture the book (free digital versions available) recounts, there are effectively no penalties for unreasonable or even outright false claims. The playing field is tipped entirely in favour of the copyright world, and anyone who is targeted using one of the takedown mechanisms is unlikely to be able to do much to contest them, unless they have good lawyers and deep pockets. Fortunately, in this case, an Ars Technica article on the issue reported that:

Sony’s copyright office emailed Klein after this article was published, saying it would “inform MarkScan to request retractions for the notices issued in response to the 27 full-length episode postings of Bewitched” in exchange for “assurances from you that you or the Fuzzy Memories TV Channel will not post or re-post any infringing versions from Bewitched or other content owned or distributed by SPE [Sony Pictures Entertainment] companies.”

That “concession” by Sony highlights the main problem here: the fact that a group of public-spirited individuals trying to preserve unique digital artefacts must live with the constant threat of copyright companies taking action against them. Moreover, there is also the likelihood that some of their holdings will have to be deleted as a result of those legal threats, despite the material’s possible cultural value or the fact that it is the only surviving copy. No one wins in this situation, but the purity of copyright must be preserved at all costs, it seems.

[…]

Source: Museum Collection Of Historical TV Culture At Risk Due To Copyright Takedowns | Techdirt

ICE, CBP, Secret Service All Illegally Used Smartphone Location Data

In a bombshell report, an oversight body for the Department of Homeland Security (DHS) found that Immigration and Customs Enforcement (ICE), Customs and Border Protection (CBP), and the Secret Service all broke the law while using location data harvested from ordinary apps installed on smartphones. In one instance, a CBP official also inappropriately used the technology to track the location of coworkers with no investigative purpose.

For years U.S. government agencies have been buying access to location data through commercial vendors, a practice which critics say skirts the Fourth Amendment requirement of a warrant. During that time, the agencies have typically refused to publicly explain the legal basis on which they based their purchase and use of the data. Now, the report shows that three of the main customers of commercial location data broke the law while doing so, and didn’t have any supervisory review to ensure proper use of the technology. The report also recommends that ICE stop all use of such data until it obtains the necessary approvals, a request that ICE has refused.

The report, titled “CBP, ICE, and Secret Service Did Not Adhere to Privacy Policies or Develop Sufficient Policies Before Procuring and Using Commercial Telemetry Data,” is dated September 28, 2023, and comes from Joseph V. Cuffari, the Inspector General for DHS. The report was originally marked as “law enforcement sensitive,” but the Inspector General has now released it publicly.

Source: ICE, CBP, Secret Service All Illegally Used Smartphone Location Data – Slashdot

EPIC urges FTC to investigate Grindr’s data practices

On Wednesday, EPIC filed a complaint with the US government watchdog over Grindr’s “apparent failure to safeguard users’ sensitive personal data.” This includes both present and past users who have since deleted their accounts, according to the complaint. Despite promising in its privacy policy to delete personal info if customers remove their account, Grindr allegedly retained and disclosed some of this data to third parties.

Considering that people trust the dating app with a ton of very sensitive information — this includes their sexual preferences, self-reported HIV status, chat history, photos including nudes, and location information — “learning that Grindr breaks the promises it makes to users would likely affect a consumer’s decision regarding whether to use Grindr,” the complaint states [PDF].

Grindr, for its part, says privacy is of the utmost importance to it, and that these “unfounded” claims stem from allegations made by a disgruntled ex-worker. So that’s all right then.

“Privacy is a top priority for Grindr and the LGBTQ+ community we serve, and we have adopted industry-leading privacy practices and tools to protect and empower our users,” a spokesperson told The Register.

“We are sorry that the former employee behind the unfounded allegations in today’s request is dissatisfied with his departure from the company; we wish him the best.”

The former employee in question is Grindr’s ex-chief privacy officer Ron De Jesus. In June, De Jesus filed a wrongful termination lawsuit [PDF] against his former bosses that also accused the dating app of violating privacy laws.

According to the lawsuit, De Jesus was “leading the charge to keep Grindr compliant with state, national, and international laws” after Norway’s data protection agency fined the dating app biz about $12 million in December 2021 and a Wall Street Journal article in May 2022 accused the application developer of selling users’ location data.

But despite De Jesus’ attempts, “Grindr placed profit over privacy and got rid of Mr De Jesus for his efforts and reports,” the lawsuit alleges.

EPIC’s complaint, which highlights De Jesus’ allegations, asks the FTC to look into potential violations of privacy law, including deceptive data retention and disclosure practices.

It also accuses Grindr of violating the Health Breach Notification Rule (HBNR). The dating app is subject to the HBNR because it asks users to self-report health data including HIV status, last-tested date, and vaccination status. By sharing these records with third parties and retaining health data after users deleted their accounts, Grindr allegedly breached the HBNR, EPIC says.

The privacy advocates at EPIC want the FTC to make Grindr comply with the laws and stop any “unlawful or impermissible” data retention practices. Additionally, the complaint calls on the federal agency to force Grindr to notify any users whose data was misused, and to impose fines against the dating app for any violations of the HBNR.

Source: EPIC urges FTC to investigate Grindr’s data practices • The Register

Singapore plans to scan your face instead of your passport

[…] “Singapore will be one of the first few countries in the world to introduce automated, passport-free immigration clearance,” said minister for communications and information Josephine Teo in a wrap-up speech for the bill. Teo did concede that Dubai had such clearance for select enrolled travelers, but there was no assurance of other countries planning similar actions.

[…]

Another consideration for why passports will likely remain relevant in Singapore airports is for checking in with airlines. Airlines check passports not just to confirm identity, but also visas and more. Airlines are often held responsible for stranded passengers so will likely be required to confirm travelers have the documentation required to enter their destination.

The Register asked Singapore Airlines to confirm if passports will still be required on the airline after the implementation of biometric clearance. They deferred to Changi’s operator, Changi Airport Group (CAG), which The Reg also contacted – and we will update if a relevant reply arises.

What travelers will see is an expansion of a program already taking form. Changi airport currently uses facial recognition software and automated clearance for some parts of immigration.

[…]

Passengers who pre-submit required declarations online can already get through Singapore’s current automated immigration lanes in 20 to 30 seconds once they arrive at the front of the queue. It’s one reason Changi has a reputation for being quick to navigate.

[…]

According to CAG, the airport handled 5.12 million passenger movements in June 2023 alone. This figure is expected to only increase as it currently stands at 88 percent of pre-COVID levels and the government sees such efficiency as critical to managing the impending growth.

But the reasons for biometric clearance go beyond a boom in travelers. With an aging population and shrinking workforce, Singapore’s Immigration & Checkpoints Authority (ICA) will have “to cope without a significant increase in manpower,” said Teo.

Additionally, security threats including pandemics and terrorism call for Singapore to “go upstream” on immigration measures, “such as the collection of advance passenger and crew information, and entry restrictions to be imposed on undesirable foreigners, even before they arrive at our shores,” added the minister.

This collection and sharing of biometric information is what enables the passport-free immigration process – passenger and crew information will need to be disclosed to the airport operator to use for bag management, access control, gate boarding, duty-free purchases, as well as tracing individuals within the airport for security purposes.

The shared biometrics will serve as a “single token of authentication” across all touch points.

Members of Singapore’s parliament have raised concerns about shifting to universal automated clearance, including data privacy, and managing technical glitches.

According to Teo, only Singaporean companies will be allowed ICA-related IT contracts, vendors will be given non-disclosure agreements, and employees of such firms must undergo security screening. Traveler data will be encrypted and transported through data exchange gateways.

As for who will protect the data, that role goes to CAG, with ICA auditing its compliance.

In case of disruptions that can’t be handled by an uninterruptible power supply, off-duty officers will be called in to go back to analog.

And even though the ministry is pushing universal coverage, there will be some exceptions, such as those who are unable to provide certain biometrics or are less digitally literate. Teo promised their clearance can be done manually by immigration officers.

Source: Singapore plans to scan your face instead of your passport • The Register

Data safety is a real issue here – how long will the data be collected and for what other purposes will it be used?

UK passport and immigration images database could be repurposed to catch shoplifters

Britain’s passport database could be used to catch shoplifters, burglars and other criminals under urgent plans to curb crime, the policing minister has said.

Chris Philp said he planned to integrate data from the police national database (PND), the Passport Office and other national databases to help police find a match with the “click of one button”.

But civil liberty campaigners have warned the plans would be an “Orwellian nightmare” that amount to a “gross violation of British privacy principles”.

Foreign nationals who are not on the passport database could also be found via the immigration and asylum biometrics system, which will be part of an amalgamated system to help catch thieves.

[…]

Until the new platform is created, he said police forces should search each database separately.

[…]

Emmanuelle Andrews, policy and campaigns manager at the campaign group, said: “Time and time again the government has relied on the social issue of the day to push through increasingly authoritarian measures. And that’s just what we’re seeing here with these extremely worrying proposals to encourage the police to scan our faces as we go to buy a pint of milk and trawl through our personal information.

“By enabling the police to use private dashcam footage, as well as the immigration and asylum system, and passport database, the government are turning our neighbours, loved ones, and public service officials into border guards and watchmen.

[…]

Silkie Carlo, director of Big Brother Watch, said: “Philp’s plan to subvert Brits’ passport photos into a giant police database is Orwellian and a gross violation of British privacy principles. It means that over 45 million of us with passports who gave our images for travel purposes will, without any kind of consent or the ability to object, be part of secret police lineups.

“To scan the population’s photos with highly inaccurate facial recognition technology and treat us like suspects is an outrageous assault on our privacy that totally overlooks the real reasons for shoplifting. Philp should concentrate on fixing broken policing rather than building an automated surveillance state.

“We will look at every possible avenue to challenge this Orwellian nightmare.”

Source: UK passport images database could be used to catch shoplifters | Police | The Guardian

Also, time and again we have seen that centralised databases are a really really bad idea – the data gets stolen and misused by the operators.

Firefox now has private browser-based website translation – no cloud servers required

Web browsers have had tools that let you translate websites for years. But they typically rely on cloud-based translation services like Google Translate or Microsoft’s Bing Translator.

The latest version of Mozilla’s Firefox web browser does things differently. Firefox 118 brings support for Fullpage Translation, which can translate websites entirely in your browser. In other words, everything happens locally on your computer without any data sent to Microsoft, Google, or other companies.

Here’s how it works. Firefox will notice when you visit a website in a supported language that’s different from your default language, and a translate icon will show up in the address bar.

Tap that icon and you’ll see a pop-up window that asks what languages you’d like to translate from and to. If the browser doesn’t automatically detect the language of the website you’re visiting, you can set these manually.

Then click the “Translate” button, and a moment later the text on the page should be visible in your target language. If you’d prefer to go back to the original language, just tap the translate icon again and choose the option that says “show original.”

You can also tap the settings icon in the translation menu and choose to “always translate” or “never translate” a specific language so that you won’t have to manually invoke the translation every time you visit sites in that language.

Now for the bad news: Firefox Fullpage Translation only supports 9 languages so far:

  • Bulgarian
  • Dutch
  • English
  • French
  • German
  • Italian
  • Polish
  • Portuguese
  • Spanish

[…]

Source: Firefox 118 brings browser-based website translation (no cloud servers required… for a handful of supported languages) – Liliputing
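Firefox’s engine grew out of the Bergamot project and runs the translation models inside the browser itself. For a feel of what “local, no cloud” translation means in practice, here is a minimal sketch using the open-source Hugging Face transformers library and a Helsinki-NLP model (my choice for illustration, not what Firefox ships): once the model files are downloaded, everything runs on your own machine.

```python
# Illustrative only: local machine translation with an open model, no cloud API calls.
# Assumes `pip install transformers torch sentencepiece`; the model downloads once, then works offline.
from transformers import MarianMTModel, MarianTokenizer

MODEL = "Helsinki-NLP/opus-mt-de-en"  # German -> English

tokenizer = MarianTokenizer.from_pretrained(MODEL)
model = MarianMTModel.from_pretrained(MODEL)

def translate(texts):
    """Translate a list of strings entirely on the local machine."""
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    outputs = model.generate(**batch)
    return [tokenizer.decode(t, skip_special_tokens=True) for t in outputs]

print(translate(["Guten Morgen, wie geht es dir?"]))
```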

Feds Probing Tesla For Lying About EV Ranges, Bullshitting Customers Who Complained

Back in July, Reuters released a bombshell report documenting how Tesla not only spent a decade falsely inflating the range of their EVs, but created teams dedicated to bullshitting Tesla customers who called in to complain about it. If you recall, Reuters noted how these teams would have a little, adorable party every time they got a pissed off user to cancel a scheduled service call. Usually by lying to them:

“Inside the Nevada team’s office, some employees celebrated canceling service appointments by putting their phones on mute and striking a metal xylophone, triggering applause from coworkers who sometimes stood on desks. The team often closed hundreds of cases a week and staffers were tracked on their average number of diverted appointments per day.”

The story managed to stay in the headlines for all of a day or two, quickly supplanted by gossip surrounding a non-existent Elon Musk vs. Mark Zuckerberg fist fight.

But here in reality, Tesla’s routine misrepresentation of their product (and almost joyous gaslighting of their paying customers) has caught the eye of federal regulators, who are now investigating the company for fraudulent behavior:

“federal prosecutors have opened a probe into Tesla’s alleged range-exaggerating scheme, which involved rigging its cars’ software to show an inflated range projection that would then abruptly switch to an accurate projection once the battery dipped below 50% charged. Tesla also reportedly created an entire secret “diversion team” to dissuade customers who had noticed the problem from scheduling service center appointments.”
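To make the alleged mechanism concrete, here is a purely illustrative sketch of the kind of two-mode range display Reuters describes. None of the numbers or names come from Tesla; apart from the 50% threshold in the allegation, everything is made up for illustration.

```python
# Purely hypothetical illustration of the two-mode range display described in the reporting.
# The 50% threshold comes from the allegation; every other number here is invented.
def displayed_range_miles(battery_pct: float,
                          rated_range_miles: float = 300.0,
                          optimism_factor: float = 1.25) -> float:
    """Return the range shown to the driver for a given state of charge."""
    realistic = rated_range_miles * (battery_pct / 100.0)
    if battery_pct > 50.0:
        return realistic * optimism_factor   # rosy projection while the battery is comfortable
    return realistic                         # abrupt switch to the accurate figure below 50%

# The displayed number drops sharply as the charge crosses the threshold.
for pct in (60, 55, 51, 50, 49):
    print(f"{pct}% charge -> {displayed_range_miles(pct):.0f} miles shown")
```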

This pretty clearly meets the threshold definition of “unfair and deceptive” under the FTC Act, so this shouldn’t be that hard of a case. Of course, whether it results in any sort of meaningful penalties or fines is another matter entirely. It’s very clear Musk historically hasn’t been very worried about what’s left of the U.S. regulatory and consumer protection apparatus holding him accountable for… anything.

Still, it’s yet another problem for a company that’s facing a flood of new competitors with an aging product line. And it’s another case thrown in Tesla’s lap on top of the glacially-moving inquiry into the growing pile of corpses caused by obvious misrepresentation of under-cooked “self driving” technology, and an investigation into Musk covertly using Tesla funds to build himself a glass mansion.

Source: Feds Probing Tesla For Lying About EV Ranges, Bullshitting Customers Who Complained | Techdirt

Philips Hue / Signify Ecosystem: ‘Collapsing Into Stupidity’

The Philips Hue ecosystem of home automation devices is “collapsing into stupidity,” writes Rachel Kroll, veteran sysadmin and former production engineer at Facebook. “Unfortunately, the idiot C-suite phenomenon has happened here too, and they have been slowly walking down the road to full-on enshittification.” From her blog post:

I figured something was up a few years ago when their iOS app would block entry until you pushed an upgrade to the hub box. That kind of behavior would never fly with any product team that gives a damn about their users — want to control something, so you start up the app? Forget it, we are making you placate us first! How is that user-focused, you ask? It isn’t.

Their latest round of stupidity pops up a new EULA and forces you to take it or, again, you can’t access your stuff. But that’s just more unenforceable garbage, so who cares, right? Well, it’s getting worse.

It seems they are planning on dropping an update which will force you to log in. Yep, no longer will your stuff Just Work across the local network. Now it will have yet another garbage “cloud” “integration” involved, and they certainly will find a way to make things suck even worse for you.

If you have just the lights and smart outlets, Kroll recommends deleting the units from the Hue Hub and adding them to an IKEA Dirigera hub. “It’ll run them just fine, and will also export them to HomeKit so that much will keep working as well.” That said, it’s not a perfect solution. You will lose motion sensor data, the light level, the temperature of that room, and the ability to set custom behaviors with those buttons.

“Also, there’s no guarantee that IKEA won’t hop on the train to sketchville and start screwing over their users as well,” adds Kroll.

Source: Is the Philips Hue Ecosystem ‘Collapsing Into Stupidity’? – Slashdot