Sacramento Sheriff is sharing license plate reader data with anti-abortion states, records show

In 2015, Democratic Elk Grove Assemblyman Jim Cooper voted for Senate Bill 34, which restricted law enforcement from sharing automated license plate reader (ALPR) data with out-of-state authorities. In 2023, now-Sacramento County Sheriff Cooper appears to be doing just that.

The Electronic Frontier Foundation (EFF), a digital rights group, has sent Cooper a letter requesting that the Sacramento County Sheriff’s Office cease sharing ALPR data with out-of-state agencies that could use it to prosecute someone for seeking an abortion.

According to documents that the Sheriff’s Office provided EFF through a public records request, it has shared license plate reader data with law enforcement agencies in states that have passed laws banning abortion, including Alabama, Oklahoma and Texas.

[…]

Schwartz said that a sheriff in Texas, Idaho or any other state with an abortion ban on the books could use that data to track people’s movements around California, knowing where they live, where they work and where they seek reproductive medical care, including abortions.

The Sacramento County Sheriff’s Office isn’t the only one sharing that data; in May, EFF released a report showing that 71 law enforcement agencies in 22 California counties — including Sacramento County — were sharing such data. The practice is in violation of a 2015 law that states “a (California law enforcement) agency shall not sell, share, or transfer ALPR information, except to another (California law enforcement) agency, and only as otherwise permitted by law.”

[…]

 

Source: Sacramento Sheriff is sharing license plate reader data with anti-abortion states, records show

Comedian, novelists sue OpenAI for reading books. Maybe we should sue people for reading them as well?

Award-winning novelists Paul Tremblay and Mona Awad, and, separately, comedian Sarah Silverman and novelists Christopher Golden and Richard Kadrey, have sued OpenAI, accusing the startup of training ChatGPT on their books without consent in violation of copyright law.

The lawsuits, both filed in the US District Court for the Northern District of California, say ChatGPT generates accurate summaries of their books, and highlight this as evidence that the software was trained on their work.

[…]

In the second suit, Silverman et al [PDF] make similar claims.

[…]

OpenAI trains its large language models by scraping text from the internet, and although it hasn’t revealed exactly what resources it has swallowed up, the startup has admitted to training its systems on hundreds of thousands of books protected by copyright and stored on websites like Sci-Hub or Bibliotik.

[…]

Source: Comedian, novelists sue OpenAI for scraping books • The Register

The problem, though, is that people read books too. And they can (and do) create accurate summaries of them. What is worse, the creativity shown by people is demonstrably influenced by the books, art, dance, etc. that they have ingested. So maybe people should be banned from reading books as well under copyright?

Amazon claims it isn’t a “Very Large Online Platform” to evade EU rules

Amazon doesn’t want to comply with Europe’s Digital Services Act, and to avoid the rules the company is arguing that it doesn’t meet the definition of a Very Large Online Platform under EU law. Amazon filed an appeal at the EU General Court to challenge the European Commission decision that Amazon meets the criteria and must comply with the new regulations.

“We agree with the EC’s objective and are committed to protecting customers from illegal products and content, but Amazon doesn’t fit this description of a ‘Very Large Online Platform’ (VLOP) under the DSA and therefore should not be designated as such,” Amazon said in a statement provided to Ars today.

[…]

Amazon argued that the new law is supposed to “address systemic risks posed by very large companies with advertising as their primary revenue and that distribute speech and information,” and not businesses that are primarily retail-based. “The vast majority of our revenue comes from our retail business,” Amazon said.

Amazon claims to be “unfairly singled out”

Amazon also claims it’s unfair that some retailers with larger businesses in individual countries weren’t on the list of 19 companies that must comply with the Digital Services Act. The rules only designate platforms with over 45 million active users in the EU as of February 17.

Amazon said it is “not the largest retailer in any of the EU countries where we operate, and none of these largest retailers in each European country has been designated as a VLOP. If the VLOP designation were to be applied to Amazon and not to other large retailers across the EU, Amazon would be unfairly singled out and forced to meet onerous administrative obligations that don’t benefit EU consumers.”

Those other companies Amazon referred to include Poland’s Allegro and the Dutch Bol.com, according to a Bloomberg report. Neither of those platforms appears to have at least 45 million active users.

[…]

In April, Europe announced its designation of 19 large online platforms, which are mostly US-based companies. Five are run by Google, specifically YouTube, Google Search, the Google Play app and digital media store, Google Maps, and Google Shopping. Meta-owned Facebook and Instagram are on the list, as are Amazon’s online store, Apple’s App Store, Microsoft’s Bing search engine, TikTok, Twitter, and Wikipedia.

Listed platforms also include Alibaba AliExpress, Booking.com, LinkedIn, Pinterest, and Snapchat. The other platform is German online retailer Zalando, which was the first company to sue the EC in an attempt to get removed from the list.

Companies have until August 25 to comply and could face fines of up to 6 percent of their annual revenue if they don’t. Companies will have to submit annual risk assessments and risk mitigation plans that are subject to independent audits and oversight by the European Commission.

“Platforms will have to identify, analyze and mitigate a wide array of systemic risks ranging from how illegal content and disinformation can be amplified on their services, to the impact on the freedom of expression and media freedom,” the EC said in April. “Similarly, specific risks around gender-based violence online and the protection of minors online and their mental health must be assessed and mitigated.” One new rule bans advertisements that target users based on sensitive data such as ethnic origin, political opinions, or sexual orientation.

The EC also said that users must be given “clear information on why they are recommended certain information and will have the right to opt-out from recommendation systems based on profiling.” Users must have the ability “to report illegal content easily and platforms have to process such reports diligently.” Amazon and the other platforms must also “provide an easily understandable, plain-language summary of their terms and conditions, in the languages of the Member States where they operate.”

[…]

 

Source: Amazon claims it isn’t a “Very Large Online Platform” to evade EU rules | Ars Technica

Poor poor Amazon – the spy company monopolist marketplace that rips off the retailers in its own market!

An Alarming 87 Percent Of Retro Games Are Being Lost To Time

[…] The Video Game History Foundation (VGHF) partnered with the Software Preservation Network, an organization intent on advancing software preservation through collective action, to release a report on the disappearance of classic video games. “Classic” in this case has been defined as all games released before 2010, which the VGHF noted is the “year when digital game distribution started to take off.”

The status of physical video games

In the study, the two groups found that 87 percent of these classic games are not in release and are considered critically endangered due to their widespread unavailability.

[…]

“For accessing nearly 9 in 10 classic games, there are few options: Seek out and maintain vintage collectible games and hardware, travel across the country to visit a library, or… piracy,” VGHF co-director Kelsey Lewin wrote.

[…]

The study claims that just 13 percent of game history is archived in libraries right now. And that’s part of the dilemma here. According to a March 2023 Ars Technica report, the Digital Millennium Copyright Act (DMCA) largely prevents folks from making and distributing copies of any DRM-protected digital work. While the U.S. Copyright Office has issued exemptions to those rules so that libraries and researchers can archive digital material, video games are explicitly left out, which makes it nigh impossible for anyone to effectively study game history.

“Imagine if the only way to watch Titanic was to find a used VHS tape, and maintain your own vintage equipment so that you could still watch it,” Lewin wrote. “And what if no library, not even the Library of Congress, could do any better—they could keep and digitize that VHS of Titanic, but you’d have to go all the way there to watch it.”

[…]

Though not surprised, she was still alarmed by the “flimsy” ways in which games disappear, pointing to Antstream Arcade, which houses a plethora of games from the Commodore 64 to the Game Boy that could be lost to time should it close up shop. The Nintendo eShop is a more mainstream example.

“When the eShop shut down the availability of the Game Boy library, [the number of available Game Boy games] went from something like 11 percent to 4.5 percent,” Lewin said. “The company wiped out half of the availability of the library of Game Boy games just by shutting down the Nintendo eShop.”

[…]

Lewin noted that although libraries are allowed to do a lot of things “by being libraries [and] preservation institutions,” the Entertainment Software Association (ESA) has consistently lobbied against game preservation efforts such as copyright permissions and allowing the rental of digital video games.

“The ESA has basically opposed all of these new proposed exemptions,” Lewin said. “They’ve just been like, ‘No, that will hurt our bottom line,’ or, ‘That will hurt the industry’s bottom line.’ The ESA also says the industry is doing plenty to keep classic games in release, pointing to this thriving reissue market. And that’s true; there is a thriving reissue market. It’s just that it only covers 13 percent of video games, and that’s not likely to get any better any time soon.”

The study will be used in a 2024 copyright hearing to ask for exemptions for games. Lewin said she’s hopeful that progress will be made, suggesting that, should the hearing go well, games could be available on digital library apps like Libby. You can read the full 50-page study on the open repository Zenodo.

Source: An Alarming 87 Percent Of Retro Games Are Being Lost To Time

France Allows Police to Remotely Turn On GPS, Camera, Audio on Phones

Amidst ongoing protests in France, the country has just passed a new bill that will allow police to remotely access suspects’ cameras, microphones, and GPS on cell phones and other devices.

As reported by Le Monde, the bill has been criticized by the French people as a “snoopers’ charter” that allows police unfettered access to the location of citizens. Moreover, police can activate cameras and microphones to take video and audio recordings of suspects. The bill will reportedly only apply to suspects in crimes that are punishable by a minimum of five years in jail.

[…]

French politicians added an amendment that requires judicial approval for any surveillance conducted under the scope of the bill and limits the duration of surveillance to six months.

[…]

In 2021, The New York Times reported that the French Parliament passed a bill that would expand the French police force’s ability to monitor civilians using drones. French President Emmanuel Macron argued at the time that the bill was meant to protect police officers from increasingly violent protestors.

[…]

 

Source: France Passes Bill Allowing Police to Remotely Access Phones

Amazon’s iRobot Roomba acquisition under formal EU investigation

European Union regulators have opened an official investigation into Amazon’s proposed $1.7 billion acquisition of iRobot, the company behind the popular Roomba lineup of robot vacuum cleaners.

In a press release, the European Commission said it’s concerned that “the transaction would allow Amazon to restrict competition in the market for robot vacuum cleaners (‘RVCs’) and to strengthen its position as online marketplace provider.” The European Commission is also looking at how getting access to iRobot users’ data may give Amazon an advantage “in the market for online marketplace services to third-party sellers (and related advertising services) and / or other data-related markets.”

[…]

Source: Amazon’s iRobot Roomba acquisition under formal EU investigation

Do you really want Amazon to know the layout of the interior of your home?

People Are Using Forged Court Orders To Disappear Content They Don’t Like via the DMCA

Copyright is still high on the list of censorial weapons. When you live in (or target) a country that protects free speech rights and offers intermediaries immunity via Section 230, you quickly surmise there’s a soft target lying between the First Amendment and the CDA.

That soft target is the DMCA. Thanks to plenty of lived-in experience, services serving millions or billions of users have decided it’s far easier to cater to (supposed) copyright holders than protect their other millions (or billions!) of users from abusive DMCA takedown demands.

There’s no immunity when it comes to the DMCA. There’s only the hope that US courts (should they be actually involved) will view good faith efforts to remove infringing content as acceptable preventative efforts.

But terrible people who neither respect the First Amendment nor the Communications Decency Act have found exploitable loopholes to disappear content they don’t like. And it’s always the worst people doing this. An entire cottage industry of “reputation management” firms has calcified into a so-called business model that views anything as acceptable until a court starts handing down sanctions.

“Cursory review” is the name of the game. Bullshit is fed to DMCA inboxes in hopes the people overseeing millions (or billions!) of pieces of uploaded content won’t spend too much time vetting takedown requests. When the initial takedown requests fail, bullshit artists (some of them hired!) decide to exploit the public sector.

Bogus litigation involving nonexistent defendants gives bad actors the legal paperwork they need to silence their critics. Bullshit default judgments are handed to bad faith plaintiffs by judges who can’t be bothered to do anything other than scan the docket to ensure at least some filings exist.

At the bottom of this miserable rung are the people who can’t even exploit these massively exploitable holes effectively. The bottom dwellers do what’s absolutely illegal, rather than just legally questionable. They forge court orders to demand takedowns of content they don’t like.

Eugene Volokh of the eponymous Volokh Conspiracy has plenty of experience with every variety of abusive takedown action listed above. In fact, he’s published an entire paper about these multiple levels of bullshit in the Utah Law Review.

Ironically, it’s that very paper that’s triggered the latest round of bogus takedown demands.

Yesterday, I saw that someone tried to use a different scheme, which I briefly mentioned in the article (pp. 300-01), to try to deindex the Utah Law Review version of my article: They sent a Digital Millennium Copyright Act notice to Google claiming that they owned the copyright in my article, and that the Utah Law Review version was an unauthorized copy of the version that I had posted on my own site:

Welcome to the party, “I Liam.”

But who do you represent? Volokh has some idea(s).

The submitter, therefore, asked Google to “deindex” that page—remove it from Google’s indexes, so that people searching for “mergeworthrx” or “stephen cichy” or “anthony minnuto” (another name mentioned on the page) wouldn’t see it.

So what prompted Google to remove this content that “I Liam” wished to disappear on behalf of his benefactors (presumably “mergeworthrx,” “stephen cichy,” and “anthony minnuto”)?

Well, it was a court order — one that was faked by whoever “I Liam” is:

Except there was no court order. Case No. 13-13548 CA was a completely different case. Celia Ampel, a reporter for the South Florida Daily Business Review, was never sued by MergeworthRX. The file submitted to Google was a forgery.

And definitely not an anomaly:

It was one of over 90 documents submitted to Google (and to other hosting platforms) that I believe to be forgeries. 

[…]

Source: Terrible People Are Still Using Forged Court Orders To Disappear Content They Don’t Like | Techdirt

The writer goes on to say that it’s terrible that there are terrible people and that you can’t blame Google, when there is definitely a case to be made that Google can indeed do more due diligence. When the DMCA came into effect, people noted that it was ripe for abuse, and so it happened. Alternatives were suggested but discarded. The DMCA itself is very, very poor law and should be repealed: it protects something we shouldn’t be protecting in the first place, and does so in a way that allows people to take down content more or less at random with almost no recourse.

$6.3b US firm TeleSign breached GDPR, reputation-scoring half the planet’s mobile users

A US-based fraud prevention company is in hot water over allegations it not only collected data from millions of EU citizens and processed it using automated tools without their knowledge, but that it did so in the United States, all in violation of the EU’s data protection rules.

The complaint was filed by Austrian privacy advocacy group noyb, helmed by lawyer Max Schrems, and it doesn’t pull any punches in its claims that TeleSign, through its former Belgian parent company BICS, secretly collected data on cellphone users around the world.

That data, noyb alleges, was fed into an automated system that generates “reputation scores” that TeleSign sells to its customers, which include TikTok, Salesforce, Microsoft and AWS, among others, for verifying the identity of the person behind a phone number and preventing fraud.

BICS, which acquired TeleSign in 2017, describes itself as “a global provider of international wholesale connectivity and interoperability services,” in essence operating as an interchange for various national cellular networks. Per noyb, BICS operates in more than 200 countries around the world and “gets detailed information (e.g. the regularity of completed calls, call duration, long-term inactivity, range activity, or successful incoming traffic) [on] about half of the worldwide mobile phone users.”

That data is regularly shared with TeleSign, noyb alleges, without any notification to the customers whose data is being collected and used.

[…]

In its complaint, an auto-translated English version of which was reviewed by The Register, noyb alleges that TeleSign is in violation of the GDPR’s provisions that ban use of automated profiling tools, as well as rules that require affirmative consent be given to process EU citizens’ data.

[…]

When BICS acquired TeleSign in 2017, TeleSign began to fall under the partial control of BICS’ parent company, Belgian telecom giant Proximus, which had held a partial stake in BICS since spinning it off from its own operations in 1997.

In 2021, Proximus bought out BICS’ other shareholders, making it the sole owner of both the telecom interchange and TeleSign.

With that in mind, noyb is also leveling charges against Proximus and BICS. In its complaint, noyb said Proximus was asked by EU citizens from various countries to provide records of the data TeleSign processed, as is their right under Article 15 of the GDPR.

The complainants weren’t given the information they requested, says noyb, which claims that what was handed over was simply a template copy of the EU’s standard contractual clause (SCC), which has been used by businesses transmitting data between the EU and US while the pair try to work out data transfer rules that Schrems won’t get struck down in court.

[…]

Noyb is seeking cessation of all data transfers from BICS to TeleSign and of the processing of said data, and is requesting deletion of all unlawfully transmitted data. It’s also asking Belgian data protection authorities to fine Proximus; noyb said the fine could reach as high as €236 million ($257 million) – a mere 4 percent of Proximus’s global turnover.

[…]

Source: US firm ‘breached GDPR’ by reputation-scoring EU citizens • The Register

This firm is absolutely massive, yet it’s a smaller part of BICS and chances are that you’ve never ever heard of either of them!

Broadcom squeezed Samsung, now South Korea’s squeezing back

As the Commission explained in a Tuesday adjudication, Broadcom and Samsung were in talks over a long-term supply agreement when the American chipmaker demanded the Korean giant sign or it would suspend shipments and support services.

Broadcom also wanted Samsung to commit to spending over $760 million a year, to make up the difference for any shortfalls, and not to buy from rivals.

With the market for the components it needs tight, Samsung reportedly signed. Then, when a certain viral pandemic cruelled its business, the giant conglomerate found itself having to buy parts it didn’t need. The chaebol estimates the deal cost it millions.

News of the deal eventually reached the regulator, which in 2022 asked Broadcom to propose a remedy – a common method of dispute resolution in South Korea.

Broadcom proposed a $15.5 million fund to stimulate South Korea’s small semiconductor outfits, plus extra support for Samsung.

On Tuesday, the Commission decided that’s not a reasonable restitution because it doesn’t include compensation for the impacted parties.

That’s bad news for Broadcom, because it means the regulator will now escalate matters – first by determining if the chipmaker broke local laws and then by considering a different penalty.

South Korea is protective of its local businesses – even giants like Samsung that are usually capable of fending for themselves. Broadcom reps will soon have some tricky-to-negotiate meetings on their agendas.

At least the corporation’s legal team has experience at this sort of thing. In 2018 it was probed by US authorities over contract practices, and in 2021 was forced to stop some anticompetitive practices. In 2022 it was in strife again – this time for allegedly forcing its customers to sign exclusive supply contracts.

The serial acquirer also lost a regulatory rumble over its attempted acquisition of Qualcomm, and is currently trying to explain why its proposed acquisition of VMware won’t harm competition.

Now it awaits South Korea’s wrath – and perhaps Samsung’s too.

Source: Broadcom squeezed Samsung, now South Korea’s squeezing back • The Register

Fitbit Privacy & security guide – no one told me it would send my data to the US

As of January 14, 2021, Google officially became the owner of Fitbit. That worried many privacy conscious users. However, Google promised that “Fitbit users’ health and wellness data won’t be used for Google ads and this data will be kept separate from other Google ad data” as part of the deal with global regulators when they bought Fitbit. This is good.

And Fitbit seems to do an OK job with privacy and security. It de-identifies the data it collects so it’s (hopefully) not personally identifiable. We say hopefully because, depending on the kind of data, it’s been found to be pretty easy to de-anonymize these data sets and track down an individual’s patterns, especially with location data. So be aware that with Fitbit—or any fitness tracker—you are strapping on a device that tracks your location, heart rate, sleep patterns, and more. That’s a lot of personal information gathered in one place.
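To see why re-identification is so easy, here is a toy sketch with entirely synthetic data (the pseudonyms, places, and known facts are all invented for illustration). The well-known research finding is that a handful of spatio-temporal points is enough to single out most individuals in a “de-identified” location dataset:

```python
# Illustrative only: why "de-identified" location traces re-identify easily.
# All data below is synthetic; research suggests ~4 spatio-temporal points
# uniquely identify most people in real mobility datasets.

# "Anonymized" dataset: pseudonym -> set of (hour, place) observations.
traces = {
    "user-7f3a": {(7, "gym"), (9, "office"), (22, "home-A")},
    "user-c21d": {(8, "cafe"), (9, "office"), (22, "home-B")},
    "user-e90b": {(7, "pool"), (9, "office"), (23, "home-A")},
}

# Two facts an observer might know about the target (seen at the gym at 7am,
# lives at address A) are enough to pin down a single pseudonym.
known_points = {(7, "gym"), (22, "home-A")}

matches = [pid for pid, obs in traces.items() if known_points <= obs]
print(matches)  # ['user-7f3a'] -- two side-channel facts, one unique match
```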

What is not good is what can happen with all this very personal health data if others aren’t careful. A recent report showed that health data for over 61 million fitness tracker users, including both Fitbit and Apple, was exposed when a third-party company that allowed users to sync their health data from their fitness trackers did not secure the data properly. Personal information such as names, birthdates, weight, height, gender, and geographical location for Fitbit and other fitness-tracker users was left exposed because the company didn’t password protect or encrypt their database. This is a great reminder that yes, while Fitbit might do a good job with their own security, anytime you sync or share that data with anyone else, it could be vulnerable.

[…]

The Fitbit app does allow for period tracking though. And the app, like most wearable tracking apps, collects a whole bunch of personal, body-related data that could potentially be used to tell if a user is pregnant.

Fortunately, Fitbit doesn’t sell this data but it does say it can share some personal data for interest-based advertising. Fitbit also can share your wellness data with other apps, insurers, and employers if you sign up for that and give your consent.

[…]

Fitbit isn’t the wearable we’d trust the most with our private reproductive health data. Apple, Garmin, and Oura all make us feel a bit more comfortable with this personal information.

Source: Fitbit | Privacy & security guide | Mozilla Foundation

So when installing one, it says it needs to process your data in the USA – which basically means it’s up for grabs for all and sundry. There is a reason the EU has the GDPR. But why does it need to send data anywhere other than your phone anyway?!

This is something that almost no-one mentions when you read the reviews on these things.

Amazon’s Ring used to spy on customers, children, FTC says in privacy settlement

A former employee of Amazon.com’s Ring doorbell camera unit spied for months on female customers in 2017 with cameras placed in bedrooms and bathrooms, the Federal Trade Commission said in a court filing on Wednesday when it announced a $5.8 million settlement with the company over privacy violations.

Amazon also agreed to pay $25 million to settle allegations it violated children’s privacy rights when it failed to delete Alexa recordings at the request of parents and kept them longer than necessary, according to a court filing in federal court in Seattle that outlined a separate settlement.

The FTC settlements are the agency’s latest effort to hold Big Tech accountable for policies critics say place profits from data collection ahead of privacy.

The FTC is also probing Amazon.com’s $1.7 billion deal to buy iRobot Corp (IRBT.O), which was announced in August 2022 in Amazon’s latest push into smart home devices, and has a separate antitrust probe underway into Amazon.

[…]

The FTC said Ring gave employees unrestricted access to customers’ sensitive video data: “As a result of this dangerously overbroad access and lax attitude toward privacy and security, employees and third-party contractors were able to view, download, and transfer customers’ sensitive video data.”

In one instance in 2017, an employee of Ring viewed videos made by at least 81 female customers and Ring employees using Ring products. “Undetected by Ring, the employee continued spying for months,” the FTC said.

[…]

In May 2018, an employee gave information about a customer’s recordings to the person’s ex-husband without consent, the complaint said. In another instance, an employee was found to have given Ring devices to people and then watched their videos without their knowledge, the FTC said.

[…]

rules against deceiving consumers who used Alexa. For example, the FTC complaint says that Amazon told users it would delete voice transcripts and location information upon request, but then failed to do so.

“The unlawfully retained voice recordings provided Amazon with a valuable database for training the Alexa algorithm to understand children, benefiting its bottom line at the expense of children’s privacy,” the FTC said.

Source: Amazon’s Ring used to spy on customers, FTC says in privacy settlement

The total settlement of $30m is insanely low considering the scale of the violations and the continuing nature of them.

Meta ordered to suspend Facebook EU data flows as it’s hit with record €1.2BN privacy fine under GDPR – 10 years and 3 court cases later

[…]

Today the European Data Protection Board (EDPB) announced that Meta has been fined €1.2 billion (close to $1.3 billion) — which the Board confirmed is the largest fine ever issued under the bloc’s General Data Protection Regulation (GDPR). (The prior record goes to Amazon, which was stung for $887 million for misusing customers’ data for ad targeting back in 2021.)

Meta’s sanction is for breaching conditions set out in the pan-EU regulation governing transfers of personal data to so-called third countries (in this case the US) without ensuring adequate protections for people’s information.

European judges have previously found U.S. surveillance practices to conflict with EU privacy rights.

[…]

The decision emerging out of the Irish DPC flows from a complaint made against Facebook’s Irish subsidiary almost a decade ago, by privacy campaigner Max Schrems — who has been a vocal critic of Meta’s lead data protection regulator in the EU, accusing the Irish privacy regulator of taking an intentionally long and winding path in order to frustrate effective enforcement of the bloc’s rulebook.

On the substance of his complaint, Schrems argues that the only sure-fire way to fix the EU-U.S. data flows doom loop is for the U.S. to grasp the nettle and reform its surveillance practices.

Responding to today’s order in a statement (via his privacy rights not-for-profit, noyb), he said: “We are happy to see this decision after ten years of litigation. The fine could have been much higher, given that the maximum fine is more than 4 billion and Meta has knowingly broken the law to make a profit for ten years. Unless US surveillance laws get fixed, Meta will have to fundamentally restructure its systems.”

[…]

This suggests the Irish regulator is routinely under-enforcing the GDPR on the most powerful digital platforms, and doing so in a way that creates additional problems for the efficient functioning of the regulation, since it strings out the enforcement process. (In the Facebook data flows case, for example, objections were raised to the DPC’s draft decision last August — so it’s taken some nine months to get from that draft to a final decision and suspension order now.) And, well, if you string enforcement out for long enough, you may allow enough time for the goalposts to be moved politically so that enforcement never actually needs to happen. Which, while demonstrably convenient for data-mining tech giants like Meta, does make a mockery of citizens’ fundamental rights.

As noted above, with today’s decision, the DPC is actually implementing a binding decision taken by the EDPB last month in order to settle ongoing disagreement over Ireland’s draft decision — so much of the substance of what’s being ordered on Meta today comes, not from Dublin, but from the bloc’s supervisor body for privacy regulators.

[…]

In further public remarks today, Schrems once again hit out at the DPC’s approach — accusing the regulator of essentially working to thwart enforcement of the GDPR. “It took us ten years of litigation against the Irish DPC to get to this result. We had to bring three procedures against the DPC and risked millions of procedural costs. The Irish regulator has done everything to avoid this decision but was consistently overturned by the European Courts and institutions. It is kind of absurd that the record fine will go to Ireland — the EU Member State that did everything to ensure that this fine is not issued,” he said.

[…]

Earlier reports have suggested the European Commission could adopt the new EU-U.S. data deal in July, although it has declined to provide a date for this since it says multiple stakeholders are involved in the process.

Such a timeline would mean Meta gets a new escape hatch to avoid having to suspend Facebook’s service in the EU, and can keep relying on this high-level mechanism so long as it stands.

If that’s how the next section of this tortuous complaint saga plays out, it will mean that a case against Facebook’s illegal data transfers which dates back almost ten years at this point will, once again, be left twisting in the wind — raising questions about whether it’s really possible for Europeans to exercise legal rights set out in the GDPR. (And, indeed, whether deep-pocketed tech giants, whose ranks are packed with well-paid lawyers and lobbyists, can be regulated at all.)

[…]

Analysis of five years of the GDPR, put out earlier this month by the Irish Council for Civil Liberties (ICCL), dubs the enforcement situation a “crisis” — warning that “Europe’s failure to enforce the GDPR exposes everyone to acute hazard in the digital age” and fingering Ireland’s DPA as a leading cause of enforcement failure against Big Tech.

And the ICCL points the finger of blame squarely at Ireland’s DPC.

“Ireland continues to be the bottleneck of enforcement: It delivers few draft decisions on major cross-border cases, and when it does eventually do so other European enforcers routinely vote by majority to force it to take tougher enforcement action,” the report argues — before pointing out that: “Uniquely, 75% of Ireland’s GDPR investigation decisions in major EU cases were overruled by majority vote of its European counterparts at the EDPB, who demand tougher enforcement action.”

The ICCL also highlights that nearly all (87%) of cross-border GDPR complaints to Ireland repeatedly involve the same handful of Big Tech companies: Google, Meta (Facebook, Instagram, WhatsApp), Apple, TikTok, and Microsoft. But it says many complaints against these tech giants never even get a full investigation — thereby depriving complainants of the ability to exercise their rights.

The analysis points out that the Irish DPC chooses “amicable resolution” to conclude the vast majority (83%) of cross-border complaints it receives (citing the oversight body’s own statistics) — further noting: “Using amicable resolution for repeat offenders, or for matters likely to impact many people, contravenes European Data Protection Board guidelines.”

[…]

The reality is that a patchwork of problems frustrates effective enforcement across the bloc, as you might expect with a decentralized oversight structure that must factor in linguistic and cultural differences across 27 Member States, varying opinions on how best to approach oversight, and big (and very personal) concepts like privacy, which may mean very different things to different people.

Schrems’ privacy rights not-for-profit, noyb, has been collating information on this patchwork of GDPR enforcement issues — which include things like under-resourcing of smaller agencies and a general lack of in-house expertise to deal with digital issues; transparency problems and information black holes for complainants; cooperation issues and legal barriers frustrating cross-border complaints; and all sorts of ‘creative’ interpretations of complaint “handling” — meaning nothing being done about a complaint still remains a common outcome — to name just a few of the issues it’s encountered.

[…]

Source: Meta ordered to suspend Facebook EU data flows as it’s hit with record €1.2BN privacy fine under GDPR | TechCrunch

The article contains the history of the court cases Schrems had to bring to get Ireland and the EU to do anything about data sharing problems – it’s an interesting read.

Online age verification is coming, and privacy is on the chopping block

A spate of child safety rules might make going online in a few years very different, and not just for kids. In 2022 and 2023, numerous states and countries are exploring age verification requirements for the internet, either as an implicit demand or a formal rule. The laws are positioned as a way to protect children on a dangerous internet. But the price of that protection might be high: nothing less than the privacy of, well, everyone.

Government agencies, private companies, and academic researchers have spent years seeking a way to solve the thorny question of how to check internet users’ ages without the risk of revealing intimate information about their online lives. But after all that time, privacy and civil liberties advocates still aren’t convinced the government is ready for the challenge.

“When you have so many proposals floating around, it’s hard to ensure that everything is constitutionally sound and actually effective for kids,” Cody Venzke, a senior policy counsel at the American Civil Liberties Union (ACLU), tells The Verge. “Because it’s so difficult to identify who’s a kid online, it’s going to prevent adults from accessing content online as well.”

In the US and abroad, lawmakers want to limit children’s access to two things: social networks and porn sites. Louisiana, Arkansas, and Utah have all passed laws that set rules for underage users on social media. Meanwhile, multiple US federal bills are on the table, and so are laws in other countries, like the UK’s Online Safety Bill. Some of these laws demand specific features from age verification tools. Others simply punish sites for letting anyone underage use them — a more subtle request for verification.

Online age verification isn’t a new concept. In the US, laws like the Children’s Online Privacy Protection Act (COPPA) already apply special rules to people under 13. And almost everyone who has used the internet — including major platforms like YouTube and Facebook — has checked a box to access adult content or entered a birth date to create an account. But there’s also almost nothing to stop them from faking it.

As a result, lawmakers are calling for more stringent verification methods. “From bullying and sex trafficking to addiction and explicit content, social media companies subject children and teens to a wide variety of content that can hurt them, emotionally and physically,” Senator Tom Cotton (R-AR), the backer of the Protect Kids Online Act, said. “Just as parents safeguard their kids from threats in the real world, they need the opportunity to protect their children online.”

Age verification systems fall into a handful of categories. The most common option is to rely on a third party that knows your identity — by directly validating a credit card or government-issued ID, for instance, or by signing up for a digital intermediary like Allpasstrust, the service Louisianans must use for porn access.

More experimentally, there are solutions that estimate a user’s age without an ID. One potential option, which is already used by Facebook and Instagram, would use a camera and facial recognition to guess whether you’re 18. Another, which is highlighted as a potential age verification solution by France’s National Commission on Informatics and Liberty (CNIL), would “guess” your age based on your online activity.

As pointed out by CNIL’s report on various online age verification options, all these methods have serious flaws. CNIL notes that identifying someone’s age with a credit card would be relatively easy since the security infrastructure is already there for online payments. But some adult users — especially those with lower incomes — may not have a card, which would seriously limit their ability to access online services. The same goes for verification methods using government-issued IDs. Children can also snap up a card that’s lying around the house to verify their age.

“As we think about kids’ online safety, we need to do so in a way that doesn’t enshrine and legitimize this very surveillance regime that we’re trying to push back on”

Similarly, the Congressional Research Service (CRS) has expressed concerns about online age verification. In a report it updated in March, the US legislature’s in-house research institute found that many kids aged 16 to 19 might not have a government-issued ID, such as a driver’s license, that they can use to verify their age online. While it says kids could use their student ID instead, it notes that they may be easier to fake than a government-issued ID. The CRS isn’t totally on board with relying on a national digital ID system for online age verification either, as it could “raise privacy and security concerns.”

Face-based age detection might seem like a quick fix to these concerns. And unlike a credit card — or full-fledged facial identification tools — it doesn’t necessarily tell a site who you are, just whether it thinks you’re over 18.

But these systems may not accurately identify the age of a person. Yoti, the facial analysis service used by Facebook and Instagram, claims it can estimate the age of people 13 to 17 years old as under 25 with 99.93 percent accuracy while identifying kids that are six to 11 years old as under 13 with 98.35 percent accuracy. This study doesn’t include any data on distinguishing between young teens and older ones, however — a crucial element for many young people.

Although Yoti claims its system has no “discernible bias across gender or skin tone,” previous research indicates that facial recognition services are less reliable for people of color, gender-nonconforming people, and people with facial differences or asymmetry. This would, again, unfairly block certain people from accessing the internet.

It also poses a host of privacy risks, as the companies that capture facial recognition data would need to ensure that this biometric data doesn’t get stolen by bad actors. UK civil liberties group Big Brother Watch argues that “face prints are as sensitive as fingerprints” and that “collecting biometric data of this scale inherently puts people’s privacy at risk.” CNIL points out that you could mitigate some risks by performing facial recognition locally on a user’s device — but that doesn’t solve the broader problems.

Inferring ages based on browsing history raises even more problems. This kind of inferential system has been implemented on platforms like Facebook and TikTok, both of which use AI to detect whether a user is under the age of 13 based on their activity on the platform. That includes scanning a user’s activity for “happy birthday” messages or comments that indicate they’re too young to have an account. But the system hasn’t been explored on a larger scale — where it could involve having an AI scan your entire browsing history and estimate your age based on your searches and the sites you interact with. That would amount to large-scale digital surveillance, and CNIL outright calls the system “intrusive.” It’s not even clear how well it would work.

In France, where lawmakers are working to restrict access to porn sites, CNIL worked with Ecole Polytechnique professor Olivier Blazy to develop a solution that attempts to minimize the amount of user information sent to a website. The proposed method involves using an ephemeral “token” that sends your browser or phone a “challenge” when accessing an age-restricted website. That challenge would then get relayed to a third party that can authenticate your age, like your bank, internet provider, or a digital ID service, which would issue its approval, allowing you to access the website.

The system’s goal is to make sure a user is old enough to access a service without revealing any personal details, either to the website they’re using or the companies and governments providing the ID check. The third party “only knows you are doing an age check but not for what,” Blazy explains to The Verge, and the website would not know which service verified your age nor any of the details from that transaction.
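To make the shape of that exchange concrete, here is a minimal sketch in Python. It is emphatically not the actual CNIL/Blazy protocol — the real proposal uses more sophisticated cryptography so that the site cannot even tell which service vouched for the user — and every class name and message format below is an illustrative assumption:

```python
# Minimal sketch of a challenge/token age-verification flow. Illustrative
# only: a real "double-blind" design hides even which attester signed.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


class AgeAttester:
    """Third party that already knows the user: a bank, ISP, or digital ID service."""

    def __init__(self, user_is_over_18: bool):
        self._key = Ed25519PrivateKey.generate()
        self._user_is_over_18 = user_is_over_18
        self.public_key = self._key.public_key()

    def attest(self, challenge: bytes):
        # The attester sees only a random nonce, not which site it is for.
        if self._user_is_over_18:
            return self._key.sign(b"over-18:" + challenge)
        return None


class RestrictedSite:
    """Age-restricted site: learns 'over 18', never who the user is."""

    def __init__(self, trusted_attester_keys):
        self._trusted = trusted_attester_keys

    def new_challenge(self) -> bytes:
        return os.urandom(32)  # ephemeral, single-use nonce

    def admit(self, challenge: bytes, token: bytes) -> bool:
        for key in self._trusted:
            try:
                key.verify(token, b"over-18:" + challenge)
                return True
            except InvalidSignature:
                continue
        return False


# Flow: site issues challenge -> user relays it to the attester ->
# attester returns a signed token -> user hands the token to the site.
attester = AgeAttester(user_is_over_18=True)
site = RestrictedSite(trusted_attester_keys=[attester.public_key])

challenge = site.new_challenge()
token = attester.attest(challenge)
print(site.admit(challenge, token))  # True
```

The toy version’s main leak is visible in the code: because the site checks the token against a list of attester public keys, it learns which attester signed — one of the things the real design is meant to hide as well.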

Blazy hopes this system can prevent very young children from accessing explicit content. But even with this complex solution, he acknowledges that users in France will be able to get around the method by using a virtual private network (VPN) to conceal their location. This is a problem that plagues nearly any location-specific verification system: as long as another government lets people access a site more easily, users can route their traffic through it. The only surefire solution would be draconian crackdowns on privacy tools that would dramatically compromise freedom online.

Some governments are trying to offer a variety of options and let users pick between them. A report from the European Parliament Think Tank, an in-house department that helps shape legislation, highlights an EU “browser-based interoperable age verification method” called euCONSENT, which will allow users to verify their identity online by choosing from a network of approved third-party services. Since this would give users the ability to choose the verification they want to use, this means one service might ask a user to upload an official government document, while another might rely on facial recognition.

To privacy and civil liberties advocates, none of these solutions are ideal. Venzke tells The Verge that implementing age verification systems encourages a system that collects our data and could pave the way for more surveillance in the future. “Bills that are trying to establish inferences about how old you are or who you are based on that already existing capitalistic surveillance, are just threatening to legitimize that surveillance,” Venzke says. “As we think about kids’ online safety, we need to do so in a way that doesn’t enshrine and legitimize this very surveillance regime that we’re trying to push back on.”

Age verification laws “are going to face a very tough battle in court”

The Electronic Frontier Foundation, a digital rights group, similarly argues that all age verification solutions are “surveillance systems” that will “lead us further towards an internet where our private data is collected and sold by default.”

Even some strong supporters of child safety bills have expressed concerns about making age verification part of them. Senator Richard Blumenthal (D-CT), one of the backers of the Kids Online Safety Act, objected to the idea in a call with reporters earlier this month. In a statement, he tells The Verge that “age verification would require either a national database or a goldmine of private information on millions of kids in Big Tech’s hands” and that “the potential for exploitation and misuse would be huge.” (Despite this, the EFF believes that KOSA’s requirements would inevitably result in age verification mandates anyway.)

In the US, it’s unclear whether online age verification would stand up under legal scrutiny at all. The US court system has already struck down efforts to implement online age verification several times in the past. As far back as 1997, the Supreme Court ruled parts of the 1996 Communications Decency Act unconstitutional, as it imposed restrictions on “knowing transmission of obscene or indecent messages” and required age verification online. More recently, a federal court found in 2016 that a Louisiana law, which required websites that publish “material harmful to minors” verify users’ ages, “creates a chilling effect on free speech.”

Vera Eidelman, a staff attorney with the ACLU, tells The Verge that existing age verification laws “are going to face a very tough battle in court.” “For the most part, requiring content providers online to verify the ages of their users is almost certainly unconstitutional, given the likelihood that it will make people uncomfortable to exercise their rights to access certain information if they have to unmask or identify themselves,” Eidelman says.

But concerns over surveillance still haven’t stopped governments around the globe, including here in the US, from pushing ahead with online age verification mandates. There are currently several bills in the pipeline in Congress that are aimed at protecting children online, including the Protecting Kids on Social Media Act, which calls for the test of a national age verification system that would block users under the age of 13 from signing up for social media. In the UK, where the heavily delayed Online Safety Bill will likely become law, porn sites would be required to verify users’ ages, while other websites would be forced to give users the option to do so as well.

Some proponents of online safety laws say they’re no different than having to hand over an ID to purchase alcohol. “We have agreed as a society not to let a 15-year-old go to a bar or a strip club,” said Laurie Schlegel, the legislator behind Louisiana’s age restriction law, after its passage. “The same protections should be in place online.” But the comparison misses vastly different implications for free speech and privacy. “When we think about bars or ordering alcohol at a restaurant, we just assume that you can hand an ID to a bouncer or a waiter, they’ll hand it back, and that’s the end of it,” Venzke adds. “Problem is, there’s no infrastructure on the internet right now to [implement age verification] in a safe, secure, private way that doesn’t chill people’s ability to get to constitutionally protected speech.”

Most people also spend a relatively small amount of their time in real-world adults-only spaces, while social media and online communications tools are ubiquitous ways of finding information and staying in touch with friends and family. Even sites with sexually explicit content — the target of Louisiana’s bill — could be construed to include sites offering information about sexual health and LGBTQ resources, despite claims by lawmakers that this won’t happen.

Even if many of these rules are shot down, the way we use the internet may never be the same again. With age checks awaiting us online, some people may find themselves locked out of increasingly large numbers of platforms — leaving the online world more closed-off than ever.

Source: Online age verification is coming, and privacy is on the chopping block – The Verge

The Supreme Court’s Warhol decision could have huge copyright implications for ‘fair use’, apparently made by blind judges

The Supreme Court has ruled that Andy Warhol infringed on the copyright of Lynn Goldsmith, the photographer who took the image that he used for his famous silkscreen of the musician Prince. Goldsmith won the justices over 7-2, with the court disagreeing with Warhol’s camp that his work was transformative enough to defeat any copyright claims. In the majority opinion written by Justice Sonia Sotomayor, she noted that “Goldsmith’s original works, like those of other photographers, are entitled to copyright protection, even against famous artists.”

Goldsmith’s story goes as far back as 1984, when Vanity Fair licensed her Prince photo for use as an artist reference. The photographer received $400 for a one-time use of her photograph, which Warhol then used as the basis for a silkscreen that the magazine published. Warhol then created 15 additional works based on her photo, one of which was sold to Condé Nast for another magazine story about Prince. The Andy Warhol Foundation (AWF) — the artist had passed away by then — got $10,000 for it, while Goldsmith didn’t get anything.

Typically, the use of copyrighted material for a limited and “transformative” purpose without the copyright holder’s permission falls under “fair use.” But what passes as “transformative” use can be vague, and that vagueness has led to numerous lawsuits. In this particular case, the court has decided that adding “some new expression, meaning or message” to the photograph does not constitute “transformative use.” Sotomayor said Goldsmith’s photo and Warhol’s silkscreen serve “substantially the same purpose.”

Indeed, the decision could have far-ranging implications for fair use and could influence future cases on what constitutes transformative work, especially now that we’re living in the era of content creators who may take inspiration from existing music and art. As CNN reports, Justice Elena Kagan strongly disagreed with her fellow justices, arguing that the decision would stifle creativity. She said the justices mostly just cared about the commercial purpose of the work and did not consider that the photograph and the silkscreen have different “aesthetic characteristics” and did not “convey the same meaning.”

“Both Congress and the courts have long recognized that an overly stringent copyright regime actually stifles creativity by preventing artists from building on the works of others. [The decision will] impede new art and music and literature, [and it will] thwart the expression of new ideas and the attainment of new knowledge. It will make our world poorer,” she wrote.

The justices who wrote the majority opinion, however, believe that it “will not impoverish our world to require AWF to pay Goldsmith a fraction of the proceeds from its reuse of her copyrighted work. Recall, payments like these are incentives for artists to create original works in the first place.”

Source: The Supreme Court’s Warhol decision could have huge copyright implications for ‘fair use’

Well, the two pictures are above. How you can argue that they are the same thing is quite beyond me.

European Media Freedom Act is a free pass to spread fake news, directly goes against DSA

“Disinformation is a threat to our democracies” is a statement with which virtually every political group in the European Parliament agrees. Many political statements have been made on the subject calling for more to be done to counter disinformation, especially since the Russian attack on Ukraine.

As part of that effort, the EU recently adopted the Digital Services Act (DSA), the legislation that many hope will provide the necessary regulatory framework to – at least partially – tackle the disinformation challenge. Unfortunately, there is a danger we might end up not seeing the positive results that the DSA promises to bring.

There are attempts to undermine the DSA with exemptions for media content in the European Media Freedom Act (EMFA), currently on the EU legislators’ table. This contains a measure which would effectively reverse the DSA provisions and prevent platforms like Twitter and Facebook from moderating content coming from anyone claiming to be a ‘media’. A very bad idea that was already, after much debate, rejected in the DSA.

Let’s see how this would work in practice. If any self-declared media writes that “The European Parliament partners with Bill Gates and George Soros to insert 5G surveillance chips into vaccines”, and this article is published on Twitter or Facebook, for instance, the platforms will first have to contact the media outlet. They would then wait for 24 or 48 hours before possibly adding a fact-check, or not being able to do it at all if some of the most recent amendments go through.

Those who have encountered such disinformation know that the first 24 hours are critical. As the old adage goes, “A lie gets halfway around the world before the truth puts on its boots”. Enabling such a back-and-forth exchange will only benefit the spread of disinformation, which can be further amplified by algorithms and become almost unstoppable.

Many journalists and fact-checkers have complained in the past that platforms were not doing enough to reduce the visibility of such viral disinformation. The Commission itself mentions that “Global online platforms act as gateways to media content, with business models that tend to disintermediate access to media services and amplify polarising content and disinformation.” Why on Earth would the EU then encourage further polarisation and disinformation by preventing content moderation?

This is not only a question of how such a carveout would benefit bogus media outlets. Some mainstream news sources with solid reputations and visibility can make mistakes, or are often the prime targets of those running disinformation campaigns. And quite successfully, as the recent example from the acclaimed investigations by Forbidden Stories has shown. In Hungary and Poland, state media that disseminate propaganda, in some cases even pro-Russian narratives, would be exempted from content moderation as well.

It might be counterintuitive, but the role of the media in disinformation and influence operations is huge. EU DisInfoLab sees it virtually in every single investigation that we do.

This loophole in the EMFA will make it hard if not impossible for the Commission to enforce the DSA against the biggest platforms. Potentially we would have to wait for the Court of Justice to solve the conflict between the two laws: the DSA mandating platforms to do content moderation and the EMFA legally preventing them from doing it. This would not be a good look for the EU legislature and until a decision of the Court comes, what will platforms do? They will likely stop moderating anything that comes close to being a ‘media’ just to avoid difficulties and costs.

We really don’t need any media exemption. There is no evidence that over-moderation of media content is a systemic issue, and the Commission’s impact assessment does not suggest it either. With the DSA, Europe has just adopted horizontal content moderation rules with media freedom and plurality at their core. Surely we should give the DSA a chance to work instead of declaring it a failure before it even becomes applicable.

A media exemption will not help media freedom and plurality; on the contrary, it will enable industrial-scale disinformation production, reduce the visibility of reputable media, and further erode society’s trust in it. Last year, Maria Ressa and Dmitry Muratov, journalists and 2021 Nobel Peace Prize laureates, called on the EU in their 10-point plan to address our information crisis to ensure that no media exemption be included in any tech or media legislation. That call was supported by more than 100 civil society organisations.

MEPs and member states working on the EMFA must see the risks of disinformation and other harmful content that any carveout for media would create. The decision they are facing is clear: either flood Europe with harmful content or prioritise the safety of online users by strongly enforcing horizontal content moderation rules in the DSA.

Source: European Media Freedom Act: No to any media exemption

Ed Sheeran, Once Again, Demonstrates How Modern Copyright Is Destroying, Rather Than Helping Musicians

To hear the recording industry tell the story, copyright is the only thing protecting musicians from poverty and despair. Of course, that’s always been a myth. Copyright was designed to benefit the middlemen and gatekeepers, such as the record labels, over the artists themselves. That’s why the labels have a long history of never paying artists.

But over the last few years, Ed Sheeran has been highlighting the ways in which (beyond the “who gets paid” aspect of all of this) modern copyright is stifling rather than incentivizing music creation — directly in contrast to what we’re told it’s supposed to be doing.

We’ve talked about Sheeran before, as he’s been sued repeatedly by people claiming that his songs sound too much like other songs. Sheeran has always taken a much more open approach to copyright and music, noting that kids pirating his music is how he became famous in the first place. He’s also stood up for kids who had accounts shut down via copyright claims for playing his music.

But the lawsuits have been where he’s really highlighted the absurdity of modern copyright law. After winning one of the lawsuits a year ago, he put out a heartfelt statement on how ridiculous the whole thing was. A key part:

There’s only so many notes and very few chords used in pop music. Coincidence is bound to happen if 60,000 songs are being released every day on Spotify—that’s 22 million songs a year—and there’s only 12 notes that are available.

In the aftermath of this, Sheeran has said that he’s now filming all of his recent songwriting sessions, just in case he needs to provide evidence that he and his songwriting partners came up with a song on their own, which is depressing in its own right.

[…]

With this latest lawsuit, it wasn’t actually a songwriter suing. It was a private equity firm that had purchased the rights from one of the co-writers (not Marvin Gaye) of Marvin Gaye’s hit song “Let’s Get It On.”

The claim over Thinking Out Loud was originally lodged in 2018, not by Gaye’s family but by investment banker David Pullman and a company called Structured Asset Sales, which has acquired a portion of the estate of Let’s Get It On co-writer Ed Townsend.

Thankfully, Sheeran won the case as the jury sided with him over Structured Asset Sales. Sheeran, once again, used the attention to highlight just how broken copyright is if these lawsuits are what’s coming out of it:

“I’m obviously very happy with the outcome of the case, and it looks like I’m not having to retire from my day job after all. But at the same time I’m unbelievably frustrated that baseless claims like this are able to go to court.

“We’ve spent the last eight years talking about two songs with dramatically different lyrics, melodies, and four chords which are also different, and used by songwriters every day all over the world. These chords are common building blocks used long before Let’s Get it On was written, and will be used to make music long after we’re all gone.

“They are in a songwriters’ alphabet, our toolkit, and should be there for all of us to use. No one owns them or the way that they are played, in the same way that no one owns the color blue.”

[…]

Source: Ed Sheeran, Once Again, Demonstrates How Modern Copyright Is Destroying, Rather Than Helping Musicians | Techdirt

Microsoft Tests Sticking Ads in Windows 11 Settings Menu as Well as Start Menu

[…]

In addition to ads in the Start menu, the latest test build for Windows 11 includes notices for a Microsoft 365 trial and more in the Settings menu.

On Friday, Windows beta user and routine leaker Albacore shared several screenshots of the latest Insider Preview build 23451. These shots come from the ultra-early Canary channel, and show a new “Home” tab in Settings that includes a notice to “Try Microsoft 365,” which appears to link to a free trial of the company’s office apps suite. There’s also a notice for OneDrive, and another asking users to finish setting up a Microsoft account, advertising that they can use the 365 apps and its cloud storage on the desktop. Yet another notice in the Accounts tab blasts users with a request to sign in to their Microsoft account.

These ads are very similar to the so-called “badging” in other preview builds, which shows up when users click on the Start menu. In that menu, the ads are more subtle, asking users to “Sign in to your Microsoft account” or advertising that they can “Use Microsoft 365 for free,” of course ignoring that users have to input their credit card information to access their free month of office apps.

[…]

Source: Microsoft Tests Sticking Ads in Windows 11 Settings Menu

Mercedes Locks Better EV Engine Performance, Which Your Car Already Has and You Paid For, Behind a Subscription

Last year BMW took ample heat for its plans to turn heated seats into a costly $18 per month subscription in numerous countries. As we noted at the time, BMW is already including the hardware in new cars and adjusting the sale price accordingly. So it’s effectively charging users a new, recurring fee to enable technology that already exists in the car and consumers already paid for.

The move portends a rather idiotic and expensive future for consumers that’s arriving faster than you’d think. Consumers unsurprisingly aren’t too keen on paying an added subscription for tech that already exists in the car and was already factored into the retail price, but the lure of consistent additional revenue they can nudge ever skyward pleases automakers and Wall Street alike.

Mercedes had already been toying with this idea in its traditional gas vehicles, but now says it’s considering making better EV engine performance an added subscription surcharge:

Mercedes-Benz electric vehicle owners in North America who want a little more power and speed can now buy 60 horsepower for just $60 a month or, on other models, 80 horsepower for $90 a month.

They won’t have to visit a Mercedes dealer to get the upgrade either, or even leave their own driveway. The added power, which will provide a nearly one second decrease in zero-to-60 acceleration, will be available through an over-the-air software patch.

Again, this is simply creating artificial restrictions and then charging consumers extra to bypass them. But this being America, there will indisputably be no shortage of dumb people with disposable income willing to burn money as part of a misguided craving for status.
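
The pattern is easy to sketch. Purely for illustration, a software-locked performance tier amounts to little more than an entitlement flag; every name and number below is hypothetical, since Mercedes’ actual firmware is not public:

```python
# A deliberately simple sketch of the "artificial restriction" pattern
# described above: the hardware can already deliver full power, and a
# software flag decides how much of it you get. All names and numbers
# are hypothetical, not Mercedes' actual implementation.

BASE_POWER_KW = 215      # what you get without the subscription
UNLOCKED_POWER_KW = 260  # what the same hardware delivers once the flag flips

def max_power_kw(entitlements: set[str]) -> int:
    # The over-the-air "upgrade" just adds a string to the account's
    # entitlement set; no hardware changes at all.
    return UNLOCKED_POWER_KW if "acceleration_boost" in entitlements else BASE_POWER_KW

print(max_power_kw(set()))                   # 215 -> the car you paid for
print(max_power_kw({"acceleration_boost"}))  # 260 -> the car you paid for, again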

If you don’t want to pay monthly, Mercedes will also let you pay a one time flat fee (usually several thousand dollars) to remove the artificial restrictions they’ve imposed on your engine. That’s, of course, creating additional upward pricing funnel efforts on top of the industry’s existing efforts to upsell you on a rotating crop of trims, tiers, and options you probably didn’t want.

It’s not really clear that regulators have any interest in cracking down on charging dumb people extra for something they already owned and paid for. After all, ripping off gullible consumers is effectively now considered little more than creative marketing by a notable segment of government “leaders” (see: regulatory apathy over misleading hidden fees in everything from hotels to cable TV).

[…]

Source: Mercedes Locks Better EV Engine Performance Behind Annoying Subscription Paywalls | Techdirt

So you pay for something which is in YOUR car but you can’t use it until you pay… more!

Microsoft is forcing Outlook and Teams to open links in Edge, ignore OS default browser settings

Microsoft Edge is a good browser, but for some reason Microsoft keeps trying to shove it down everyone’s throat and make it more difficult to use rivals like Chrome or Firefox. Microsoft has now started notifying IT admins that it will force Outlook and Teams to ignore the default web browser setting on Windows and open links in Microsoft Edge instead.

Reddit users have posted messages from the Microsoft 365 admin center that reveal how Microsoft is going to roll out this change. “Web links from Azure Active Directory (AAD) accounts and Microsoft (MSA) accounts in the Outlook for Windows app will open in Microsoft Edge in a single view showing the opened link side-by-side with the email it came from,” reads a message to IT admins from Microsoft.

While this won’t affect the default browser setting in Windows, it’s yet another part of Microsoft 365 and Windows that totally ignores your default browser choice for links. Microsoft already does this with the Widgets system in Windows 11 and even the search experience, where you’ll be forced into Edge if you click a link even if you have another browser set as default.
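
To make the distinction concrete, here is a minimal sketch (ours, not Microsoft’s code) of the difference between honoring the OS default-browser association and hard-coding a specific browser, which is effectively what Outlook and Teams are being changed to do:

```python
# Windows-only sketch contrasting the two behaviors described above;
# the URL is a placeholder.
import os
import subprocess

url = "https://example.com/some-link"

# Respecting user choice: ShellExecute resolves the user's default
# handler for https:// links and opens the link there.
os.startfile(url)

# Ignoring user choice: launch Microsoft Edge explicitly, bypassing
# the default-browser setting entirely.
subprocess.run(["cmd", "/c", "start", "msedge", url], check=True)
```

The second call succeeds no matter what the user picked under Settings > Apps > Default apps, which is precisely the complaint.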

IT admins aren’t happy, with many complaining in various threads on Reddit, as spotted by Neowin. As if Outlook weren’t enough, Microsoft says “a similar experience will arrive in Teams” soon, with web links from chats opening in Microsoft Edge side-by-side with Teams chats.

[…]

The notifications to IT admins come just weeks after Microsoft promised significant changes to the way Windows manages which apps open certain files or links by default. At the time Microsoft said it believed “we have a responsibility to ensure user choices are respected” and that it’s “important that we lead by example with our own first party Microsoft products.” Forcing people into Microsoft Edge and ignoring default browsers is anything but respecting user choice, and it’s gross that Microsoft continues to abuse this.

Microsoft tested a similar change to the default Windows 10 Mail app in 2018, in an attempt to force people into Edge for email links. That never came to pass, thanks to a backlash from Windows 10 testers. A similar change in 2020 saw Microsoft try to switch Chrome’s default search engine to Bing using the Office 365 installer, and IT admins weren’t happy then either.

[…]

Source: Microsoft is forcing Outlook and Teams to open links in Edge, and IT admins are angry – The Verge

OpenAI Threatens Popular GitHub Project With Lawsuit Over API Use

Anyone can use ChatGPT for free, but if you want to use GPT4, the latest language model, you have to either pay for ChatGPT Plus, pay for access to OpenAI’s API, or find another site that has incorporated GPT4 into its own free chatbot. There are sites that use OpenAI, such as Forefront and You.com, but what if you want to make your own bot and don’t want to pay for the API?

A GitHub project called GPT4free allows you to get free access to the GPT4 and GPT3.5 models by funneling queries through sites like You.com, Quora and CoCalc and giving you back the answers. The project is GitHub’s most popular new repo, getting 14,000 stars this week.

Now, according to Xtekky, the European computer science student who runs the repo, OpenAI has sent a letter demanding that he take the whole thing down within five days or face a lawsuit.

I interviewed Xtekky via Telegram, and he said he doesn’t think OpenAI should be targeting him since he isn’t connecting directly to the company’s API, but is instead getting data from other sites that are paying for their own API licenses. If the owners of those sites have a problem with his scripts querying them, they should approach him directly, he posited.

[…]

On the backend, GPT4Free is visiting various API urls that sites like You.com, an AI-powered search engine that employs OpenAI’s GPT3.5 model for its answers, use for their own queries. For example, the main GPT4Free script hits the URL https://you.com/api/streamingSearch, feeds it various parameters, and then takes the JSON it returns and formats it. The GPT4Free repo also has scripts that grab data from other sites such as Quora, Forefront, and TheB. Any enterprising developer could use these simple scripts to make their own bot.

“One could achieve the same [thing by] just opening tabs of the sites. I can open tabs of Phind, You, etc. on my browser and spam requests,” Xtekky said. “My repo just does it in a simpler way.”
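
In code, the approach he describes is strikingly small. Here is a minimal sketch, assuming a plain requests call against the endpoint the article names; the parameter name and response handling are our guesses, and the real repo parses a streamed JSON response:

```python
# Sketch of the technique described above, NOT the actual GPT4Free code:
# hit a site's internal search endpoint directly and read back the
# model's answer.
import requests

def ask_via_you_com(question: str) -> str:
    resp = requests.get(
        "https://you.com/api/streamingSearch",  # endpoint named in the article
        params={"q": question},                 # hypothetical parameter name
        headers={"User-Agent": "Mozilla/5.0"},  # look like an ordinary browser tab
        timeout=30,
    )
    resp.raise_for_status()
    # The real project parses the streamed JSON chunks into a clean answer;
    # here we just return the raw body.
    return resp.text

print(ask_via_you_com("Explain the DMCA in one paragraph."))
```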

All of the sites GPT4Free draws from are paying OpenAI fees in order to use its large language models. So when you use the scripts, those sites end up footing the bill for your queries, without you ever visiting them. If those sites are relying on ad revenue from their sites to offset these API costs, they are losing money because of these queries.

Xtekky said that he is more than happy to take down scripts that use individual sites’ APIs upon request from the owners of those sites. He said that he has already taken down scripts that use phind.com, ora.sh and writesonic.com.

Perhaps more importantly, Xtekky noted, any of these sites could block external uses of their internal APIs with common security measures. One of many methods that sites like You.com could use is to block API traffic from any IPs that are not their own.

Xtekky said that he has advised all the sites that wrote to him that they should secure their APIs, but none of them has done so. So, even if he takes the scripts down from his repo, any other developer could do the same thing.
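
For illustration, one such measure might look like the minimal Flask sketch below, which only accepts internal-API calls from the site’s own infrastructure. The framework, route, and address range are our assumptions, not You.com’s actual stack:

```python
# Sketch of the mitigation Xtekky describes: restrict an internal API
# endpoint to requests coming from the site's own servers.
from ipaddress import ip_address, ip_network

from flask import Flask, abort, request

app = Flask(__name__)

# The site's own front-end servers (documentation address range, not real).
ALLOWED_NETS = [ip_network("203.0.113.0/24")]

@app.before_request
def reject_external_api_traffic():
    if request.path.startswith("/api/"):
        caller = ip_address(request.remote_addr)
        if not any(caller in net for net in ALLOWED_NETS):
            abort(403)  # third-party scripts like GPT4Free land here

@app.route("/api/streamingSearch")
def streaming_search():
    return {"answer": "..."}  # placeholder for the real handler
```

Per-IP rate limits or session tokens issued by the site’s own front end would serve the same purpose; the point is that, per Xtekky, none of the affected sites has deployed any of these.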

[…]

Xtekky initially told me that he hadn’t decided whether to take the repo down or not. However, several hours after this story first published, we chatted again and he told me that he plans to keep the repo up and to tell OpenAI that, if they want it taken down, they should file a formal request with GitHub instead of with him.

“I believe they contacted me before to pressurize me into deleting the repo myself,” he said. “But the right way should be an actual official DMCA, through GitHub.”

Even if the original repo is taken down, there’s a great chance that the code — and this method of accessing GPT4 and GPT3.5 — will be published elsewhere by members of the community. Even if GPT4Free had never existed, anyone could find ways to use these sites’ APIs as long as they remain unsecured.

“Users are sharing and hosting this project everywhere,” he said. “Deletion of my repo will be insignificant.”

[…]

Source: OpenAI Threatens Popular GitHub Project With Lawsuit Over API Use | Tom’s Hardware

Bungie Somehow Wins $12 Million In Destiny 2 Anti-Cheat Lawsuit

As Bungie continues on its warpath against Destiny 2 cheaters, the studio has won $12 million in the lawsuit against Romanian cheat seller Mihai Claudiu-Florentin that began back in 2021.

Claudiu-Florentin sold cheat software at VeteranCheats, which allowed users to get an edge over other players with software that could do things like tweak their aim and let them see through walls. Naturally, Bungie argued that the software was damaging to Destiny 2‘s competitive and cooperative modes, and has won the case against the seller. The lawsuit alleges “copyright infringement, violations of the Digital Millennium Copyright Act (DMCA), breach of contract, intentional interference with contractual relations, and violations of the Washington Consumer Protection Act.” (Thanks, TheGamePost).

You can read a full PDF of the suit, courtesy of TheGamePost, here, but the gist of it is that Bungie is asking for $12,059,912.98 in total damages, with $11,696,000 going toward violations of the DMCA, $146,662.28 for violations of the Copyright Act, and $217,250.70 accounting for the studio’s attorney expense. After subpoenaing Stripe, a payment processing service, Bungie learned that at least 5848 separate transactions took place through the service that included Destiny 2 cheating software from November 2020 to July 2022.
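
Those figures reconcile exactly (our arithmetic, not the filing’s): $11,696,000 works out to $2,000 for each of the 5,848 Stripe transactions, which sits within the DMCA’s statutory range of $200 to $2,500 per violation, and $11,696,000 + $146,662.28 + $217,250.70 = $12,059,912.98.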

While Bungie may be $12 million richer out of this, VeteranCheats’ website is still up and offering cheating software for games like Overwatch and Call of Duty, though Destiny no longer appears on the site’s home page or in searches of its community.

According to the lawsuit, Bungie has paid around $2 million in its anti-cheating efforts between staffing and software. This also extended to a blanket ban on cheating devices in both competitive and PvE modes earlier this month.

While Destiny 2 has been wrapped up in legal issues, the shooter has also been caught up in some other controversy recently thanks to a major leak that led to the ban of a major content creator in the game’s community.

Source: Bungie Wins $12 Million In Destiny 2 Anti-Cheat Lawsuit

Despite personally not liking online players cheating, it beggars belief that someone is not allowed to sell software which edits memory registers. You are the owner of what is on your computer, despite anything that software publishers put in their unreadable terms. You can modify anything on there however you like.

Apple App Store Policies Upheld by Court in Epic Games Antitrust Challenge – Apple Can Continue Its Monopoly and Massive 30% Charges in the US App Store (But Not in the EU)

Apple Inc. won an appeals court ruling upholding its App Store’s policies in an antitrust challenge brought by Epic Games Inc.

Monday’s ruling by the US Ninth Circuit Court of Appeals affirmed a lower-court judge’s 2021 decision largely rejecting claims by Epic, the maker of Fortnite, that Apple’s online marketplace policies violated federal law because they ban third-party app marketplaces on its operating system. The appeals panel upheld the judge’s ruling in Epic’s favor on California state law claims.

The ruling comes as Apple has been making changes to the way the App Store operates to address developer concerns since Epic sued the company in 2020. The dispute began after Apple expelled the Fortnite game from the App Store because Epic created a workaround to paying a 30% fee on customers’ in-app purchases.

“There is a lively and important debate about the role played in our economy and democracy by online transaction platforms with market power,” the three-judge panel said. “Our job as a federal court of appeals, however, is not to resolve that debate — nor could we even attempt to do so. Instead, in this decision, we faithfully applied existing precedent to the facts.”

Apple hailed the outcome as a “resounding victory,” saying nine out of 10 claims were decided in its favor.

[…]

Epic Chief Executive Officer Tim Sweeney tweeted that although Apple prevailed, at least the appeals court kept intact the portion of the 2021 ruling that sided with Epic.

“Fortunately, the court’s positive decision rejecting Apple’s anti-steering provisions frees iOS developers to send consumers to the web to do business with them directly there. We’re working on next steps,” he wrote.

[…]

Following a three-week trial in Oakland, California, US District Judge Yvonne Gonzalez Rogers ordered the technology giant to allow developers of mobile applications to steer consumers to outside payment methods, granting an injunction sought by Epic. The judge, however, didn’t see the need for third-party app stores or to push Apple to revamp its policies on app developer fees.

[…]

US and European authorities have taken steps to rein in Apple’s stronghold over the mobile market. In response to the Digital Markets Act — a new series of laws in the European Union — Apple is planning to allow outside apps as early as next year as part of an update to the upcoming iOS 17 software update, Bloomberg News has reported.

[…]

Source: Apple App Store Policies Upheld by Court in Epic Games Antitrust Challenge – Bloomberg

It’s a pretty sad day when an antitrust court runs away from calling a monopoly a monopoly

ICANN and Verisign Proposal Would Allow Any Government In The World To Seize Domain Names with no redress

ICANN, the organization that regulates global domain name policy, and Verisign, the abusive monopolist that operates the .COM and .NET top-level domains, have quietly proposed enormous changes to global domain name policy in their recently published “Proposed Renewal of the Registry Agreement for .NET”, which is now open for public comment.

Either by design, or unintentionally, they’ve proposed allowing any government in the world to cancel, redirect, or transfer to their control applicable domain names! This is an outrageous and dangerous proposal that must be stopped. […]

The offending text can be found buried in an appendix of the proposed new registry agreement. […] the critical changes can be found in Section 2.7 of Appendix 8, on pages 147-148 (the blue text represents new language). Below is a screenshot of that section:

[Screenshot: Proposed Changes in Appendix 8 of the .NET agreement]

Section 2.7(b)(i) is new and problematic on its own [editor bold!] (and I’ll analyze that in more detail in a future blog post – there are other things wrong with this proposed agreement, but I’m starting off with the worst aspect). However, carefully examine the new text in Section 2.7(b)(ii) on page 148 of the redline document.

It would allow Verisign, via the new text in 2.7(b)(ii)(5), to:

“deny, cancel, redirect or transfer any registration or transaction, or place any domain name(s) on registry lock, hold or similar status, as it deems necessary, in its unlimited and sole discretion” [the language at the beginning of 2.7(b)(ii), emphasis added]

Then it lists when it can take the above measures. The first 3 are non-controversial (and already exist, as they’re not in blue text). The 4th is new, relating to security, and might be abused by Verisign. But, look at the 5th item! I was shocked to see this new language:

“(5) to ensure compliance with applicable law, government rules or regulations, or pursuant to any legal order or subpoena of any government, administrative or governmental authority, or court of competent jurisdiction,” [emphasis added]

This text has a plain and simple meaning — they propose to allow “any government”, “any administrative authority”, “any government authority” and “court[s] of competent jurisdiction” to deny, cancel, redirect, or transfer any domain name registration […].

You don’t have to be ICANN’s fiercest critic to see that this is arguably the most dangerous language ever inserted into an ICANN agreement.

“Any government” means what it says, so that means China, Russia, Iran, Turkey, the Pitcairn Islands, Tuvalu, the State of Texas, the State of California, the City of Detroit, a village of 100 people with a local council in Botswana, or literally “any government”, whether state, local, or national. We’re talking about countless numbers of “governments” in the world (you’d have to add up all the cities, towns, states, provinces and nations, for starters). If that wasn’t bad enough, their proposal adds “any administrative authority” and “any government authority” (i.e. government bureaucrats in any jurisdiction in the world) that would be empowered to “deny, cancel, redirect or transfer” domain names. [The new text about “court of competent jurisdiction” is also problematic, as it would override determinations that would be made by registrars via the agreements that domain name registrants have with their registrars.]

This proposal represents a complete government takeover of domain names, with no due process protections for registrants. It would usurp the role of registrars, making governments go directly to Verisign (or any other registry that adopts similar language) to achieve anything they desired. It literally overturns more than two decades of global domain name policy.

[…]

they bury major policy changes in an appendix near the end of a document that is over 100 pages long (133 pages long for the “clean” version of the document; 181 pages for the “redline” version)

[…]

ICANN and Verisign appear to have deliberately timed the comment period to avoid public scrutiny.  The public comment period opened on April 13, 2023, and is scheduled to end (currently) on May 25, 2023. However, the ICANN76 public meeting was held between March 11 and March 16, 2023, and the ICANN77 public meeting will be held between June 12 and June 15, 2023. Thus, they published the proposal only after the ICANN76 public meeting had ended (where we could have asked ICANN staff and the board questions about the proposal), and seek to end the public comment period before ICANN77 begins. This is likely not by chance, but by design.

[…]

What can you do? You can submit a public comment, showing your opposition to the changes, and/or asking for more time to analyze the proposal. [There are other things wrong with the proposed agreement, e.g. all of Appendix 11 (which takes language from new gTLD agreements, which are entirely different from legacy gTLDs like .com/net/org); section 2.14 of Appendix 8 further protects Verisign via new language (page 151 of the redline document); and section 6.3 of Appendix 8, on page 158 of the redline, seeks to protect Verisign from losing the contract in the event of a cyberattack that disrupts operations. However, we are already paying above-market rates for .net (and .com) domain names, arguably because Verisign tells others that it has high expenses in order to keep 100% uptime even in the face of attacks; this new language would allow it to degrade service with no reduction in fees.]

[…]

Update #1: I’ve submitted a “placeholder” comment to ICANN, to get the ball rolling.  There’s also a thread on NamePros.com about this topic, if you had questions, etc.

Update #2: DomainIncite points out correctly that the offending language is already in the .com agreement, and that people weren’t paying attention to this issue three years ago, as there were bigger fish to fry. I went back and reviewed my own comment submission from then, and see that I did raise the issue back then too:

[…]

Source: Red Alert: ICANN and Verisign Proposal Would Allow Any Government In The World To Seize Domain Names – FreeSpeech.com

AI-generated Drake and The Weeknd song pulled from streaming platforms

If you spent almost any time on the internet this week, you probably saw a lot of chatter about “Heart on My Sleeve.” The song went viral for featuring AI-generated voices that do a pretty good job of mimicking Drake and The Weeknd singing about a recent breakup.

On Monday, Apple Music and Spotify pulled the track following a complaint from Universal Music Group, the label that represents the real-life versions of the two Toronto-born artists. A day later, YouTube, Amazon, SoundCloud, Tidal, Deezer and TikTok did the same.

At least, they tried to comply with the complaint, but as is always the case with the internet, you can still find the song on websites like YouTube. Before it was removed from Spotify, “Heart on My Sleeve” was a bona fide hit: people streamed the track more than 600,000 times. On TikTok, where the creator of the song, the aptly named Ghostwriter977, first uploaded it, users listened to “Heart on My Sleeve” more than 15 million times.

In a statement Universal Music Group shared with publications like Music Business Worldwide, the label argued the training of a generative AI using the voices of Drake and The Weeknd was “a breach of our agreements and a violation of copyright law.” The company added that streaming platforms had a “legal and ethical responsibility to prevent the use of their services in ways that harm artists.”

It’s fair to say the music industry, much like the rest of society, now finds itself at an inflection point over the use of AI. While there are obvious ethical issues related to the creation of “Heart on My Sleeve,” it’s unclear if it’s a violation of traditional copyright law. In March, the US Copyright Office said art, including music, cannot be copyrighted if it was produced by providing a text prompt to a generative AI model. However, the office left the door open to granting copyright protections to works with AI-generated elements.

“The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work,” it said. “This is necessarily a case-by-case inquiry. If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it.” In the case of “Heart on My Sleeve,” complicating matters is that the song was written by a human being. It’s impossible to say how a court challenge would play out. What is clear is that we’re only the start of a very long discussion about the role of AI in music.

Source: AI-generated Drake and The Weeknd song pulled from streaming platforms | Engadget

Streaming Services Urged To Clamp Down on AI-Generated Music by Record Labels

Universal Music Group has told streaming platforms, including Spotify and Apple, to block artificial intelligence services from scraping melodies and lyrics from their copyrighted songs, according to emails viewed by the Financial Times. From the report: UMG, which controls about a third of the global music market, has become increasingly concerned about AI bots using their songs to train themselves to churn out music that sounds like popular artists. AI-generated songs have been popping up on streaming services and UMG has been sending takedown requests “left and right,” said a person familiar with the matter. The company is asking streaming companies to cut off access to their music catalogue for developers using it to train AI technology. “We will not hesitate to take steps to protect our rights and those of our artists,” UMG wrote to online platforms in March, in emails viewed by the FT. “This next generation of technology poses significant issues,” said a person close to the situation. “Much of [generative AI] is trained on popular music. You could say: compose a song that has the lyrics to be like Taylor Swift, but the vocals to be in the style of Bruno Mars, but I want the theme to be more Harry Styles. The output you get is due to the fact the AI has been trained on those artists’ intellectual property.”

Source: Streaming Services Urged To Clamp Down on AI-Generated Music – Slashdot

Basically, they don’t want AIs listening to their music as inspiration for making music of their own, which is exactly what humans do. So I’m very curious what legal basis would support these takedowns.