Google Decides To Pull Up The Ladder On The Open Internet, Pushes For Unconstitutional Regulatory Proposals

It’s pretty much the way of the world: beyond the basic enshittification story that has been so well told over the past year or so about how companies get worse and worse as they get more and more powerful, there’s also the well known concept of successful innovative companies “pulling up the ladder” behind them, using the regulatory process to make it impossible for other companies to follow their own path to success. We’ve talked about this in the sense of political entrepreneurship, which is when the main entrepreneurial effort is not to innovate in newer and better products for customers, but rather to use the political system for personal gain and to prevent competitors from having the same opportunities.

It happens all too frequently. And it’s been happening lately with the big internet companies, which relied on the open internet to become successful but, under massive pressure from regulators (and the media), keep shooting the open internet in the back whenever they can present themselves as “supportive” of some dumb regulatory regime. Facebook did it six years ago by wholeheartedly supporting FOSTA, the key tide shift that made the law viable in Congress.

And, now, it appears that Google is going down that same path. There have been hints here and there, such as when it mostly gave up the fight on net neutrality six years ago. However, Google had still appeared to be active in various fights to protect an open internet.

But, last week, Google took a big step towards pulling up the open internet ladder behind it, which got almost no coverage (and what coverage it got was misleading). And, for the life of me, I don’t understand why it chose to do this now. It’s one of the dumbest policy moves I’ve seen Google make in ages, and seems like a complete unforced error.

Last Monday, Google announced “a policy framework to protect children and teens online,” which was echoed by subsidiary YouTube, which posted basically the same thing about its “principled approach for children and teenagers.” Both pushed not just a “principled approach” for companies to take, but a legislative model (and I hear that they’re out pushing “model bills” in legislatures as well).

The “legislative” model is, effectively, California’s Age Appropriate Design Code. Yes, the very law that was declared unconstitutional just a few weeks before Google basically threw its weight behind the approach. What’s funny is that many, many people have (incorrectly) believed that Google was some sort of legal mastermind behind the NetChoice lawsuits challenging California’s law and other similar laws, when the reality appears to be that Google knows full well that it can handle the requirements of the law, but smaller competitors cannot. Google likes the law. It wants more of them, apparently.

The model includes “age assurance” (which is effectively age verification, though everyone pretends it’s not), greater parental surveillance, and the compliance nightmare of “impact assessments” (we talked about this nonsense in relation to the California law). Again, for many companies this is a good idea. But just because something is a good idea for companies to do does not mean that it should be mandated by law.

But that’s exactly what Google is pushing for here, even as a law that more or less mimics its framework was just found to be unconstitutional. While cynical people will say that maybe Google is supporting these policies hoping that they will continue to be found unconstitutional, I see little evidence to support that. Instead, it really sounds like Google is fully onboard with these kinds of duty of care regulations that will harm smaller competitors, but which Google can handle just fine.

It’s pulling up the ladder behind it.

And yet, the press coverage of this focused on the fact that it was presented as an “alternative” to a full-on ban on kids under 18 using social media. The Verge framed this as “Google asks Congress not to ban teens from social media,” leaving out that Google was asking Congress to basically make it impossible for any site other than the largest, richest companies to allow teens on social media. Same thing with TechCrunch, which framed it as Google lobbying against age verification.

But… it’s not? It’s basically lobbying for age verification, just in the guise of “age assurance,” which is effectively “age verification, but if you’re a smaller company you can get it wrong some undefined amount of the time, until someone sues you.” I mean, what’s here is not “lobbying against age verification,” it’s basically saying “here’s how to require age verification.”

A good understanding of user age can help online services offer age-appropriate experiences. That said, any method to determine the age of users across services comes with tradeoffs, such as intruding on privacy interests, requiring more data collection and use, or restricting adult users’ access to important information and services. Where required, age assurance – which can range from declaration to inference and verification – should be risk-based, preserving users’ access to information and services, and respecting their privacy. Where legislation mandates age assurance, it should do so through a workable, interoperable standard that preserves the potential for anonymous or pseudonymous experiences. It should avoid requiring collection or processing of additional personal information, treating all users like children, or impinging on the ability of adults to access information. More data-intrusive methods (such as verification with “hard identifiers” like government IDs) should be limited to high-risk services (e.g., alcohol, gambling, or pornography) or age correction. Moreover, age assurance requirements should permit online services to explore and adapt to improved technological approaches. In particular, requirements should enable new, privacy-protective ways to ensure users are at least the required age before engaging in certain activities. Finally, because age assurance technologies are novel, imperfect, and evolving, requirements should provide reasonable protection from liability for good-faith efforts to develop and implement improved solutions in this space.
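The framework’s tiered, risk-based model (declaration, then inference, then verification with “hard identifiers,” with the last reserved for high-risk services) can be sketched in a few lines. To be clear, this is purely illustrative: the risk categories and method names below are hypothetical, not anything from Google’s actual framework.

```python
from enum import Enum

class Risk(Enum):
    """Hypothetical service risk tiers for age assurance."""
    LOW = 1       # general-interest content
    MEDIUM = 2    # social features, messaging
    HIGH = 3      # alcohol, gambling, pornography

def assurance_method(risk: Risk) -> str:
    """Pick the least data-intrusive age-assurance method that matches
    a service's risk tier: self-declaration for low risk, behavioural
    or ML-based inference for medium, and government-ID verification
    only for high-risk services."""
    if risk is Risk.LOW:
        return "self-declaration"        # no extra data collected
    if risk is Risk.MEDIUM:
        return "age-inference"           # estimated, no hard identifiers
    return "document-verification"       # government ID, high-risk only
```

The catch, as the article argues, is that even the “light” tiers impose compliance machinery (and liability exposure) that only large companies can comfortably absorb.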

Much like Facebook caving on FOSTA, this is Google caving on age verification and other “duty of care” approaches to regulating the way kids have access to the internet. It’s pulling up the ladder behind itself, knowing that it was able to grow without having to take these steps, and making sure that none of the up-and-coming challengers to Google’s position will have the same freedom to do so.

And, for what? So that Google can go to regulators and say “look, we’re not against regulations, here’s our framework”? But Google has smart policy people. They have to know how this plays out in reality. FOSTA completely backfired on Facebook (and the open internet). This approach will do the same.

Not only will these laws inevitably be used against the companies themselves, they’ll also be weaponized and modified by policymakers who will make them even worse and even more dangerous, all while pointing to Google’s “blessing” of this approach as an endorsement.

For years, Google had been somewhat unique in continuing to fight for the open internet long after many other companies were switching over to ladder pulling. There were hints that Google was going down this path in the past, but with this policy framework, the company has now made it clear that it has no intention of being a friend to the open internet any more.

Source: Google Decides To Pull Up The Ladder On The Open Internet, Pushes For Unconstitutional Regulatory Proposals | Techdirt

Well, with Chrome-only support, DNS over HTTPS, and browser privacy sandboxing, Google has been off the “do no evil” path for some time, and has been closing off the openness of the web by rebuilding or crushing competition for quite a while.

Universal Music sues AI start-up Anthropic for scraping song lyrics – will they come after you for having read the lyrics or memorised the song next?

Universal Music has filed a copyright infringement lawsuit against artificial intelligence start-up Anthropic, as the world’s largest music group battles against chatbots that churn out its artists’ lyrics.

Universal and two other music companies allege that Anthropic scrapes their songs without permission and uses them to generate “identical or nearly identical copies of those lyrics” via Claude, its rival to ChatGPT.

When Claude is asked for lyrics to the song “I Will Survive” by Gloria Gaynor, for example, it responds with “a nearly word-for-word copy of those lyrics,” Universal, Concord, and ABKCO said in a filing with a US court in Nashville, Tennessee.

“This copyrighted material is not free for the taking simply because it can be found on the Internet,” the music companies said, while claiming that Anthropic had “never even attempted” to license their copyrighted work.

[…]

Universal earlier this year asked Spotify and other streaming services to cut off access to its music catalogue for developers using it to train AI technology.

Source: Universal Music sues AI start-up Anthropic for scraping song lyrics | Ars Technica

So don’t think about memorising or even listening to copyrighted material from them because apparently they will come after you with the mighty and crazy arm of the law!

Equifax poked with paltry $13.4 million following 147m customer data breach in 2017

Credit bureau Equifax has been fined US$13.4 million by the Financial Conduct Authority (FCA), a UK financial watchdog, following its involvement in “one of the largest” data breaches ever.

The incident took place in 2017, when Equifax’s US-based parent company, Equifax Inc., suffered a breach in which malicious actors accessed the personal data of up to 147.9 million customers. Because UK customer data was stored on company servers in the US, the hack also exposed the personal data of 13.8 million UK customers, the FCA revealed.

The data accessed during the hack included Equifax membership login details, customer names, dates of birth, partial credit card details and addresses.

According to the FCA, the cyber attack and subsequent data breach was “entirely preventable” and exposed UK customers to financial crime.
“There were known weaknesses in Equifax Inc’s data security systems and Equifax failed to take appropriate action in response to protect UK customer data,” the FCA explained.

The authority also noted that the UK arm of Equifax was not made aware that malicious actors had accessed its customers’ data until six weeks after Equifax Inc. discovered the cyber security incident.

The company was fined $60,727 by the British Information Commissioner’s Office (ICO) relating to the data breach in 2018.

On October 13th, Equifax stated that it had fully cooperated with the FCA throughout the extensive investigation. The FCA also said that the fine levelled at Equifax had been reduced following the company’s agreement to cooperate with the watchdog and resolve the matter.

Patricio Remon, president for Europe at Equifax, said that since the cyber attack against Equifax in 2017, the company has “invested over $1.5 billion in a security and technology transformation”. Remon also said that “few companies have invested more time and resources than Equifax to ensure that consumers’ information is protected”.

Source: Equifax fined $13.4 million following data breach

Museum Collection Of Historical TV Culture At Risk Due To Copyright Takedowns

[…]

the informal nature of their collections means that they are exposed to serious threats from copyright, as the recent experience of The Museum of Classic Chicago Television makes clear. The Museum explains why it exists:

The Museum of Classic Chicago Television (FuzzyMemoriesTV) is constantly searching out vintage material on old videotapes saved in basements or attics, or sold at flea markets, garage sales, estate sales and everywhere in between. Some of it would be completely lost to history if it were not for our efforts. The local TV stations have, for the most part, regrettably done a poor job at preserving their history. Tapes were very expensive 25-30 years ago and there also was a lack of vision on the importance of preserving this material back then. If the material does not exist on a studio master tape, what is to be done? Do we simply disregard the thousands of off-air recordings that still exist holding precious “lost” material? We believe this would be a tragic mistake.

Dozens of TV professionals and private individuals have donated to the museum their personal copies of old TV programmes made in the 1970s and 1980s, many of which include rare and otherwise unavailable TV advertisements that were shown as part of the broadcasts. In addition to the main Museum of Classic Chicago Television site, there is also a YouTube channel with videos. However, as TorrentFreak recounts, the entire channel was under threat because of copyright takedown requests:

In a series of emails starting Friday and continuing over the weekend, [the museum’s president and lead curator] Klein began by explaining his team’s predicament, one that TorrentFreak has heard time and again over the past few years. Acting on behalf of a copyright owner, in this case Sony, India-based anti-piracy company Markscan hit the MCCTv channel with a flurry of copyright claims. If these cannot be resolved, the entire project may disappear.

One issue is that Klein was unable to contact Markscan to resolve the problem directly. He is quoted by TorrentFreak as saying: “I just need to reach a live human being to try to resolve this without copyright strikes. I am willing to remove the material manually to get the strikes reversed.”

Once the copyright enforcement machine is engaged, it can be hard to stop. As Walled Culture the book (free digital versions available) recounts, there are effectively no penalties for unreasonable or even outright false claims. The playing field is tipped entirely in the favour of the copyright world, and anyone that is targeted using one of the takedown mechanisms is unlikely to be able to do much to contest them, unless they have good lawyers and deep pockets. Fortunately, in this case, an Ars Technica article on the issue reported that:

Sony’s copyright office emailed Klein after this article was published, saying it would “inform MarkScan to request retractions for the notices issued in response to the 27 full-length episode postings of Bewitched” in exchange for “assurances from you that you or the Fuzzy Memories TV Channel will not post or re-post any infringing versions from Bewitched or other content owned or distributed by SPE [Sony Pictures Entertainment] companies.”

That “concession” by Sony highlights the main problem here: the fact that a group of public-spirited individuals trying to preserve unique digital artefacts must live with the constant threat of copyright companies taking action against them. Moreover, there is also the likelihood that some of their holdings will have to be deleted as a result of those legal threats, despite the material’s possible cultural value or the fact that it is the only surviving copy. No one wins in this situation, but the purity of copyright must be preserved at all costs, it seems.

[…]

Source: Museum Collection Of Historical TV Culture At Risk Due To Copyright Takedowns | Techdirt

ICE, CBP, Secret Service All Illegally Used Smartphone Location Data

In a bombshell report, an oversight body for the Department of Homeland Security (DHS) found that Immigration and Customs Enforcement (ICE), Customs and Border Protection (CBP), and the Secret Service all broke the law while using location data harvested from ordinary apps installed on smartphones. In one instance, a CBP official also inappropriately used the technology to track the location of coworkers with no investigative purpose. For years U.S. government agencies have been buying access to location data through commercial vendors, a practice which critics say skirts the Fourth Amendment requirement of a warrant. During that time, the agencies have typically refused to publicly explain the legal basis on which they based their purchase and use of the data. Now, the report shows that three of the main customers of commercial location data broke the law while doing so, and didn’t have any supervisory review to ensure proper use of the technology. The report also recommends that ICE stop all use of such data until it obtains the necessary approvals, a request that ICE has refused.

The report, titled “CBP, ICE, and Secret Service Did Not Adhere to Privacy Policies or Develop Sufficient Policies Before Procuring and Using Commercial Telemetry Data,” is dated September 28, 2023, and comes from Joseph V. Cuffari, the Inspector General for DHS. The report was originally marked as “law enforcement sensitive,” but the Inspector General has now released it publicly.

Source: ICE, CBP, Secret Service All Illegally Used Smartphone Location Data – Slashdot

EPIC urges FTC to investigate Grindr’s data practices

On Wednesday, EPIC filed a complaint with the US government watchdog over Grindr’s “apparent failure to safeguard users’ sensitive personal data.” This includes both present and past users who have since deleted their accounts, according to the complaint. Despite promising in its privacy policy to delete personal info if customers remove their account, Grindr allegedly retained and disclosed some of this data to third parties.

Considering that people trust the dating app with a ton of very sensitive information — this includes their sexual preferences, self-reported HIV status, chat history, photos including nudes, and location information — “learning that Grindr breaks the promises it makes to users would likely affect a consumer’s decision regarding whether to use Grindr,” the complaint states [PDF].

Grindr, for its part, says privacy is of the utmost importance to it, and that these “unfounded” claims stem from allegations made by a disgruntled ex-worker. So that’s all right then.

“Privacy is a top priority for Grindr and the LGBTQ+ community we serve, and we have adopted industry-leading privacy practices and tools to protect and empower our users,” a spokesperson told The Register.

“We are sorry that the former employee behind the unfounded allegations in today’s request is dissatisfied with his departure from the company; we wish him the best.”

The former employee in question is Grindr’s ex-chief privacy officer Ron De Jesus. In June, De Jesus filed a wrongful termination lawsuit [PDF] against his former bosses that also accused the dating app of violating privacy laws.

According to the lawsuit, De Jesus was “leading the charge to keep Grindr compliant with state, national, and international laws” after Norway’s data protection agency fined the dating app biz about $12 million in December 2021 and a Wall Street Journal article in May 2022 accused the application developer of selling users’ location data.

But despite De Jesus’ attempts, “Grindr placed profit over privacy and got rid of Mr De Jesus for his efforts and reports,” the lawsuit alleges.

EPIC’s complaint, which highlights De Jesus’ allegations, asks the FTC to look into potential violations of privacy law, including the app’s data retention and disclosure practices.

It also accuses Grindr of violating the Health Breach Notification Rule (HBNR). The dating app is subject to the HBNR because it asks users to self-report health data including HIV status, last-tested date, and vaccination status. By sharing these records with third parties and retaining health data after users deleted their accounts, Grindr allegedly breached the HBNR, EPIC says.

The privacy advocates at EPIC want the FTC to make Grindr comply with the laws and stop any “unlawful or impermissible” data retention practices. Additionally, the complaint calls on the federal agency to force Grindr to notify any users whose data was misused, and impose fines against the dating app for any violations of the HBNR.

Source: EPIC urges FTC to investigate Grindr’s data practices • The Register

Singapore plans to scan your face instead of your passport

[…] “Singapore will be one of the first few countries in the world to introduce automated, passport-free immigration clearance,” said minister for communications and information Josephine Teo in a wrap-up speech for the bill. Teo did concede that Dubai had such clearance for select enrolled travelers, but there was no assurance of other countries planning similar actions.

[…]

Another consideration for why passports will likely remain relevant in Singapore airports is for checking in with airlines. Airlines check passports not just to confirm identity, but also visas and more. Airlines are often held responsible for stranded passengers so will likely be required to confirm travelers have the documentation required to enter their destination.

The Register asked Singapore Airlines to confirm if passports will still be required on the airline after the implementation of biometric clearance. They deferred to Changi’s operator, Changi Airport Group (CAG), which The Reg also contacted – and we will update if a relevant reply arises.

What travelers will see is an expansion of a program already taking form. Changi airport currently uses facial recognition software and automated clearance for some parts of immigration.

[…]

Passengers who pre-submit required declarations online can already get through Singapore’s current automated immigration lanes in 20 to 30 seconds once they arrive at the front of the queue. It’s one reason Changi has a reputation for being quick to navigate.

[…]

According to CAG, the airport handled 5.12 million passenger movements in June 2023 alone. That figure, currently at 88 percent of pre-COVID levels, is expected only to increase, and the government sees such efficiency as critical to managing the impending growth.

But the reasoning for biometric clearance goes beyond a boom in travelers. With an aging population and shrinking workforce, Singapore’s Immigration & Checkpoints Authority (ICA) will have “to cope without a significant increase in manpower,” said Teo.

Additionally, security threats including pandemics and terrorism call for Singapore to “go upstream” on immigration measures, “such as the collection of advance passenger and crew information, and entry restrictions to be imposed on undesirable foreigners, even before they arrive at our shores,” added the minister.

This collection and sharing of biometric information is what enables the passport-free immigration process – passenger and crew information will need to be disclosed to the airport operator to use for bag management, access control, gate boarding, duty-free purchases, as well as tracing individuals within the airport for security purposes.

The shared biometrics will serve as a “single token of authentication” across all touch points.

Members of Singapore’s parliament have raised concerns about shifting to universal automated clearance, including data privacy and managing technical glitches.

According to Teo, only Singaporean companies will be allowed ICA-related IT contracts, vendors will be given non-disclosure agreements, and employees of such firms must undergo security screening. Traveler data will be encrypted and transported through data exchange gateways.

As for who will protect the data, that role goes to CAG, with ICA auditing its compliance.

In case of disruptions that can’t be handled by an uninterruptible power supply, off-duty officers will be called in to go back to analog.

And even though the ministry is pushing universal coverage, there will be some exceptions, such as those who are unable to provide certain biometrics or are less digitally literate. Teo promised their clearance can be done manually by immigration officers.

Source: Singapore plans to scan your face instead of your passport • The Register

Data safety is a real issue here – how long will the data be collected and for what other purposes will it be used?

UK passport and immigration images database could be repurposed to catch shoplifters

Britain’s passport database could be used to catch shoplifters, burglars and other criminals under urgent plans to curb crime, the policing minister has said.

Chris Philp said he planned to integrate data from the police national database (PND), the Passport Office and other national databases to help police find a match with the “click of one button”.

But civil liberty campaigners have warned the plans would be an “Orwellian nightmare” that amount to a “gross violation of British privacy principles”.

Foreign nationals who are not on the passport database could also be found via the immigration and asylum biometrics system, which will be part of an amalgamated system to help catch thieves.

[…]

Until the new platform is created, he said police forces should search each database separately.

[…]

Emmanuelle Andrews, policy and campaigns manager at the campaign group, said: “Time and time again the government has relied on the social issue of the day to push through increasingly authoritarian measures. And that’s just what we’re seeing here with these extremely worrying proposals to encourage the police to scan our faces as we go to buy a pint of milk and trawl through our personal information.

“By enabling the police to use private dashcam footage, as well as the immigration and asylum system, and passport database, the government are turning our neighbours, loved ones, and public service officials into border guards and watchmen.

[…]

Silkie Carlo, director of Big Brother Watch, said: “Philp’s plan to subvert Brits’ passport photos into a giant police database is Orwellian and a gross violation of British privacy principles. It means that over 45 million of us with passports who gave our images for travel purposes will, without any kind of consent or the ability to object, be part of secret police lineups.

“To scan the population’s photos with highly inaccurate facial recognition technology and treat us like suspects is an outrageous assault on our privacy that totally overlooks the real reasons for shoplifting. Philp should concentrate on fixing broken policing rather than building an automated surveillance state.

“We will look at every possible avenue to challenge this Orwellian nightmare.”

Source: UK passport images database could be used to catch shoplifters | Police | The Guardian

Also, time and again we have seen that centralised databases are a really really bad idea – the data gets stolen and misused by the operators.

Firefox now has private browser-based website translation – no cloud servers required

Web browsers have had tools that let you translate websites for years. But they typically rely on cloud-based translation services like Google Translate or Microsoft’s Bing Translator.

The latest version of Mozilla’s Firefox web browser does things differently. Firefox 118 brings support for Fullpage Translation, which can translate websites entirely in your browser. In other words, everything happens locally on your computer without any data sent to Microsoft, Google, or other companies.

Here’s how it works. Firefox will notice when you visit a website in a supported language that’s different from your default language, and a translate icon will show up in the address bar.

Tap that icon and you’ll see a pop-up window that asks what languages you’d like to translate from and to. If the browser doesn’t automatically detect the language of the website you’re visiting, you can set these manually.

Then click the “Translate” button, and a moment later the text on the page should be visible in your target language. If you’d prefer to go back to the original language, just tap the translate icon again and choose the option that says “show original.”

You can also tap the settings icon in the translation menu and choose to “always translate” or “never translate” a specific language so that you won’t have to manually invoke the translation every time you visit sites in that language.

Now for the bad news: Firefox Fullpage Translation only supports 9 languages so far:

  • Bulgarian
  • Dutch
  • English
  • French
  • German
  • Italian
  • Polish
  • Portuguese
  • Spanish

[…]

Source: Firefox 118 brings browser-based website translation (no cloud servers required… for a handful of supported languages) – Liliputing

Feds Probing Tesla For Lying About EV Ranges, Bullshitting Customers Who Complained

Back in July, Reuters released a bombshell report documenting how Tesla not only spent a decade falsely inflating the range of their EVs, but created teams dedicated to bullshitting Tesla customers who called in to complain about it. If you recall, Reuters noted how these teams would have a little, adorable party every time they got a pissed off user to cancel a scheduled service call. Usually by lying to them:

“Inside the Nevada team’s office, some employees celebrated canceling service appointments by putting their phones on mute and striking a metal xylophone, triggering applause from coworkers who sometimes stood on desks. The team often closed hundreds of cases a week and staffers were tracked on their average number of diverted appointments per day.”

The story managed to stay in the headlines for all of a day or two, quickly supplanted by gossip surrounding a non-existent Elon Musk/Mark Zuckerberg fist fight.

But here in reality, Tesla’s routine misrepresentation of their product (and almost joyous gaslighting of their paying customers) has caught the eye of federal regulators, who are now investigating the company for fraudulent behavior:

“federal prosecutors have opened a probe into Tesla’s alleged range-exaggerating scheme, which involved rigging its cars’ software to show an inflated range projection that would then abruptly switch to an accurate projection once the battery dipped below 50% charged. Tesla also reportedly created an entire secret “diversion team” to dissuade customers who had noticed the problem from scheduling service center appointments.”

This pretty clearly meets the threshold definition of “unfair and deceptive” under the FTC Act, so this shouldn’t be that hard of a case. Of course, whether it results in any sort of meaningful penalties or fines is another matter entirely. It’s very clear Musk historically hasn’t been very worried about what’s left of the U.S. regulatory and consumer protection apparatus holding him accountable for… anything.

Still, it’s yet another problem for a company that’s facing a flood of new competitors with an aging product line. And it’s another case thrown in Tesla’s lap on top of the glacially-moving inquiry into the growing pile of corpses caused by obvious misrepresentation of under-cooked “self driving” technology, and an investigation into Musk covertly using Tesla funds to build himself a glass mansion.

Source: Feds Probing Tesla For Lying About EV Ranges, Bullshitting Customers Who Complained | Techdirt

Philips Hue / Signify Ecosystem: ‘Collapsing Into Stupidity’

The Philips Hue ecosystem of home automation devices is “collapsing into stupidity,” writes Rachel Kroll, veteran sysadmin and former production engineer at Facebook. “Unfortunately, the idiot C-suite phenomenon has happened here too, and they have been slowly walking down the road to full-on enshittification.” From her blog post: I figured something was up a few years ago when their iOS app would block entry until you pushed an upgrade to the hub box. That kind of behavior would never fly with any product team that gives a damn about their users — want to control something, so you start up the app? Forget it, we are making you placate us first! How is that user-focused, you ask? It isn’t.

Their latest round of stupidity pops up a new EULA and forces you to take it or, again, you can’t access your stuff. But that’s just more unenforceable garbage, so who cares, right? Well, it’s getting worse.

It seems they are planning on dropping an update which will force you to log in. Yep, no longer will your stuff Just Work across the local network. Now it will have yet another garbage “cloud” “integration” involved, and they certainly will find a way to make things suck even worse for you.

If you have just the lights and smart outlets, Kroll recommends deleting the units from the Hue Hub and adding them to an IKEA Dirigera hub. “It’ll run them just fine, and will also export them to HomeKit so that much will keep working as well.” That said, it’s not a perfect solution: you will lose motion sensor data, the light level and temperature readings for that room, and the ability to set custom behaviors with those buttons.

“Also, there’s no guarantee that IKEA won’t hop on the train to sketchville and start screwing over their users as well,” adds Kroll.

Source: Is the Philips Hue Ecosystem ‘Collapsing Into Stupidity’? – Slashdot

Chip firm Rivos countersues Apple, alleges illegal contracts and unnecessary court cases

A chip startup and several of its employees, sued by Apple for theft of trade secrets and breach of contract, have filed a countersuit.

Rivos was sued [PDF] by Apple early last year over claims it lured away a gaggle of Apple employees working on system-on-chip (SoC) designs like those in Apple’s Mac and iPhone devices. Rivos and several of its employees who previously worked at Apple were named in the suit, and six of them joined Rivos in the countersuit [PDF] filed in the District Court for the Northern District of California on Friday.

In the original lawsuit, Apple accused Rivos, which was founded in 2021 to develop RISC-V SoCs for servers, of a “coordinated campaign to target Apple employees with access to Apple proprietary and trade secret information about Apple’s SoC designs.” Apple also claimed that when it reminded Rivos of its employees’ confidentiality and intellectual property agreements (IPAs), Rivos never responded.

Instead, “after accepting their offers from Rivos, some of these employees took gigabytes of sensitive SoC specifications and design files during their last days of employment with Apple,” lawyers for Cupertino alleged.

A judge in the lawsuit dismissed [PDF] claims of trade secret theft against Rivos and two of its employees in August with leave to amend, but let other Defend Trade Secrets Act claims against individual employees, as well as the breach of contract claims, stand.

Apple has tried this before and failed, reasons Rivos

In its countersuit, Rivos and six of its employees argue that, rather than competing, “Apple has resorted to trying to thwart emerging startups through anticompetitive measures, including illegally restricting employee mobility.”

Methods Apple has used to stymie employee mobility include the aforementioned IPAs, which Rivos lawyers argue violate California’s Business and Professions Code rules voiding contracts that restrict an individual’s ability to engage in a lawful business, profession or trade.

Under California law, Rivos lawyers claim, such a violation means Apple is engaging in unfair and unlawful business practices that have caused injury to Rivos through the need to fight such a lengthy and, if the contracts are unenforceable, unnecessary court battle.

“Apple’s actions not only violate the laws and public policy of the State of California, but also undermine the free and open competition that has made the state the birthplace of countless innovative businesses,” Rivos’s lawyers argue in the lawsuit.

Rivos also claims that Apple’s method of applying its IPA is piecemeal and often abused to allow Apple future legal opportunities.

“Even when Apple knows its employees are leaving to work somewhere that Apple (rightly or wrongly) perceives as a competitive threat, it does not consistently conduct exit interviews or give employees any meaningful instruction about what they should do with supposedly ‘confidential’ Apple material upon leaving,” the countersuit claims.

“Apple lets these employees walk out the door with material they may have inadvertently ‘retained’ simply by using the Apple systems (such as iCloud or iMessage) that Apple effectively mandates they use as part of their work.”

Rivos argues in its filing that Apple tried this exact same scheme before and it failed then too.

That incident involved Arm-compatible chipmaker Nuvia, which was founded by former Apple chip chief Gerard Williams in 2019. Apple sued Williams that same year over claims he violated his contract with Apple and tried to poach employees for his startup.

Williams unsurprisingly made the same claims as Rivos – that the Apple contracts were unenforceable under California law – and after a couple years of stalling, Apple finally abandoned its suit against Williams with little justification.

The iGiant didn’t respond to our questions about the countersuit.

Source: Chip firm Rivos countersues Apple, alleges illegal contracts • The Register

Philips Hue will force users to upload their data to Hue cloud – changing the TOS after you bought a product that didn’t need an account

Today’s story is about Philips Hue by Signify. They will soon start forcing accounts on all users and uploading user data to their cloud. For now, Signify says you’ll still be able to control your Hue lights locally as you’re currently used to, but we don’t know if this may change in the future. The privacy policy allows them to store the data and share it with partners.

[…]

When you open the Philips Hue app you will now be prompted with a new message: Starting soon, you’ll need to be signed in.

[…]

So today, you can choose not to share your information with Signify by not creating an account. But this choice will soon be taken away, and all users will have to share their data with Philips Hue.

Confirming the news

I didn’t want to cry wolf, so I decided to verify the above statement with Signify. They sadly confirmed:

Twitter conversation with Philips Hue (source: Twitter)

The policy they are referring to is their privacy policy (April 2023 edition, download version).

[…]

When asked what drove this change, the answer is the usual: security. Well Signify, you know what keeps user data even more secure? Not uploading it all to your cloud.

[…]

If you’re a user, we encourage you to reach out to Signify support and voice your concerns.

NOTE: Their support form doesn’t work. You can visit their Facebook page though

Dear Signify, please reconsider your decision and do not move forward with it. You’ve reversed bad decisions before. People care about privacy and forcing accounts will hurt the brand in the long term. The pain caused by this is not worth the gain.

Source: Philips Hue will force users to upload their data to Hue cloud

No, Philips / Signify – I have used these devices for years without having to have an account or be connected to the internet. It’s one of the reasons I bought into Hue. Making us give up data to keep using something we already bought is greedy and rude, and a dangerous decision considering how private and exploitable that data is.

T-Mobile US exposes some customer data, but don’t say breach

T-Mobile US has had another bad week on the infosec front – this time stemming from a system glitch that exposed customer account data, followed by allegations of another breach the carrier denied.

According to customers who complained of the issue on Reddit and X, the T-Mobile app was displaying other customers’ data instead of their own – including the strangers’ purchase history, credit card information, and address.

This being T-Mobile’s infamously leaky US operation, people immediately began leaping to the obvious conclusion: another cyber attack or breach.

“There was no cyber attack or breach at T-Mobile,” the telco assured us in an emailed statement. “This was a temporary system glitch related to a planned overnight technology update involving limited account information for fewer than 100 customers, which was quickly resolved.”

Note, as Reddit poster Jman100_JCMP did, T-Mobile means fewer than 100 customers had their data exposed – but far more appear to have been able to view those 100 customers’ data.

As for the breach, the appearance of exposed T-Mobile data was alleged by malware repository vx-underground’s X (Twitter) account. The Register understands T-Mobile examined the data and determined that independently owned T-Mobile dealer, Connectivity Source, was the source – resulting from a breach it suffered in April. We understand T-Mobile believes vx-underground misinterpreted a data dump.

Connectivity Source was indeed the subject of a breach in April, in which an unknown attacker made off with employee data including names and social security numbers – around 17,835 of them from across the US, where Connectivity appears to do business exclusively as a white-labelled T-Mobile US retailer.

Looks like the carrier really dodged the bullet on this one – there’s no way Connectivity Source employees could be mistaken for its own staff.

T-Mobile US has already experienced two prior breaches this year, but that hasn’t imperilled the biz much – its profits have soared recently and some accompanying sizable layoffs will probably keep things in the black for the foreseeable future.

Source: T-Mobile US exposes some customer data, but don’t say breach • The Register

EU reinstates $400 million fine on Intel for blocking sales of competing chips

The European Commission has imposed a €376.36 million ($400 million) fine on Intel for blocking the sales of devices powered by its competitors’ x86 CPUs. This brings one part of the company’s long-running antitrust court battle with the European authority to a close. If you’ll recall, the Commission slapped the chipmaker with a record-breaking €1.06 billion ($1.13 billion) fine in 2009 after it had determined that Intel abused its dominant position in the market.

It found back then that the company gave hidden rebates and incentives to manufacturers like HP, Dell and Lenovo for buying all or almost all their processors from Intel. The Commission also found that Intel paid manufacturers to delay or completely cancel the launch of products powered by its rivals’ CPUs. Other times, Intel apparently paid companies to limit those products’ sales channels. The Commission calls these actions “naked restrictions.”

[…]

In its announcement, the European Commission gave a few examples of how Intel hindered the sales of competing products. It apparently paid HP between November 2002 and May 2005 to sell AMD-powered business desktops only to small- and medium-sized enterprises and via direct distribution channels. It also paid Acer to delay the launch of an AMD-based notebook from September 2003 to January 2004. Intel paid Lenovo to push back the launch of AMD-based notebooks for half a year, as well.

The Commission has since appealed the General Court’s decision to dismiss the part of the case related to the rebates Intel offered its clients. Intel, however, did not lodge an appeal for the court’s ruling on naked restrictions, setting it in stone. “With today’s decision, the Commission has re-imposed a fine on Intel only for its naked restrictions practice,” the European authority wrote. “The fine does not relate to Intel’s conditional rebates practice. The fine amount, which is based on the same parameters as the 2009 Commission’s decision, reflects the narrower scope of the infringement compared to that decision.” Seeing as the rebates part of the case is under appeal, Intel could still pay the rest of the fine in the future.

Source: EU reinstates $400 million fine on Intel for blocking sales of competing chips

Dutch privacy foundation SDBN sues Twitter for collecting and selling data via MoPub (Wordfeud, Duolingo, etc.) without notifying users

The Dutch Data Protection Foundation (SDBN) wants to enforce a mass claim for 11 million people through the courts against social media company X, the former Twitter. Between 2013 and 2021, that company owned the advertising platform MoPub, which, according to the privacy foundation, illegally traded in data from users of more than 30,000 free apps such as Wordfeud, Buienradar and Duolingo.

SDBN has been trying to reach an agreement with X since November last year, but according to the foundation, without success. That is why SDBN is now starting a lawsuit at the Rotterdam court. Central to this is MoPub’s handling of personal data such as religious beliefs, sexual orientation and health. In addition to compensation, SDBN wants this data to be destroyed.

The foundation also believes that users are entitled to a share of the profits. A lot of money can be made by sharing personal data with thousands of companies, says SDBN chairman Anouk Ruhaak, though she concedes it is difficult to find out exactly which companies had access to the data. “By holding X Corp. liable, we hope not only to obtain compensation for all victims, but also to put a stop to this type of practice,” said Ruhaak. “Unfortunately, these types of companies often only listen when it hurts financially.”

Source: De Ondernemer | Privacystichting SDBN wil via rechter massaclaim bij…

Join the claim here

The maestro: The man who built the biggest match-fixing ring in tennis

On the morning of his arrest, Grigor Sargsyan was still fixing matches. Four cellphones buzzed on his nightstand with calls and messages from around the world.

Sargsyan was sprawled on a bed in his parents’ apartment, making deals between snatches of sleep. It was 3 a.m. in Brussels, which meant it was 8 a.m. in Thailand. The W25 Hua Hin tournament was about to start.

Sargsyan was negotiating with professional tennis players preparing for their matches, athletes he had assiduously recruited over years. He needed them to throw a game or a set — or even just a point — so he and a global network of associates could place bets on the outcomes.

That’s how Sargsyan had become rich. As gambling on tennis exploded into a $50 billion industry, he had infiltrated the sport, paying pros more to lose matches, or parts of matches, than they could make by winning tournaments.

Sargsyan had crisscrossed the globe building his roster, which had grown to include more than 180 professional players across five continents. It was one of the biggest match-fixing rings in modern sports, large enough to earn Sargsyan a nickname whispered throughout the tennis world: the Maestro.

This Washington Post investigation of Sargsyan’s criminal enterprise, and how the changing nature of gambling has corrupted tennis, is based on dozens of interviews with players, coaches, investigators, tennis officials and match fixers.

[…]

Source: The maestro: The man who built the biggest match-fixing ring in tennis

Google Chrome’s Privacy Sandbox: any site can now query all your habits

[…]

Specifically, the web giant’s Privacy Sandbox APIs, a set of ad delivery and analysis technologies, now function in the latest version of the Chrome browser. Website developers can thus write code that calls those APIs to deliver and measure ads to visitors with compatible browsers.

That is to say, sites can ask Chrome directly what kinds of topics you’re interested in – topics automatically selected by Chrome from your browsing history – so that ads personalized to your activities can be served. This is supposed to be better than being tracked via third-party cookies, support for which is being phased out. There are other aspects to the sandbox that we’ll get to.

While Chrome is the main vehicle for Privacy Sandbox code, Microsoft Edge, based on the open source Chromium project, has also shown signs of supporting the technology. Apple and Mozilla have rejected at least the Topics API for interest-based ads on privacy grounds.

[…]

“The Privacy Sandbox technologies will offer sites and apps alternative ways to show you personalized ads while keeping your personal information more private and minimizing how much data is collected about you.”

These APIs include:

  • Topics: Locally track browsing history to generate ads based on demonstrated user interests without third-party cookies or identifiers that can track across websites.
  • Protected Audience (FLEDGE): Serve ads for remarketing (e.g. you visited a shoe website so we’ll show you a shoe ad elsewhere) while mitigating third-party tracking across websites.
  • Attribution Reporting: Data to link ad clicks or ad views to conversion events (e.g. sales).
  • Private Aggregation: Generate aggregate data reports using data from Protected Audience and cross-site data from Shared Storage.
  • Shared Storage: Allow unlimited, cross-site storage write access with privacy-preserving read access. In other words, you graciously provide local storage via Chrome for ad-related data or anti-abuse code.
  • Fenced Frames: Securely embed content onto a page without sharing cross-site data. In other words, iframes without the security and privacy risks.
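Of the APIs listed above, Topics is the one ordinary sites will touch most directly: it is exposed to page scripts as a method on `document`. A minimal, feature-detected sketch of how a site might query it (this is a browser-only API, so non-supporting environments simply get an empty list):

```javascript
// Sketch of how a site might query Chrome's Topics API.
// `document.browsingTopics()` exists only in browsers that ship the
// Privacy Sandbox (and when the user hasn't opted out), so feature-detect.
async function getAdTopics() {
  if (typeof document !== "undefined" && "browsingTopics" in document) {
    // Resolves to up to a few coarse interest topics derived locally
    // from the user's recent browsing history.
    return document.browsingTopics();
  }
  return []; // unsupported browser (e.g. Firefox, Safari) or opted out
}
```

An ad-tech script would pass the returned topics along with its ad request; in Firefox or Safari, which have rejected the API, the fallback branch runs and no interest data is exposed.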

These technologies, Google and industry allies believe, will allow the super-corporation to drop support for third-party cookies in Chrome next year without seeing a drop in targeted advertising revenue.

[…]

“Privacy Sandbox removes the ability of website owners, agencies and marketers to target and measure their campaigns using their own combination of technologies in favor of a Google-provided solution,” James Rosewell, co-founder of MOW, told The Register at the time.

[…]

Controversially, in the US, where the lack of coherent privacy rules suits ad companies just fine, the popup merely informs the user that these APIs are now present and active in the browser; actually managing them requires visiting Chrome’s Settings page – you have to opt out, if you haven’t already. In the EU, as required by law, the notification is an invitation to opt in to interest-based ads via Topics.

Source: How Google Chrome’s Privacy Sandbox works and what it means • The Register

Google taken to court in NL for large-scale privacy breaches

The Foundation for the Protection of Privacy Interests and the Consumers’ Association are taking the next step in their fight against Google. The tech company is being taken to court today for ‘large-scale privacy violations’.

The proceedings demand, among other things, that Google stop its constant surveillance and sharing of personal data through online advertising auctions and also pay damages to consumers. Since the announcement of this action on May 23, 2023, more than 82,000 Dutch people have already joined the mass claim.

According to the organizations, Google is acting in violation of Dutch and European privacy legislation. The tech giant collects users’ online behavior and location data on an immense scale through its services and products, without providing enough information or obtaining permission. Google then shares that data, including highly sensitive personal data about, for example, health, ethnicity and political preference, with hundreds of parties via its online advertising platform.

Google is constantly monitoring everyone. Through third-party cookies – which are invisible – Google continues to collect data via other people’s websites and apps, even when someone is not using its products or services. This enables Google to monitor almost the entire internet behavior of its users.

All these matters have been discussed with Google, to no avail.

The Foundation for the Protection of Privacy Interests represents the interests of users of Google’s products and services living in the Netherlands who have been harmed by privacy violations. The foundation is working together with the Consumers’ Association in the case against Google. Consumers’ Association Claimservice, a partnership between the Consumers’ Association and ConsumersClaim, processes the registrations of affiliated victims.

More than 82,000 consumers have already registered for the Google claim. They demand compensation of 750 euros per participant.

Separately, a lawsuit by the American government against Google starts today in the US; ten weeks have been set aside for it. This mainly revolves around the power of Google’s search engine.

Essentially, Google is accused of entering into exclusive agreements to guarantee the use of its search engine – agreements that prevent alternative search engines from being pre-installed, or prevent Google’s search app from being removed.

Source: Google voor de rechter gedaagd wegens ‘grootschalige privacyschendingen’ – Emerce (NL)

Microsoft to stop forcing Windows 11 users into Edge in EU countries

Microsoft will finally stop forcing Windows 11 users in Europe into Edge if they click a link from the Windows Widgets panel or from search results. The software giant has started testing the changes to Windows 11 in recent test builds of the operating system, but the changes are restricted to countries within the European Economic Area (EEA).

“In the European Economic Area (EEA), Windows system components use the default browser to open links,” reads a change note from a Windows 11 test build released to Dev Channel testers last month. I asked Microsoft to comment on the changes and, in particular, why they’re only being applied to EU countries. Microsoft refused to comment.

Microsoft has been ignoring default browser choices in Windows 10’s search experience and taskbar widget, both of which force users into Edge when they click a link instead of opening their default browser. Windows 11 continued this trend, with search still forcing users into Edge and a new dedicated widgets area that also ignores the default browser setting.

[…]

Source: Microsoft to stop forcing Windows 11 users into Edge in EU countries – The Verge

Big Tech failed to police Russian disinformation: EU study

[…]

The independent study of the DSA’s risk management framework published by the EU’s executive arm, the European Commission, concluded that commitments by social media platforms to mitigate the reach and influence of global online disinformation campaigns have been generally unsuccessful.

The reach of Kremlin-sponsored disinformation has only increased since the major platforms all signed a voluntary Code of Practice on Disinformation in mid-2022.

“In theory, the requirements of this voluntary Code were applied during the second half of 2022 – during our period of study,” the researchers said. We’re sure you’re just as shocked as we are that social media companies failed to uphold a voluntary commitment.

Between January and May of 2023, “average engagement [of pro-Kremlin accounts rose] by 22 percent across all online platforms,” the study said. By absolute numbers, the report found, Meta led the pack on engagement with Russian misinformation. However, the increase was “largely driven by Twitter, where engagement grew by 36 percent after CEO Elon Musk decided to lift mitigation measures on Kremlin-backed accounts,” researchers concluded. Twitter, now known as X, pulled out of the disinformation Code in May.

Across the platforms studied – Facebook, Instagram, Telegram, TikTok, Twitter and YouTube – Kremlin-backed accounts have amassed some 165 million followers and have had their content viewed at least 16 billion times “in less than a year.” None of the platforms we contacted responded to questions.

[…]

The EU’s Digital Services Act and its requirements that VLOPs (defined by the Act as companies large enough to reach 10 percent of the EU, or roughly 45 million people) police illegal content and disinformation became enforceable late last month.

Under the DSA, VLOPs are also required “to tackle the spread of illegal content, online disinformation and other societal risks,” such as, say, the massive disinformation campaign being waged by the Kremlin since Putin decided to invade Ukraine last year.

[…]

Now that VLOPs are bound by the DSA, will anything change? We asked the European Commission if it can take any enforcement actions, or whether it’ll make changes to the DSA to make disinformation rules tougher, but have yet to hear back.

Two VLOPs are fighting their designation: Amazon and German fashion retailer Zalando. The two orgs claim that as retailers, they shouldn’t be considered in the same category as Facebook, Pinterest, and Wikipedia.

[…]

Source: Big Tech failed to police Russian disinformation: EU study • The Register

TV Museum Will Die in 48 Hours Unless Sony Retracts YouTube Copyright Strikes on 40 to 60 Year Old TV Shows

Rick Klein and his team have been preserving TV adverts, forgotten tapes, and decades-old TV programming for years. Now operating as a 501(c)(3) non-profit, the Museum of Classic Chicago Television has called YouTube home since 2007. However, copyright notices sent on behalf of Sony, protecting TV shows between 40 and 60 years old, could shut down the project in 48 hours.

[…]

After being reborn on YouTube as The Museum of Classic Chicago Television (MCCTv), the last sixteen years have been quite a ride. Over 80 million views later, MCCTv is a much-loved 501(c)(3) non-profit Illinois corporation but in just 48 hours, may simply cease to exist.

In a series of emails starting Friday and continuing over the weekend, Klein began by explaining his team’s predicament, one that TorrentFreak has heard time and again over the past few years. Acting on behalf of a copyright owner, in this case Sony, India-based anti-piracy company Markscan hit the MCCTv channel with a flurry of copyright claims. If these cannot be resolved, the entire project may disappear.

[…]

No matter whether takedowns are justified, unjustified (Markscan hit Sony’s own website with a DMCA takedown recently), or simply disputed, getting Markscan’s attention is a lottery at best, impossible at worst. In MCCTv’s short experience, nothing has changed.

“Our YouTube channel with 150k subscribers is in danger of being terminated by September 6th if I don’t find a way to resolve these copyright claims that Markscan made,” Klein told TorrentFreak on Friday.

“At this point, I don’t even care if they were issued under authorization by Sony or not – I just need to reach a live human being to try to resolve this without copyright strikes. I am willing to remove the material manually to get the strikes reversed.”

[…]

Complaints Targeted TV Shows 40 to 60 years old

[…]

Two episodes of the TV series Bewitched, dated 1964, aired on the ABC network; almost sixty years later, archive copies of those transmissions were removed from YouTube for violating Sony copyrights, with MCCTv receiving a strike.

[…]

Given that copyright law locks content down for decades, Klein understands that can sometimes cause issues, although 16 years on YouTube suggests the overwhelming majority of rightsholders don’t consider his channel a threat. If they did, monetizing the recordings themselves would be an option.

No Competition For Commercial Offers

Why most rightsholders have left MCCTv alone is hard to say; perhaps some see the historical value of the channel, maybe others don’t know it exists. At least in part, Klein believes the low quality of the videos could be significant.

“These were relatively low picture quality broadcast examples from various channels from various years at least 30-40 years ago, with the original commercial breaks intact. Also mixed in with these were examples of ’16mm network prints’ which are surviving original film prints that were sent out to TV stations back in the day from when the show originally aired. In many cases they include original sponsorship notices, original network commercials, ‘In Color’ notices, etc.,” he explains.

[…]

Klein says the team is happy to comply with Sony’s wishes and they hope that given a little leeway, the project won’t be consigned to history. Perhaps Sony will recall the importance of time-shifting while understanding that time itself is running out for The Museum of Classic Chicago Television.

Source: TV Museum Will Die in 48 Hours Unless Sony Retracts YouTube Copyright Strikes * TorrentFreak

Mozilla investigates 25 major car brands and finds privacy is shocking

[…]

The foundation, the Firefox browser maker’s netizen-rights org, assessed the privacy policies and practices of 25 automakers and found all failed its consumer privacy tests and thereby earned its Privacy Not Included (PNI) warning label.

If you care even a little about privacy, stay as far away from Nissan’s cars as you possibly can

In research published Tuesday, the org warned that manufacturers may collect and commercially exploit much more than location history, driving habits, in-car browser histories, and music preferences from today’s internet-connected vehicles. Instead, some makers may handle deeply personal data, such as – depending on the privacy policy – sexual activity, immigration status, race, facial expressions, weight, health, and even genetic information, the Mozilla team found.

Cars may collect at least some of that info about drivers and passengers using sensors, microphones, cameras, phones, and other devices people connect to their network-connected cars, according to Mozilla. And they collect even more info from car apps – such as Sirius XM or Google Maps – plus dealerships, and vehicle telematics.

Some car brands may then share or sell this information to third parties. Mozilla found 21 of the 25 automakers it considered say they may share customer info with service providers, data brokers, and the like, and 19 of the 25 say they can sell personal data.

More than half (56 percent) also say they share customer information with the government or law enforcement in response to a “request.” This isn’t necessarily a court-ordered warrant, and can also be a more informal request.

And some – like Nissan – may also use this private data to develop customer profiles that describe drivers’ “preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.”

Yes, you read that correctly. According to Mozilla’s privacy researchers, Nissan says it can infer how smart you are, then sell that assessment to third parties.

[…]

Nissan isn’t the only brand to collect information that seems completely irrelevant to the vehicle itself or the driver’s transportation habits.

“Kia mentions sex life,” Caltrider said. “General Motors and Ford both mentioned race and sexual orientation. Hyundai said that they could share data with government and law enforcement based on formal or informal requests. Car companies can collect even more information than reproductive health apps in a lot of ways.”

[…]

The Privacy Not Included team contacted Nissan and all of the other brands listed in the research: that’s Lincoln, Mercedes-Benz, Acura, Buick, GMC, Cadillac, Fiat, Jeep, Chrysler, BMW, Subaru, Dacia, Hyundai, Dodge, Lexus, Chevrolet, Tesla, Ford, Honda, Kia, Audi, Volkswagen, Toyota and Renault.

Only three – Mercedes-Benz, Honda, and Ford – responded, we’re told.

“Mercedes-Benz did answer a few of our questions, which we appreciate,” Caltrider said. “Honda pointed us continually to their public privacy documentation to answer your questions, but they didn’t clarify anything. And Ford said they discussed our request internally and made the decision not to participate.”

This makes Mercedes’ response to The Register a little puzzling. “We are committed to using data responsibly,” a spokesperson told us. “We have not received or reviewed the study you are referring to yet and therefore decline to comment to this specifically.”

A spokesperson for the four Fiat-Chrysler-owned brands (Fiat, Chrysler, Jeep, and Dodge) told us: “We are reviewing accordingly. Data privacy is a key consideration as we continually seek to serve our customers better.”

[…]

The Mozilla Foundation also called out consent as an issue some automakers have placed in a blind spot.

“I call this out in the Subaru review, but it’s not limited to Subaru: it’s the idea that anybody that is a user of the services of a connected car, anybody that’s in a car that uses services is considered a user, and any user is considered to have consented to the privacy policy,” Caltrider said.

Opting out of data collection is another concern.

Tesla, for example, appears to give users the choice between protecting their data or protecting their car. Its privacy policy does allow users to opt out of data collection but, as Mozilla points out, Tesla warns customers: “If you choose to opt out of vehicle data collection (with the exception of in-car Data Sharing preferences), we will not be able to know or notify you of issues applicable to your vehicle in real time. This may result in your vehicle suffering from reduced functionality, serious damage, or inoperability.”

While technically this does give users a choice, it also essentially says if you opt out, “your car might become inoperable and not work,” Caltrider said. “Well, that’s not much of a choice.”

[…]

Source: Mozilla flunks 25 major car brands for data privacy fails • The Register

Australian Government, Of All Places, Says Age Verification Is A Privacy & Security Nightmare

In the past I’ve sometimes described Australia as the land where internet policy is completely upside down. Rather than having a system that protects intermediaries from liability for third party content, Australia went the opposite direction. Rather than recognizing that a search engine merely links to content and isn’t responsible for the content at those links, Australia has said that search engines can be held liable for what they link to. Rather than protect the free expression of people on the internet who criticize the rich and powerful, Australia has extremely problematic defamation laws that result in regular SLAPP suits and suppression of speech. Rather than embrace encryption that protects everyone’s privacy and security, Australia requires companies to break encryption, insisting only criminals use it.

It’s basically been “bad internet policy central,” or the place where good internet policy goes to die.

And, yet, there are some lines that even Australia won’t cross. Specifically, the Australian eSafety commission says that it will not require adult websites to use age verification tools, because it would put the privacy and security of Australians’ data at risk. (For unclear reasons, the Guardian does not provide the underlying documents, so we’re fixing that and providing both the original roadmap and the Australian government’s response.)

[…]

Of course, in France, the Data Protection authority released a paper similarly noting that age verification was a privacy and security nightmare… and the French government just went right on mandating the use of the technology. In Australia, the eSafety Commission pointed to the French concerns as a reason not to rush into the tech, meaning that Australia took the lessons from French data protection experts more seriously than the French government did.

And, of course, here in the US, the Congressional Research Service similarly found serious problems with age verification technology, but it hasn’t stopped Congress from releasing a whole bunch of “save the children” bills that are built on a foundation of age verification.

[…]

Source: Australian Government, Of All Places, Says Age Verification Is A Privacy & Security Nightmare | Techdirt

OpenAI disputes authors’ claims that every ChatGPT response is a derivative work, says it’s transformative

This week, OpenAI finally responded to a pair of nearly identical class-action lawsuits from book authors

[…]

In OpenAI’s motion to dismiss (filed in both lawsuits), the company asked a US district court in California to toss all but one claim alleging direct copyright infringement, which OpenAI hopes to defeat at “a later stage of the case.”

The authors’ other claims—alleging vicarious copyright infringement, violation of the Digital Millennium Copyright Act (DMCA), unfair competition, negligence, and unjust enrichment—need to be “trimmed” from the lawsuits “so that these cases do not proceed to discovery and beyond with legally infirm theories of liability,” OpenAI argued.

OpenAI claimed that the authors “misconceive the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence.”

According to OpenAI, even if the authors’ books were a “tiny part” of ChatGPT’s massive data set, “the use of copyrighted materials by innovators in transformative ways does not violate copyright.”

[…]

The purpose of copyright law, OpenAI argued, is “to promote the Progress of Science and useful Arts” by protecting the way authors express ideas, but “not the underlying idea itself, facts embodied within the author’s articulated message, or other building blocks of creative expression,” which are arguably the elements of authors’ works that would be useful to ChatGPT’s training model. Citing a notable copyright case involving Google Books, OpenAI reminded the court that “while an author may register a copyright in her book, the ‘statistical information’ pertaining to ‘word frequencies, syntactic patterns, and thematic markers’ in that book are beyond the scope of copyright protection.”

[…]

Source: OpenAI disputes authors’ claims that every ChatGPT response is a derivative work | Ars Technica

So the authors are effectively saying that if you read their book and are inspired by it, you can’t use that memory – any of it – to write another book. By that logic, you presumably couldn’t use any words at all, since every word you know appears in copyrighted works that have inspired you in the past as well.