Google Leak Reveals Thousands of Privacy Incidents

Google has accidentally collected children’s voice data, leaked the trips and home addresses of carpool users, and made YouTube recommendations based on users’ deleted watch history, among thousands of other employee-reported privacy incidents, according to a copy of an internal Google database, obtained by 404 Media, that tracks six years’ worth of potential privacy and security issues. From the report: Individually, the incidents, most of which have not been previously reported publicly, may each affect only a relatively small number of people, or were fixed quickly. Taken as a whole, though, the internal database shows how one of the most powerful and important companies in the world manages, and often mismanages, a staggering amount of personal, sensitive data on people’s lives.

The data obtained by 404 Media includes privacy and security issues that Google’s own employees reported internally. These include issues with Google’s own products or data collection practices; vulnerabilities in third-party vendors that Google uses; and mistakes made by Google staff, contractors, or other people that have impacted Google systems or data. The incidents range from a single errant email containing some PII, through substantial leaks of data, up to impending raids on Google offices. When reporting an incident, employees give it a priority rating: P0 is the highest, P1 a step below. The database contains thousands of reports over the course of six years, from 2013 to 2018. In one 2016 case, a Google employee reported that Google Street View’s systems were transcribing and storing license plate numbers from photos. They explained that Google uses an algorithm to detect text in Street View imagery.

Source: https://tech.slashdot.org/story/24/06/03/1655212/google-leak-reveals-thousands-of-privacy-incidents?utm_source=rss1.0mainlinkanon&utm_medium=feed

Adobe changes TOS, says it can republish what you made for free

Adobe has decided that if you use its software, it can re-use anything you create. Considering you pay to use the software, that’s a bit grating.

4.2 Licenses to Your Content. Solely for the purposes of operating or improving the Services and Software, you grant us a non-exclusive, worldwide, royalty-free sublicensable, license, to use, reproduce, publicly display, distribute, modify, create derivative works based on, publicly perform, and translate the Content. For example, we may sublicense our right to the Content to our service providers or to other users to allow the Services and Software to operate as intended, such as enabling you to share photos with others. Separately, section 4.6 (Feedback) below covers any Feedback that you provide to us.

Source: Legal

They say it’s to detect kiddie porn, people think it’s to train their AIs. Obviously, people are upset.

Time to start learning the free and (fortunately) great Photoshop alternative: GIMP.

YouTube’s Crackdown on Adblockers Makes Videos Unwatchable – now skips to end of video

YouTube has been at war with adblockers for quite some time now and has employed various tactics to keep users off those extensions. Its most recent defense strategy is to skip right to the end of the video you’re playing. If you try replaying it, it’ll do that again. If you tap anywhere on the timeline, your video will buffer indefinitely. Here’s what it looks like in action.

[…]

one of its first moves was to send a pop-up warning saying, “Video playback is blocked unless YouTube is allowlisted or the ad blocker is disabled.” However, users could close that pop-up and resume watching their videos.

Next, it tried to make videos unplayable by showing a never-ending loading screen. Then it dropped even that pretense and simply popped up an immovable prompt demanding the adblocker be disabled.

[…]

This latest move is frustrating, and that’s the point. There was a time when its ads were tolerable, but with the recent increase in ads on the video platform, users are finding it extremely hard to sit through a 20-second unskippable ad followed by a 5-second skippable one. Ad runtime isn’t proportionate to a video’s length, which adds to the bizarreness.

Google is aware of its monopoly over the video-sharing industry and has jacked up its ad-free Premium tier prices to $14 monthly. It has also extended its crackdown on mobile, resulting in buffering issues and error messages for users who dare to use an adblocker on their phones.

[…]

Users have also figured out workarounds. Some are switching to AdBlock alternatives, such as uBlock Origin, while others recommend browser substitutes like Brave to fix the issue. A few disappointed consumers are also considering bidding farewell to the platform.

[…]

Source: YouTube’s Crackdown on Adblockers Makes Videos Unwatchable

Samsung Requires Independent Repair Shops to Share Customer Data, Snitch on People, and Destroy Phones Using Aftermarket Parts, Leaked Contract Shows

In exchange for selling them repair parts, Samsung requires independent repair shops to give Samsung the name, contact information, phone identifier, and customer complaint details of everyone who gets their phone repaired at these shops, according to a contract obtained by 404 Media. Stunningly, it also requires these nominally independent shops to “immediately disassemble” any phones that customers have brought them that have been previously repaired with aftermarket or third-party parts and to “immediately notify” Samsung that the customer has used third-party parts.

[…]

The contract also requires the “daily” uploading of details of each and every repair that an independent company does into a Samsung database called G-SPN “at the time of each repair,” which includes the customer’s address, email address, phone number, details about what is wrong with their phone, their phone’s warranty status, details of the customer’s complaint, and the device’s IMEI number, which is a unique device identifier. 404 Media has verified the authenticity of the original contract and has recreated the version embedded at the bottom of this article to protect the source. No provisions have been changed.

The use of aftermarket parts in repair is relatively common. This provision requires independent repair shops to destroy the devices of their own customers, and then to snitch on them to Samsung.

[…]

People have a right to use third-party parts under the Magnuson Moss Warranty Act, for one thing, and it’s hard to square this contact language with that basic consumer right.”

[…]

The contract shows the incredible level of control that Samsung has over “independent” repair shops, which need to sign this agreement to get repair parts from Samsung. Signing this contract does not even make a repair shop an “authorized” repair center, which is a distinction that requires shop owners to jump through even more hoops.

[…]

“This is exactly the kind of onerous, one-sided ‘agreement’ that necessitates the right-to-repair,” Kit Walsh, a staff attorney at the Electronic Frontier Foundation and right-to-repair expert, told me. “The data collection is excessive. I may not have chosen to disclose my address or identity to Samsung, yet an added cost of repair—even at an independent shop—is giving that information up. In addition to the provision you mentioned about dismantling devices with third-party components, these create additional disincentives to getting devices repaired, which can harm both device security and the environment as repairable devices wind up in landfills.”

[…]

The contract also functionally limits the types of repairs these “independent” repair shops are allowed to do and does not authorize the stores to do repairs that require soldering or so-called board-level repair, which are increasingly common types of repairs.

Independent repair shops are also required to get a certification from an organization called WISE, which costs $200 annually and is an arm of the CTIA, a trade group made up of wireless companies like Verizon and AT&T that has repeatedly lobbied against right to repair laws. In effect, independent shops are required to fund an organization lobbying against their interests.

In 2020, Motherboard obtained a contract that Apple required independent repair companies to sign in order to get repair parts from the company. At the time, experts said that Apple’s contract was problematic because it allowed Apple to audit and inspect the shops at any time. The Samsung document is even more onerous because it requires them to essentially serve as enforcers for Samsung and requires the proactive sharing of consumer data.

[…]

Source: Samsung Requires Independent Repair Shops to Share Customer Data, Snitch on People Who Use Aftermarket Parts, Leaked Contract Shows

Netflix app update for Windows PCs will ditch downloads and offline viewing but give you stuff you never wanted.

In the past few weeks, users have received notifications in the Netflix app for Windows indicating that a new update is coming. The update will ship with many new features and quality-of-life improvements, including support for watching live events, improved streaming quality, compatibility with ad-supported plans, and more.

Wait – who wants any of this stuff? What quality-of-life is improved here?

[…]

However, the update will also remove the ability to download movies or series via the Netflix app for offline viewing, a feature users rely on when facing intermittent internet connection issues.

Source: An official Netflix app update for Windows PCs will ditch downloads and offline viewing following a crackdown on account sharing | Windows Central

So you get stuff you really don’t want and lose stuff you really do want – especially if you travel. Which most people do.

What are they thinking at Netflix? Well, I guess it’s out with the pirate hat again and download what I want to watch offline – even if it is available to me on a service I pay for…

Adobe threatens to sue Nintendo emulator Delta for its look-alike logo

Delta, an emulator that can play Nintendo games, had to change its logo after Adobe threatened legal action. You’d think it would face trouble from Nintendo, seeing as it has been going after emulators these days, but no. It’s Adobe who’s going after the developer, which told TechCrunch that it first received an email from the company’s lawyer on May 7. Adobe warned Delta that their logos are too similar, with its app icon infringing on the well-known Adobe “A,” and asked it to change its logo so it wouldn’t violate the company’s rights. Delta reportedly received an email from Apple, as well, telling the developer that Adobe asked it to take down the emulator app.


If you’ll recall, Apple started allowing retro game emulators on the App Store, as long as they don’t offer pirated games for download. Delta was one of the first to be approved for listing and was at the top of Apple’s charts for a while, which is probably why it caught Adobe’s attention. At the time of writing, it sits at number six in the ranking for apps in Entertainment with 17,100 ratings.

The developer told both Adobe and Apple that its logo was a stylized version of the Greek letter “delta,” and not the uppercase letter A. Regardless, it debuted a new logo, which looks like someone took a sword to its old one and cut it in half. It’s a temporary solution, though — the developer said it’s releasing the “final” version of its new logo when Delta 1.6 comes out.

Source: Adobe threatens to sue Nintendo emulator Delta for its look-alike logo

Top EU court says there is no right to online anonymity, because copyright is more important

A year ago, Walled Culture wrote about an extremely important case that was being considered by the Court of Justice of the European Union (CJEU), the EU’s top court. The central question was whether the judges considered that copyright was more important than privacy. The bad news is that the CJEU has just decided that it is:

The Court, sitting as the Full Court, holds that the general and indiscriminate retention of IP addresses does not necessarily constitute a serious interference with fundamental rights.

An IP address is the identifying Internet number assigned to a user’s system when it is online. It may change each time someone uses the Internet, but if Internet Service Providers are required by law to retain information about who was assigned a particular address at a given time, then it is possible to carry out routine surveillance of people’s online activities. The CJEU has decided this is acceptable:

EU law does not preclude national legislation authorising the competent public authority, for the sole purpose of identifying the person suspected of having committed a criminal offence, to access the civil identity data associated with an IP address

The key problem is that copyright infringement by a private individual is regarded by the court as something so serious that it negates the right to privacy. It’s a sign of the twisted values that copyright has succeeded in imposing on many legal systems. It equates the mere copying of a digital file with serious crimes that merit a prison sentence, an evident absurdity.

As one of the groups that brought the original case, La Quadrature du Net, writes, this latest decision also has serious negative consequences for human rights in the EU:

Whereas in 2020, the CJEU considered that the retention of IP addresses constituted a serious interference with fundamental rights and that they could only be accessed, together with the civil identity of the Internet user, for the purpose of fighting serious crime or safeguarding national security, this is no longer true. The CJEU has reversed its reasoning: it now considers that the retention of IP addresses is, by default, no longer a serious interference with fundamental rights, and that it is only in certain cases that such access constitutes a serious interference that must be safeguarded with appropriate protection measures.

As a result, La Quadrature du Net says:

While in 2020 [the CJEU] stated that there was a right to online anonymity enshrined in the ePrivacy Directive, it is now abandoning it. Unfortunately, by giving the police broad access to the civil identity associated with an IP address and to the content of a communication, it puts a de facto end to online anonymity.

This is a good example of how copyright’s continuing obsession with ownership and control of digital material is warping the entire legal system in the EU. What was supposed to be simply a fair way of rewarding creators has resulted in a monstrous system of routine government surveillance carried out on hundreds of millions of innocent people just in case they copy a digital file.

Source: Top EU court says there is no right to online anonymity, because copyright is more important – Walled Culture

Patent troll hits Microsoft with $242 million US verdict in Cortana lawsuit

Microsoft (MSFT.O) must pay patent owner IPA Technologies $242 million, a federal jury in Delaware said on Friday after determining that Microsoft’s Cortana virtual-assistant software infringed an IPA patent.

The jury agreed with IPA after a week-long trial that Microsoft’s voice-recognition technology violates IPA’s patent rights in computer-communications software.

IPA is a subsidiary of patent-licensing company Wi-LAN, which is jointly owned by Canadian technology company Quarterhill (QTRH.TO) and two investment firms. It bought the patent and others from SRI International’s Siri Inc, which Apple acquired in 2010 and whose technology it used in its Siri virtual assistant.

“We remain confident that Microsoft never infringed on IPA’s patents and will appeal,” a Microsoft spokesperson said.

Representatives for IPA and Wi-LAN did not immediately respond to a request for comment on the verdict.

IPA filed the lawsuit in 2018, accusing Microsoft of infringing patents related to personal digital assistants and voice-based data navigation. The case was later narrowed to concern one IPA patent. Microsoft argued that it does not infringe and that the patent is invalid.

IPA has also sued Google and Amazon over its patents. Amazon defeated IPA’s lawsuit in 2021, and the Google case is still ongoing.

Source: Microsoft hit with $242 million US verdict in Cortana patent lawsuit | Reuters

So basically some company that never did anything except buy some rights from somewhere managed to extort a quarter of a billion dollars from MS. What a brilliant system patents are!

iPhone users report deleted photos reappearing after update – turns out for Apple, delete doesn’t mean delete

Some iPhone users are reportedly seeing photos they had previously deleted resurface on their devices ever since updating to the latest version of iOS.

The user reports originate from Reddit, and it’s not just a couple of Apple users experiencing issues. By our count, 16 people who deleted their photos say they’ve come back. The deleted photos are apparently marked as recently added, making it very obvious which have made a comeback.

One user says that even photos from 2010 reappeared, and that they have “deleted them repeatedly.”

The Register was able to find a handful of instances of X users reporting the same problem.

[…]

The recent complaints were preceded by a different Reddit thread where three users reported the exact same thing happening in the beta version of iOS 17.5.

[…]

Some users previously reported disappearing photos on older versions of iOS 17, and the fix may have resulted in both accidentally and purposefully deleted photos being brought back to life.

If the issue is genuine, it wouldn’t be the first time iCloud has kept its hands on data after it was supposedly deleted, despite Apple’s emphasis on the privacy of its users. Back in 2017, iCloud was patched to fix a glitch where user browser history was retained for up to a year or so.

Source: iPhone users report deleted photos reappearing after update • The Register

FCC fines America’s largest wireless carriers $200 million for selling customer location data without permission

The Federal Communications Commission has slapped the largest mobile carriers in the US with collective fines worth $200 million for selling access to their customers’ location information without consent. AT&T was ordered to pay $57 million, while Verizon has to pay $47 million. Meanwhile, Sprint and T-Mobile face a combined penalty of $92 million, since the two companies merged two years ago. The FCC conducted an in-depth investigation into the carriers’ unauthorized disclosure and sale of subscribers’ real-time location data after their activities came to light in 2018.

To sum up the practice in the words of FCC Commissioner Jessica Rosenworcel: The carriers sold “real-time location information to data aggregators, allowing this highly sensitive data to wind up in the hands of bail-bond companies, bounty hunters, and other shady actors.” According to the agency, the scheme started to unravel following public reports that a sheriff in Missouri was tracking numerous individuals by using location information a company called Securus gets from wireless carriers. Securus provides communications services to correctional facilities in the country.

While the carriers eventually ceased their activities, the agency said they continued operating their programs for a year after the practice was revealed and after they promised the FCC that they would stop selling customer location data. Further, they carried on without reasonable safeguards in place to ensure that the legitimate services using their customers’ information, such as roadside assistance and medical emergency services, truly are obtaining users’ consent to track their locations.

Source: FCC fines America’s largest wireless carriers $200 million for selling customer location data

Helldivers 2 PC players suddenly have to link to a PSN account and they’re not being chill about it

[…]

This has royally pissed off PC players, though it’s worth noting that it’s free to make a PSN account. This has led to review bombing on Steam and many promises to abandon the game when the linking becomes a requirement, according to a report by Kotaku. The complaints range from frustration over adding yet another barrier to entry after downloading an 80GB game to fears that the PSN account would likely be hacked. While it is true that Sony was the target of a huge hack that impacted 77 million PSN accounts, that was back in 2011. Obama was still in his first term. Also worth noting? Steam was hacked in 2011, impacting 35 million accounts.

[…]

Source: Helldivers 2 PC players suddenly have to link to a PSN account and they’re not being chill about it

Nintendo blitzes GitHub with over 8,000 emulator-related DMCA takedowns

Nintendo sent a Digital Millennium Copyright Act (DMCA) notice for over 8,000 GitHub repositories hosting code from the Yuzu Switch emulator, which the Zelda maker previously described as enabling “piracy at a colossal scale.” The sweeping takedown comes two months after Yuzu’s creators quickly settled a lawsuit with Nintendo and its notoriously trigger-happy legal team for $2.4 million.

GamesIndustry.biz first reported on the DMCA notice, affecting 8,535 GitHub repos. Redacted entities representing Nintendo assert that the Yuzu source code contained in the repos “illegally circumvents Nintendo’s technological protection measures and runs illegal copies of Switch games.”

GitHub wrote on the notice that developers will have time to change their content before it’s disabled. In keeping with its developer-friendly approach and branding, the Microsoft-owned platform also offered legal resources and guidance on submitting DMCA counter-notices.

Nintendo’s legal blitz, perhaps not coincidentally, comes as game emulators are enjoying a resurgence. Last month, Apple loosened its restrictions on retro game players in the App Store (likely in response to regulatory threats), leading to the Delta emulator establishing itself as the de facto choice and reaching the App Store’s top spot. Nintendo may have calculated that emulators’ moment in the sun threatened its bottom line and began by squashing those that most immediately imperiled its income stream.

Sadly, Nintendo’s largely undefended legal assault against emulators ignores a crucial use for them that isn’t about piracy. Game historians see the software as a linchpin of game preservation. Without emulators, Nintendo and other copyright holders could make a part of history obsolete for future generations, as their corresponding hardware will eventually be harder to come by.

Source: Nintendo blitzes GitHub with over 8,000 emulator-related DMCA takedowns

Russia arrests in absentia former world chess champion Garry Kasparov on foreign agent and terrorist charges

Russia has arrested Garry Kasparov and charged him in connection with foreign agent and terrorist charges – much to the former chess champion’s amusement.

The city court in Syktyvkar, the largest city in Russia’s northwestern Komi region, announced it had arrested the grandmaster in absentia alongside former Russian parliament member Gennady Gudkov; Ivan Tyutrin, co-founder of the Free Russia Forum, which has been designated an ‘undesirable organisation’ in the country; and former environmental activist Yevgenia Chirikova.

All were charged with setting up a terrorist society, according to the court’s press service. As all were charged in their absence, none were physically held in custody.

[…]

Kasparov responded to the court’s bizarre arrest statement in an April 24 post shared on X, formerly Twitter. “In absentia is definitely the best way I’ve ever been arrested,” he said. “Good company, as well. I’m sure we’re all equally honoured that Putin’s terror state is spending time on this that would otherwise go persecuting and murdering.”

Kasparov has found himself in Russian President Vladimir Putin’s firing line after he voiced his opposition to the country’s leader. He has also pursued pro-democracy initiatives in Russia. But he felt unable to continue living in Russia after he was jailed and allegedly beaten by police in 2012, according to the Guardian. He was granted Croatian citizenship in 2014 following repeated difficulties in Russia.

[…]

Source: Russia arrests former world chess champion Garry Kasparov on foreign agent and terrorist charges – World News – Mirror Online

People Are Slowly Realizing Their Auto Insurance Rates Are Skyrocketing Because Their Car Is Covertly Spying On Them

Last month the New York Times’ Kashmir Hill published a major story on how GM collects driver behavior data then sells access (through LexisNexis) to insurance companies, which will then jack up your rates.

The absolute bare minimum you could expect from the auto industry here is that they’re doing this in a way that’s clear to car owners. But of course they aren’t; they’re burying “consent” deep in the mire of some hundred-page end user agreement nobody reads, usually attached not to the car purchase itself but to the apps consumers use to manage roadside assistance and other features.

Since Kashmir’s story was published, she says she’s been inundated with complaints from consumers about similar behavior. She’s even discovered that she’s one of the people GM spied on and tattled to insurers about. In a follow-up story, she recounts how she and her husband bought a Chevy Bolt, were auto-enrolled in a driver assistance program, and then had their data (which they couldn’t access) sold to insurers.

GM’s now facing 10 different federal lawsuits from customers pissed off that they were surreptitiously tracked and then forced to pay significantly more for insurance:

“In 10 federal lawsuits filed in the last month, drivers from across the country say they did not knowingly sign up for Smart Driver but recently learned that G.M. had provided their driving data to LexisNexis. According to one of the complaints, a Florida owner of a 2019 Cadillac CTS-V who drove it around a racetrack for events saw his insurance premium nearly double, an increase of more than $5,000 per year.”

GM (and some apologists) will of course proclaim that it is only fair for reckless drivers to pay more, but that’s generally not how it works. Pressured for unlimited quarterly returns, insurance companies will use absolutely anything they find in the data to justify raising rates.

[…]

Automakers — which have long had some of the worst privacy reputations in all of tech — are one of countless industries that lobbied relentlessly for decades to ensure Congress never passed a federal privacy law or regulated dodgy data brokers. And that the FTC — the over-burdened regulator tasked with privacy oversight — lacks the staff, resources, or legal authority to police the problem at any real scale.

The end result is just a parade of scandals. And if Hill were so inclined, she could write a similar story about every tech sector in America, given everything from your smart TV and electricity meter to refrigerator and kids’ toys now monitor your behavior and sell access to those insights to a wide range of dodgy data broker middlemen, all with nothing remotely close to ethics or competent oversight.

And despite the fact that this free for all environment is resulting in no limit of dangerous real-world harms, our Congress has been lobbied into gridlock by a cross-industry coalition of companies with near-unlimited budgets, all desperately hoping that their performative concerns about TikTok will distract everyone from the fact we live in a country too corrupt to pass a real privacy law.

Source: People Are Slowly Realizing Their Auto Insurance Rates Are Skyrocketing Because Their Car Is Covertly Spying On Them | Techdirt

Ring Spy Doorbell customers get measly $5.6 million in refunds in privacy settlement

In a 2023 complaint, the FTC accused the doorbell camera and home security provider of allowing its employees and contractors to access customers’ private videos. Ring allegedly used such footage to train algorithms without consent, among other purposes.

Ring was also charged with failing to implement key security protections, which enabled hackers to take control of customers’ accounts, cameras and videos. This led to “egregious violations of users’ privacy,” the FTC noted.

The resulting settlement required Ring to delete content that was found to be unlawfully obtained, establish stronger security protections

[…]

the FTC is sending 117,044 PayPal payments to impacted consumers who had certain types of Ring devices — including indoor cameras — during the timeframes that the regulators allege unauthorized access took place.

[…]

Earlier this year, the California-based company separately announced that it would stop allowing police departments to request doorbell camera footage from users, marking an end to a feature that had drawn criticism from privacy advocates.

Source: Ring customers get $5.6 million in refunds in privacy settlement | AP News

Considering the size of Ring and its customer base, this is a very, very light tap on the wrist for delivering poor security and a product that spies on everything on the street.

When You Need To Post A Lengthy Legal Disclaimer With Your Parody Song, You Know Copyright Is Broken

In a world where copyright law has run amok, even creating a silly parody song now requires a massive legal disclaimer to avoid getting sued. That’s the absurd reality we live in, as highlighted by the brilliant musical parody project “There I Ruined It.”

Musician Dustin Ballard creates hilarious videos, some of which reimagine popular songs in the style of wildly different artists, like Simon & Garfunkel singing “Baby Got Back” or the Beach Boys covering Jay-Z’s “99 Problems.” He appears to create the music himself, including singing the vocals, but uses an AI tool to adjust the vocal styles to match the artist he’s trying to parody. The results are comedic gold. However, Ballard felt the need to plaster his latest video with paragraphs of dense legalese just to avoid frivolous copyright strikes.

When our intellectual property system is so broken that it stifles obvious works of parody and creative expression, something has gone very wrong. Comedy and commentary are core parts of free speech, but overzealous copyright law is allowing corporations to censor first and ask questions later. And that’s no laughing matter.

If you haven’t yet watched the video above (and I promise you, it is totally worth it to watch), the last 15 seconds involve this long scrolling copyright disclaimer. It is apparently targeted at the likely mythical YouTube employee who might read it in assessing whether or not the song is protected speech under fair use.

And here’s a transcript:

The preceding was a work of parody which comments on the perceived misogynistic lyrical similarities between artists of two different eras: the Beach Boys and Jay-Z (Shawn Corey Carter). In the United States, parody is protected by the First Amendment under the Fair Use exception, which is governed by the factors enumerated in section 107 of the Copyright Act. This doctrine provides an affirmative defense for unauthorized uses that would otherwise amount to copyright infringement. Parody aside, copyrights generally expire 95 years after publication, so if you are reading this in the 22nd century, please disregard.

Anyhoo, in the unlikely event that an actual YouTube employee sees this, I’d be happy to sit down over coffee and talk about parody law. In Campbell v. Acuff-Rose Music, Inc., for example, the U.S. Supreme Court allowed for 2 Live Crew to borrow from Roy Orbison’s “Pretty Woman” on grounds of parody. I would have loved to be a fly on the wall when the justices reviewed those filthy lyrics! All this to say, please spare me the trouble of attempting to dispute yet another frivolous copyright claim from my old pals at Universal Music Group, who continue to collect the majority of this channel’s revenue. You’re ruining parody for everyone.

In 2024, you shouldn’t need to have a law degree to post a humorous parody song.

But, that is the way of the world today. The combination of the DMCA’s “take this down or else” and YouTube’s willingness to cater to big entertainment companies with the way ContentID works allows bogus copyright claims to have a real impact in all sorts of awful ways.

We’ve said it before: copyright remains the one tool that allows for the censorship of content, but it’s supposed to be applied only to situations of actual infringement. But because Congress and the courts have decided that copyright sits in some sort of weird First Amendment-free zone, it allows for the removal of content before there is any adjudication of whether or not the content is actually infringing.

And that has been a real loss to culture. There’s a reason we have fair use. There’s a reason we allow people to create parodies. It’s because it adds to and improves our cultural heritage. The video above (assuming it’s still available) is an astoundingly wonderful cultural artifact. But it’s one that is greatly at risk due to abusive copyright claims.

Nope, it has been taken down by Universal Music Group

Let’s also take this one step further. Tennessee just recently passed a new law, the ELVIS Act (Ensuring Likeness Voice and Image Security Act). This law expands the already problematic space of publicity rights based on a nonsense moral panic about AI and deepfakes. Because there’s an irrational (and mostly silly) fear of people taking the voice and likeness of musicians, this law broadly outlaws that.

While the ELVIS Act has an exemption for works deemed to be “fair use,” as with the rest of the discussion above, copyright law today seems to (incorrectly, in my opinion) take a “guilty until proven innocent” approach to copyright and fair use. That is, everything is set up to assume it’s infringing unless you can convince a court that it’s fair use, and that leads to all sorts of censorship.

[…]

Source: When You Need To Post A Lengthy Legal Disclaimer With Your Parody Song, You Know Copyright Is Broken | Techdirt

Europol asks tech firms, governments to unencrypt your private messages

In a joint declaration of European police chiefs published over the weekend, Europol said it needs lawful access to private messages, and said tech companies need to be able to scan them (ostensibly impossible with E2EE implemented) to protect users. Without such access, cops fear they won’t be able to prevent “the most heinous of crimes” like terrorism, human trafficking, child sexual abuse material (CSAM), murder, drug smuggling and other crimes.

“Our societies have not previously tolerated spaces that are beyond the reach of law enforcement, where criminals can communicate safely and child abuse can flourish,” the declaration said. “They should not now.”

Not exactly true – most EU countries do not tolerate anyone opening your private (snail) mail without a warrant.

The joint statement, which was agreed in cooperation with the UK’s National Crime Agency, isn’t exactly making a novel claim. It’s nearly the same line of reasoning that the Virtual Global Taskforce, an international law enforcement group founded in 2003 to combat CSAM online, made last year when Meta first started talking about implementing E2EE on Messenger and Instagram.

While Meta is not named in this latest declaration itself [PDF], Europol said that its opposition to E2EE “comes as end-to-end encryption has started to be rolled out across Meta’s messenger platform.” The UK NCA made a similar statement in its comments on the Europol missive released over the weekend.

The declaration urges the tech industry not to see user privacy as a binary choice, but rather as something that can be assured without depriving law enforcement of access to private communications.

Not really though. And if law enforcement can get at it, then so can everyone else.

[…] Gail Kent, Meta’s global policy director for Messenger, said in December the E2EE debate is far more complicated than the child safety issue that law enforcement makes it out to be, and leaving an encryption back door in products for police to take advantage of would only hamper trust in its messaging products.

Kent said Meta’s E2EE implementation prevents client-side scanning of content, which has been one of the biggest complaints from law enforcement. Kent said even that technology would violate user trust, as it serves as a workaround to intrude on user privacy without compromising encryption – an approach Meta is unwilling to take, according to Kent’s blog post.

As was pointed out during previous attempts to undermine E2EE, not only would an encryption back door (client-side scanning or otherwise) provide an inroad for criminals to access secured information, it wouldn’t stop criminals from finding some other way to send illicit content without the prying eyes of law enforcement able to take a look.
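
To make concrete what “client-side scanning” means in this debate, here is a minimal sketch (purely illustrative; real proposals use perceptual hashing against curated databases, not exact SHA-256 matching, and none of these names come from any actual product): the check runs on the sender’s device before encryption, so the ciphertext is never weakened, yet the provider can still learn about users’ content.

```python
import hashlib

# Hypothetical watchlist of known-bad content hashes (illustrative only).
WATCHLIST = {hashlib.sha256(b"known-illicit-bytes").hexdigest()}

def client_side_scan(plaintext: bytes) -> bool:
    """Runs on the sender's device *before* encryption: E2EE itself is
    untouched, but the provider still learns when content matches."""
    return hashlib.sha256(plaintext).hexdigest() in WATCHLIST

def send(plaintext: bytes, encrypt) -> bytes:
    if client_side_scan(plaintext):
        # Hypothetical: the client would report the match out-of-band here.
        pass
    return encrypt(plaintext)  # the message is still end-to-end encrypted

# A single changed byte defeats exact-hash matching entirely, which is
# the critics' point about offenders simply routing around the scanner.
print(client_side_scan(b"known-illicit-bytes"))   # matches the watchlist
print(client_side_scan(b"known-illicit-byteZ"))   # does not
```

This is also why Meta calls it a workaround rather than a compromise of encryption: the scanning happens entirely outside the encrypted channel.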

[…]

“We don’t think people want us reading their private messages, so have developed safety measures that prevent, detect and allow us to take action against this heinous abuse, while maintaining online privacy and security,” a Meta spokesperson told us last year. “It’s misleading and inaccurate to say that encryption would have prevented us from identifying and reporting accounts … to the authorities.”

In other words, don’t expect Meta to cave on this one when it can develop a fancy new detection algorithm instead.

Source: Europol asks tech firms, governments to get rid of E2EE • The Register

And every time they come for your freedom whilst quoting child safety – look out.

EDPS warns of EU plans to spy on personal chat messages

This week, during the presentation of its 2023 annual review (PDF), the European privacy supervisor EDPS again warned about European plans to monitor chat messages from European citizens. According to the watchdog, this leads to ‘irreversible surveillance’.

At the beginning of 2022, the European Commission came up with a proposal to inspect all chat messages and other communications from citizens for child abuse. In the case of end-to-end encrypted chat services, this should be done via client-side scanning.

The European Parliament voted against the proposal, but came up with its own proposal.

However, the European member states have not yet taken a joint position.

Already in 2022, the EDPS raised the alarm about the European Commission’s proposal to monitor citizens’ communications. It is seen as a serious risk to the fundamental rights of 450 million Europeans.

Source: EDPS warns of European plans to monitor chat messages – Emerce

Sure, the EU is not much of a democracy – the European Council (which is where the actual power is) is not elected at all – but that doesn’t mean it has to be a surveillance police state.

US Hospital Websites Almost All Give your Data to 3rd parties, but Many just don’t tell you about it

 In this cross-sectional analysis of a nationally representative sample of 100 nonfederal acute care hospitals, 96.0% of hospital websites transmitted user information to third parties, whereas 71.0% of websites included a publicly accessible privacy policy. Of 71 privacy policies, 40 (56.3%) disclosed specific third-party companies receiving user information.

[…]

Of 100 hospital websites, 96 […] transferred user information to third parties. Privacy policies were found on 71 websites […] 70 […] addressed how collected information would be used, 66 […] addressed categories of third-party recipients of user information, and 40 […] named specific third-party companies or services receiving user information.

[…]

In this cross-sectional study of a nationally representative sample of 100 nonfederal acute care hospitals, we found that although 96.0% of hospital websites exposed users to third-party tracking, only 71.0% of websites had an available website privacy policy. Policies averaged more than 2500 words in length and were written at a college reading level. Given estimates that more than one-half of adults in the US lack literacy proficiency and that the average patient in the US reads at a grade 8 level, the length and complexity of privacy policies likely pose substantial barriers to users’ ability to read and understand them.27,32

[…]

Only 56.3% of policies (and only 40 hospitals overall) identified specific third-party recipients. Named third parties tended to be companies familiar to users, such as Google. This lack of detail regarding third-party data recipients may lead users to assume that they are being tracked only by a small number of companies that they know well, when, in fact, hospital websites included in this study transferred user data to a median of 9 domains.
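
The “median of 9 domains” figure comes from counting the distinct third-party sites a page’s requests go to during a load. A minimal sketch of that counting (hypothetical URLs; the study itself used automated browser measurement, and robust versions resolve sites via the Public Suffix List rather than this last-two-labels heuristic):

```python
from urllib.parse import urlparse

def third_party_domains(first_party: str, request_urls: list[str]) -> set[str]:
    """Return the set of third-party 'sites' a page's requests go to.

    Naive heuristic: treat the last two hostname labels as the site, so
    www.example-hospital.org and cdn.example-hospital.org both count as
    first-party. (Real measurements use the Public Suffix List instead.)
    """
    def site(host: str) -> str:
        return ".".join(host.lower().split(".")[-2:])

    first = site(first_party)
    found = set()
    for url in request_urls:
        host = urlparse(url).hostname
        if host and site(host) != first:
            found.add(site(host))
    return found

# Hypothetical hospital page loading trackers alongside its own assets.
requests = [
    "https://www.example-hospital.org/styles.css",
    "https://cdn.example-hospital.org/logo.png",
    "https://www.google-analytics.com/collect?v=1",
    "https://connect.facebook.net/en_US/fbevents.js",
]
print(sorted(third_party_domains("www.example-hospital.org", requests)))
# → ['facebook.net', 'google-analytics.com']
```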

[…]

In addition to presenting risks for users, inadequate privacy policies may pose risks for hospitals. Although hospitals are generally not required under federal law to have a website privacy policy that discloses their methods of collecting and transferring data from website visitors, hospitals that do publish website privacy policies may be subject to enforcement by regulatory authorities like the Federal Trade Commission (FTC).33 The FTC has taken the position that entities that publish privacy policies must ensure that these policies reflect their actual practices.34 For example, entities that promise they will delete personal information upon request but fail to do so in practice may be in violation of the FTC Act.34

[…]

Source: User Information Sharing and Hospital Website Privacy Policies | Ethics | JAMA Network Open | JAMA Network

How private equity has used copyright to cannibalise the past at the expense of the future

Walled Culture has been warning about the financialisation and securitisation of music for two years now. Those obscure but important developments mean that the owners of copyrights are increasingly detached from the creative production process. They regard music as just another asset, like gold, petroleum or property, to be exploited to the maximum. A Guest Essay in the New York Times points out one of the many bad consequences of this trend:

Does that song on your phone or on the radio or in the movie theater sound familiar? Private equity — the industry responsible for bankrupting companies, slashing jobs and raising the mortality rates at the nursing homes it acquires — is making money by gobbling up the rights to old hits and pumping them back into our present. The result is a markedly blander music scene, as financiers cannibalize the past at the expense of the future and make it even harder for us to build those new artists whose contributions will enrich our entire culture.

As well as impoverishing our culture, the financialisation and securitisation of music is making life even harder for the musicians it depends on:

In the 1990s, as the musician and indie label founder Jenny Toomey wrote recently in Fast Company, a band could sell 10,000 copies of an album and bring in about $50,000 in revenue. To earn the same amount in 2024, the band’s whole album would need to rack up a million streams — roughly enough to put each song among Spotify’s top 1 percent of tracks. The music industry’s revenues recently hit a new high, with major labels raking in record earnings, while the streaming platforms’ models mean that the fractions of pennies that trickle through to artists are skewed toward megastars.

Part of the problem is the extremely low rates paid by streaming services. But the larger issue is the power imbalance within all the industries based on copyright. The people who actually create books, music, films and the rest are forced to accept bad deals with the distribution companies. Walled Culture, the book (free ebook versions available), details the painfully low income the vast majority of artists derive from their creativity, and how most are forced to take side jobs to survive. This daily struggle is so widespread that it is no longer remarked upon. It is one of the copyright world’s greatest successes that the public and many creators now regard this state of affairs as a sad but unavoidable fact of life. It isn’t.

The New York Times opinion piece points out that there are signs private equity is already moving on to its next market/victim, having made its killing in the music industry. But one thing is for sure. New ways of financing today’s exploited artists are needed, and not ones cooked up by Wall Street. Until musicians and creators in general take back control of their works, rather than acquiescing in the hugely unfair deal that is copyright, it will always be someone else who makes most of the money from their unique gifts.

Source: How private equity has used copyright to cannibalise the past at the expense of the future – Walled Culture

Of course, the whole model of continuously making money from a single creation is a bit fucked up. If a businessman were to ask for money every time someone read their email, that would be plain stupid. How is this any different?

Dutch investigation into Android smartphones leads to new lawsuit against Google over Play Services’ constant surveillance

The Mass Damage & Consumer Foundation today announced that it has initiated a class action lawsuit against Google over its Android operating system. The reason is a new study that shows how Dutch Android smartphones systematically transfer large amounts of information about device use to Google. Even with the most privacy-friendly options enabled, user data cannot be prevented from ending up on Google’s servers. According to the foundation, this is not clear to Android users, let alone whether they have given permission for this.

For the research, a team of scientists purchased several Android phones between 2022 and 2024 and captured, decrypted and analyzed the outgoing traffic on a Dutch server. This shows that a bundle of processes called ‘Google Play Services’ runs silently in the background and cannot be disabled or deleted. These processes continuously record what happens on and around the phone. For example, Google learns which apps someone uses, which products they order and even whether users are sleeping.

More than nine million Dutch people

The Mass Damage & Consumer Foundation states that Google’s conduct violates a large number of Dutch and European rules that must protect consumers. The foundation wants to use a lawsuit to force Google to implement fundamental (privacy) changes to the Android platform and to offer an opt-out option for every form of data it collects, not just a few.

[…]

Identity can be easily traced

The research paid specific attention to the use of unique identifiers (UIDs). These are characteristics that Google can link to the collected data, such as an e-mail address or Android ID, a unique serial number with which someone is known to Google. The use of these features is sensitive. For example, Google advises against the use of unique features in its own guidelines for app developers: users could unintentionally be tracked across multiple apps. However, one or more of these unique features were found in the data transmissions examined – without exception. The researchers point out that this makes it easy to trace someone’s identity to virtually everything that happens on and around an Android device.
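
To illustrate why a single stable identifier is so powerful: once every transmission carries the same Android ID, events from unrelated apps can be joined into one per-device timeline. A toy sketch with invented records (not the researchers’ data or field names):

```python
from collections import defaultdict

# Invented telemetry records; each transmission carries the same Android ID.
events = [
    {"android_id": "a1b2c3", "app": "health",   "event": "sleep_start",     "t": 3},
    {"android_id": "a1b2c3", "app": "maps",     "event": "location_update", "t": 1},
    {"android_id": "a1b2c3", "app": "shopping", "event": "order_placed",    "t": 2},
]

# Group by identifier and sort by time: a cross-app activity profile falls out.
timelines = defaultdict(list)
for e in sorted(events, key=lambda e: e["t"]):
    timelines[e["android_id"]].append((e["app"], e["event"]))

print(timelines["a1b2c3"])
# → [('maps', 'location_update'), ('shopping', 'order_placed'), ('health', 'sleep_start')]
```

This trivial join is exactly why Google’s own developer guidelines advise against stable identifiers: one shared key is enough to link a person’s activity across otherwise unrelated apps.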

[…]

Source: Dutch investigation into Android smartphones leads to new lawsuit against Google – Mass Damage & Consumer Foundation

OpenAI and Google train AIs on transcriptions of YouTube videos – YouTube and NYTimes desperately try to profit somehow without doing anything except filing lawsuits

OpenAI and Google trained their AI models on text transcribed from YouTube videos, potentially violating creators’ copyrights, according to The New York Times.

Note – the New York Times is embroiled in copyright lawsuits over AI, in which it clearly shows it doesn’t understand that an AI reading content is the same as a person reading content, that content offered up for free with no paywall is free for everyone, and that entering content and then asking for it back doesn’t mean that copyright is infringed.

[…]

It comes just days after YouTube CEO Neal Mohan said in an interview with Bloomberg Originals that OpenAI’s alleged use of YouTube videos to train its new text-to-video generator, Sora, would go against the platform’s policies.

According to the NYT, OpenAI used its Whisper speech recognition tool to transcribe more than one million hours of YouTube videos, which were then used to train GPT-4. The Information previously reported that OpenAI had used YouTube videos and podcasts to train the two AI systems. OpenAI president Greg Brockman was reportedly among the people on this team. Per Google’s rules, “unauthorized scraping or downloading of YouTube content” is not allowed.

[…]

The way the data is stored in an ML model means that the data is not scraped or downloaded – unless, that is, you consider every view to be downloading or scraping.

What this shows is a determination to ride the AI hype and find a way to monetise content that has already been released to the public for free, without any extra effort apart from hiring a bunch of lawyers. The players are big and the payoff is potentially huge in terms of cash, but in terms of setting back progress, throwing everything under the copyright bus is a staggering disaster.

Source: OpenAI and Google reportedly used transcriptions of YouTube videos to train their AI models

Academics Try to Figure Out Apple’s default apps Privacy Settings and Fail

A study has concluded that Apple’s privacy practices aren’t particularly effective, because default apps on the iPhone and Mac have limited privacy settings and confusing configuration options.

The research was conducted by Amel Bourdoucen and Janne Lindqvist of Aalto University in Finland. The pair noted that while many studies had examined privacy issues with third-party apps for Apple devices, very little literature investigates the issue in first-party apps – like Safari and Siri.

The aims of the study [PDF] were to investigate how much data Apple’s own apps collect and where it’s sent, and to see if users could figure out how to navigate the landscape of Apple’s privacy settings.

[…]

“Our work shows that users may disable default apps, only to discover later that the settings do not match their initial preference,” the paper states.

“Our results demonstrate users are not correctly able to configure the desired privacy settings of default apps. In addition, we discovered that some default app configurations can even reduce trust in family relationships.”

The researchers criticize data collection by Apple apps like Safari and Siri, where that data is sent, how users can (and can’t) disable that data tracking, and how Apple presents privacy options to users.

The paper illustrates these issues in a discussion of Apple’s Siri voice assistant. While users can ostensibly choose not to enable Siri in the initial setup on macOS-powered devices, it still collects data from other apps to provide suggestions. To fully disable Siri, Apple users must find privacy-related options across five different submenus in the Settings app.

Apple’s own documentation for how its privacy settings work isn’t good either. It doesn’t mention every privacy option, explain what is done with user data, or highlight whether settings are enabled or disabled. Also, it’s written in legalese, which almost guarantees no normal user will ever read it.

[…]

The authors also conducted a survey of Apple users and quizzed them on whether they really understood how privacy options worked on iOS and macOS, and what apps were doing with their data.

While the survey was very small – it covered just 15 respondents – the results indicated that Apple’s privacy settings could be hard to navigate.

Eleven of the surveyed users were well aware about data tracking and that it was mostly on by default. However, when informed about how privacy options work in iOS and macOS, nine of the surveyed users were surprised about the scope of data collection.

[…]

Users were also tested on their knowledge of privacy settings for eight default apps – including Siri, Family Sharing, Safari, and iMessage. According to the study, none could confidently figure out how to work their way around the Settings menu to completely disable default apps. When confused, users relied on searching the internet for answers, rather than Apple’s privacy documentation.

[…]

Assuming Apple has any interest in fixing these shortcomings, the team made a few suggestions. Since many users first went to operating system settings instead of app-specific settings when attempting to disable data tracking, relocating those controls to the OS level could assist users. Centralizing these options would also prevent users from getting frustrated and giving up on finding the settings they’re looking for.

Informing users what specific settings do would also be an improvement – many settings are labelled with just a name, but no further details. The researchers suggest replacing Apple’s jargon-filled privacy policy with descriptions that are in the settings menu itself, and maybe even providing some infographic illustrations as well. Anything would be better than legalese.

While this study probably won’t convince Apple to change its ways, lawsuits might have better luck. Apple has been sued multiple times for not transparently disclosing its data tracking. One of the latest suits calls out Apple’s broken promises about privacy, claiming that “Apple does not honor users’ requests to restrict data sharing.”

[…]

Reminder: Apple has a multi-billion-dollar online ads business that it built while strongly criticizing Facebook and others for their privacy practices.

Source: Academics reckon Apple’s default apps have privacy pitfalls • The Register

Roku’s New Idea to Show You Ads When You Pause Your Video Game (and Spy on the Content on Your HDMI Cable) Is Horrifying

[…]

Roku describes its idea in a patent application, which largely flew under the radar when it was filed in November, and was recently spotted by the streaming newsletter Lowpass. In the application, Roku describes a system that’s able to detect when users pause third-party hardware and software and show them ads during that time.

According to the company, its new system works via an HDMI connection. This suggests that it’s designed to target users who play video games or watch content from other streaming services on their Roku TVs. Lowpass described Roku’s conundrum perfectly:

“Roku’s ability to monetize moments when the TV is on but not actively being used goes away when consumers switch to an external device, be it a game console or an attached streaming adapter from a competing manufacturer,” Janko Roettgers, the newsletter’s author, wrote. “Effectively, HDMI inputs have been a bit of a black box for Roku.”

In addition, Roku wouldn’t just show you any old ads. The company states that its innovation can recognize the content that users have paused and deliver customized related ads. Roku’s system would do this by using audio or video-recognition technologies to analyze what the user is watching or analyze the content’s metadata, among other methods.

[…]

In the case of gaming, there’s also the danger of Roku mistaking a long moment of pondering for a pause and sticking an ad in right when you’re getting ready to face the final boss. The company is aware of this potential failure and points out that its system will monitor the frames of the content being watched to ensure there really was a pause. It also plans on using other methods, such as analyzing the audio feed on the TV for extended moments of silence, to confirm there has been a pause.
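
The two heuristics the patent describes – static frames plus a quiet audio feed – can be sketched like this (a toy illustration of the general idea, not Roku’s implementation; real systems would work on decoded video frames and PCM audio from the HDMI stream):

```python
def frames_static(frames, tolerance=0):
    """True if every frame matches the first within a per-pixel tolerance."""
    first = frames[0]
    return all(
        all(abs(p - q) <= tolerance for p, q in zip(first, f))
        for f in frames[1:]
    )

def audio_silent(samples, threshold=0.01):
    """True if the mean absolute amplitude is below the silence threshold."""
    return sum(abs(s) for s in samples) / len(samples) < threshold

def looks_paused(frames, samples):
    # Requiring BOTH signals avoids firing during a still-but-noisy scene,
    # or during a quiet moment that still has on-screen motion
    # (e.g. pondering before the final boss).
    return frames_static(frames, tolerance=2) and audio_silent(samples)

# Toy example: identical tiny "frames" (flat pixel lists) and near-silence.
frozen = [[10, 10, 12, 200]] * 5
quiet = [0.001, -0.002, 0.0, 0.001]
print(looks_paused(frozen, quiet))
```

Requiring both signals is presumably how the system would avoid the mistaken-pause failure mode the article describes, at the cost of continuously analyzing whatever flows over the HDMI input.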

[…]

Source: Roku’s New Idea to Show You Ads When You Pause Your Video Game Is Horrifying

Google will delete data collected from private browsing

In hopes of settling a lawsuit challenging its data collection practices, Google has agreed to destroy web browsing data it collected from users browsing in Chrome’s private modes – which weren’t as private as you might have thought.

The lawsuit [PDF], filed in June 2020 on behalf of plaintiffs Chasom Brown, Maria Nguyen, and William Byatt, sought to hold Google accountable for making misleading statements about privacy.

[…]

“Despite its representations that users are in control of what information Google will track and collect, Google’s various tracking tools, including Google Analytics and Google Ad Manager, are actually designed to automatically track users when they visit webpages – no matter what settings a user chooses,” the complaint claims. “This is true even when a user browses in ‘private browsing mode.'”

Chrome’s Incognito mode only provides privacy in the client by not keeping a locally stored record of the user’s browsing history. It does not shield website visits from Google.

[…]

During the discovery period from September 2020 through March 2022, Google produced more than 5.8 million pages of documents. Even so, it was sanctioned nearly $1 million in 2022 by Magistrate Judge Susan van Keulen – for concealing details about how it can detect when Chrome users employ Incognito mode.

What the plaintiffs’ legal team found might have been difficult to explain at trial.

“Google employees described Chrome Incognito Mode as ‘misleading,’ ‘effectively a lie,’ a ‘confusing mess,’ a ‘problem of professional ethics and basic honesty,’ and as being ‘bad for users, bad for human rights, bad for democracy,'” according to the declaration [PDF] of Mark C Mao, a partner with the law firm of Boies Schiller Flexner LLP, which represents the plaintiffs.

[…]

On December 26 last year the plaintiffs and Google agreed to settle the case. The plaintiffs’ attorneys have suggested the relief provided by the settlement is worth $5 billion – but nothing will be paid, yet.

The settlement covers two classes of people: one of which excludes those using Incognito mode while logged into their Google Account:

  • Class 1: All Chrome browser users with a Google account who accessed a non-Google website containing Google tracking or advertising code using such browser and who were (a) in “Incognito mode” on that browser and (b) were not logged into their Google account on that browser, but whose communications, including identifying information and online browsing history, Google nevertheless intercepted, received, or collected from June 1, 2016 through the present.
  • Class 2: All Safari, Edge, and Internet Explorer users with a Google account who accessed a non-Google website containing Google tracking or advertising code using such browser and who were (a) in a “private browsing mode” on that browser and (b) were not logged into their Google account on that browser, but whose communications, including identifying information and online browsing history, Google nevertheless intercepted, received, or collected from June 1, 2016 through the present.

The settlement [PDF] requires that Google: inform users that it collects private browsing data, both in its Privacy Policy and in an Incognito Splash Screen; “must delete and/or remediate billions of data records that reflect class members’ private browsing activities”; block third-party cookies in Incognito mode for the next five years (separately, Google is phasing out third-party cookies this year); and must delete the browser signals that indicate when private browsing mode is active, to prevent future tracking.

[…]

The class of affected people has been estimated to number about 136 million.

Source: Google will delete data collected from private browsing • The Register