Google Will Require Android Apps to Make Account Deletion Easier

Right now, developers simply need to declare to Google that account deletion is somehow possible, but beginning next year, developers will have to make it easier to delete data through both their app and an online portal. Google specifies:

For apps that enable app account creation, developers will soon need to provide an option to initiate account and data deletion from within the app and online.

This means any app that lets you create an account to use it is required to allow you to delete that information when you’re done with it (or rather, request the developer delete the data from their servers). Although you can request that your data be deleted now, it usually requires manually contacting the developer to remove it. This new policy would mean developers have to offer a kill switch from the get-go rather than having Android users do the leg work.

The web deletion requirement is particularly new and must be “readily discoverable.” Developers must provide a link to a web form from the app’s Play Store landing page, with the idea being to let users delete account data even if they no longer have the app installed. Per the existing Android developer policy, all apps must declare how they collect and handle user data—Google introduced the policy in 2021 and made it mandatory last year. When you go into the Play Store and expand the “Data Safety” section under each app listing, developers list out data collection by criteria.

Simply removing an app from your Android device doesn’t completely scrub your data. Like software on a desktop operating system, files and folders are sometimes left behind from when the app was operating. This new policy will hopefully help you keep your data secure by wiping any unnecessary account info from the app developer’s servers, but also hopes to cut down on straggling data on your device. Conversely, you don’t have to delete your data if you think you’ll come back to the app later. When it says you have a “choice,” Google wants to ensure it can point to something obvious.

It’s unclear how Google will determine if a developer follows the rules. It is up to the app developer to disclose whether user-specific app data is actually deleted. Earlier this year, Mozilla called out Google after discovering significant discrepancies between the top 20 most popular free apps’ internal privacy policies and those they listed in the Play Store.

https://gizmodo.com/google-android-delete-account-apps-request-uninstall-1850304540

Tesla Employees Have Been Meme-ing Your Private Car Videos

“We could see inside people’s garages and their private properties,” a former employee told Reuters. “Let’s say that a Tesla customer had something in their garage that was distinctive, you know, people would post those kinds of things.”

One office in particular, located in San Mateo, reportedly had a “free-wheeling” atmosphere, where employees would share videos and images with wild abandon. These pics or vids would often be “marked-up” in Adobe Photoshop, former employees said, converting drivers’ personal experiences into memes that would circulate throughout the office.

“The people who buy the car, I don’t think they know that their privacy is, like, not respected,” one former employee was quoted as saying. “We could see them doing laundry and really intimate things. We could see their kids.”

Another former employee seemed to admit that all of this was very uncool: “It was a breach of privacy, to be honest. And I always joked that I would never buy a Tesla after seeing how they treated some of these people,” the employee told the news outlet. Yes, it’s always a vote of confidence when a company’s own employees won’t use the products that they sell.

Privacy concerns related to Tesla’s data-guzzling autos aren’t exactly new. Back in 2021, the Chinese government formally banned the vehicles on the premises of certain military installations, calling the company a “national security” threat. The Chinese were worried that the cars’ sensors and cameras could be used to funnel data out of China and back to the U.S. for the purposes of espionage. Beijing seems to have been on to something—although it might be the case that the spying threat comes less from America’s spooks than it does from bored slackers back at Tesla HQ.

One of the reasons that Tesla’s cameras seem so creepy is that you can never really tell if they’re on or not. A couple of years ago, a stationary Tesla helped catch a suspect in a Massachusetts hate crime, when its security system captured images of the man slashing tires in the parking lot of a predominantly Black church. The man was later arrested on the basis of the photos.

Reuters notes that it wasn’t ultimately “able to determine if the practice of sharing recordings, which occurred within some parts of Tesla as recently as last year, continues today or how widespread it was.”

With all this in mind, you might as well always assume that your Tesla is watching, right? And, now that Reuters’ story has come out, you should also probably assume that some bored coder is also watching, potentially in the hopes of converting your dopiest in-car moment into a meme.

https://gizmodo.com/tesla-elon-musk-car-camera-videos-employees-watching-1850307575

Wow, who knew? How surprising… not.

Tesla workers shared and memed sensitive images recorded by customer cars

Private camera recordings, captured by cars, were shared in chat rooms: ex-workers
Circulated clips included one of a child being hit by a car: ex-employees
Tesla says recordings made by vehicle cameras ‘remain anonymous’
One video showed submersible vehicle from James Bond film, owned by Elon Musk


LONDON/SAN FRANCISCO, April 6 (Reuters) – Tesla Inc assures its millions of electric car owners that their privacy “is and will always be enormously important to us.” The cameras it builds into vehicles to assist driving, it notes on its website, are “designed from the ground up to protect your privacy.”

But between 2019 and 2022, groups of Tesla employees privately shared via an internal messaging system sometimes highly invasive videos and images recorded by customers’ car cameras, according to interviews by Reuters with nine former employees.

Some of the recordings caught Tesla customers in embarrassing situations. One ex-employee described a video of a man approaching a vehicle completely naked.

Also shared: crashes and road-rage incidents. One crash video in 2021 showed a Tesla driving at high speed in a residential area hitting a child riding a bike, according to another ex-employee. The child flew in one direction, the bike in another. The video spread around a Tesla office in San Mateo, California, via private one-on-one chats, “like wildfire,” the ex-employee said.

Other images were more mundane, such as pictures of dogs and funny road signs that employees made into memes by embellishing them with amusing captions or commentary, before posting them in private group chats. While some postings were only shared between two employees, others could be seen by scores of them, according to several ex-employees.

Tesla states in its online “Customer Privacy Notice” that its “camera recordings remain anonymous and are not linked to you or your vehicle.” But seven former employees told Reuters the computer program they used at work could show the location of recordings – which potentially could reveal where a Tesla owner lived.

One ex-employee also said that some recordings appeared to have been made when cars were parked and turned off. Several years ago, Tesla would receive video recordings from its vehicles even when they were off, if owners gave consent. It has since stopped doing so.

“We could see inside people’s garages and their private properties,” said another former employee. “Let’s say that a Tesla customer had something in their garage that was distinctive, you know, people would post those kinds of things.”

Tesla didn’t respond to detailed questions sent to the company for this report.

About three years ago, some employees stumbled upon and shared a video of a unique submersible vehicle parked inside a garage, according to two people who viewed it. Nicknamed “Wet Nellie,” the white Lotus Esprit sub had been featured in the 1977 James Bond film, “The Spy Who Loved Me.”

The vehicle’s owner: Tesla Chief Executive Elon Musk, who had bought it for about $968,000 at an auction in 2013. It is not clear whether Musk was aware of the video or that it had been shared.

The submersible Lotus vehicle nicknamed “Wet Nellie” that featured in the 1977 James Bond film, “The Spy Who Loved Me,” and which Tesla chief executive Elon Musk purchased in 2013. Tim Scott ©2013 Courtesy of RM Sotheby’s
Musk didn’t respond to a request for comment.

To report this story, Reuters contacted more than 300 former Tesla employees who had worked at the company over the past nine years and were involved in developing its self-driving system. More than a dozen agreed to answer questions, all speaking on condition of anonymity.

Reuters wasn’t able to obtain any of the shared videos or images, which ex-employees said they hadn’t kept. The news agency also wasn’t able to determine if the practice of sharing recordings, which occurred within some parts of Tesla as recently as last year, continues today or how widespread it was. Some former employees contacted said the only sharing they observed was for legitimate work purposes, such as seeking assistance from colleagues or supervisors.

https://www.reuters.com/technology/tesla-workers-shared-sensitive-images-recorded-by-customer-cars-2023-04-06/

ICE Is Grabbing Data From Schools, Abortion Clinics and news orgs with no judicial oversight

US Immigration and Customs Enforcement agents are using an obscure legal tool to demand data from elementary schools, news organizations, and abortion clinics in ways that, some experts say, may be illegal.

While these administrative subpoenas, known as 1509 customs summonses, are meant to be used only in criminal investigations about illegal imports or unpaid customs duties, WIRED found that the agency has deployed them to seek records that seemingly have little or nothing to do with customs violations, according to legal experts and several recipients of the 1509 summonses.

A WIRED analysis of an Immigration and Customs Enforcement (ICE) subpoena tracking database, obtained through a Freedom of Information Act request, found that agents issued customs summonses more than 170,000 times from the beginning of 2016 through mid-August 2022. The primary recipients of 1509s include telecommunications companies, major tech firms, money transfer services, airlines, and even utility companies. But it’s the edge cases that have drawn the most concern among legal experts.

The outlier cases include customs summonses that sought records from a youth soccer league in Texas; surveillance video from a major abortion provider in Illinois; student records from an elementary school in Georgia; health records from a major state university’s student health services; data from three boards of elections or election departments; and data from a Lutheran organization that provides refugees with humanitarian and housing support.

In at least two instances, agents at ICE used customs summonses to pressure news organizations to reveal information about their sources.

All of this is done without judicial oversight.

[…]

The 1509 customs summons is an administrative subpoena explicitly and exclusively meant for use in investigations of illegal imports or unpaid customs duties under a law known as Title 19 US Code 1509. Its goal is to provide agencies like ICE with a way to obtain business records from companies without having to go to a judge for a warrant.

[…]

Without access to the underlying subpoenas ICE issued in each use of a 1509, it’s difficult to know exactly why companies in the database were issued customs summonses. However, nearly everyone we spoke to was concerned about the types of organizations that received these summonses. Our investigation found that ICE issued scores of customs summonses to hospitals and hundreds to elementary schools, high schools, and universities. “It’s disturbing,” Mao says. “I really can’t imagine how a student or a health record could possibly be relevant to a permissible customs investigation under the law.”

To figure out if these summonses were issued for customs investigations, we contacted 30 organizations that received them. Most did not respond, and many who did refused to speak on the record for fear of retaliation.

[…]

In March last year, US senator Ron Wyden, an Oregon Democrat who chairs the Senate Finance Committee, revealed that ICE had been using 1509 customs summonses to obtain millions of money transfer records, which were added to a database that was shared with hundreds of law enforcement agencies across the country. According to the American Civil Liberties Union (ACLU), it was one of the largest government surveillance programs in recent memory.

Immediately after Wyden’s investigation, the number of customs summonses issued by ICE fell from 3,683 in March 2022 to 1,650 by the end of August, according to the records WIRED obtained.

[…]


Source: ICE Is Grabbing Data From Schools and Abortion Clinics | WIRED

Cruz, Warren Intro America Act to Break Up huge advertisers

[…]

The Advertising Middlemen Endangering Rigorous Internet Competition Accountability Act, aka the AMERICA Act. Say what you will about government; Congress’ acronym acumen is untouchable. Introduced by Republican Sen. Mike Lee of Utah, the bill would prohibit companies from owning multiple parts of the digital ad ecosystem if they “process more than $20 billion in digital ad transactions.”

The bill would kneecap Google and Meta, the two biggest players in digital advertising by far, but its provisions seem designed to affect almost every big tech company from Apple to Amazon, too. Google, Meta, Amazon, and Apple did not respond to requests for comment.

The only thing longer than the name of the bill is the stunningly bipartisan list of Senators supporting it: Democrats Amy Klobuchar, Richard Blumenthal, and Elizabeth Warren, and Republicans Ted Cruz, Marco Rubio, Eric Schmitt, Josh Hawley, John Kennedy, Lindsey Graham, J.D. Vance, and Lee. As one observer put it on Twitter, it’s a list of cosponsors “who wouldn’t hold the elevator for each other.” Look at all these little Senators getting along. Isn’t that nice?

[…]

“If enacted into law, this bill would most likely require Google and Facebook to divest significant portions of their advertising businesses—business units that account for or facilitate a large portion of their ad revenue,” Sen. Lee said in a fact sheet about the bill. “Amazon may also have to make divestments, and the bill will impact Apple’s accelerating entry into third-party ads.”

[…]

When you see an ad online, it’s usually the result of a lightspeed bidding war. On one side, the demand side, you have companies who want to buy ads. On the other, the supply side, are apps and websites that have ad space to sell. Advertisers use demand-side tech to compete for the most profitable ad space for their products. Publishers, like Gizmodo.com, use supply-side tech, where they compete to sell the most profitable ads. Sometimes there’s a third piece of tech involved called an “exchange,” which is a service that connects demand-side platforms and supply-side platforms to arrange even more complicated auctions.
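The bidding war described above can be sketched in a few lines. This is a minimal illustration of one common auction design (a second-price auction), not any real exchange’s API; all DSP names and bid amounts are hypothetical.

```python
# A toy version of the programmatic ad auction described above:
# demand-side platforms (DSPs) submit bids for one ad slot, and the
# exchange awards the slot to the highest bidder, who pays the
# second-highest price (a common "second-price" auction design).

def run_auction(bids):
    """bids: dict mapping DSP name -> bid in dollars.
    Returns (winner, price_paid) under second-price rules."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    # Rank bidders from highest to lowest bid.
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    _, second_price = ranked[1]
    return winner, second_price

# A publisher's ad slot draws bids from three hypothetical DSPs:
winner, price = run_auction({"dsp_a": 2.50, "dsp_b": 1.75, "dsp_c": 3.10})
# dsp_c wins the slot but pays the runner-up's price, $2.50.
```

In reality the exchange runs millions of these auctions per second, and the conflict the bill targets is that one company can simultaneously operate the exchange, the biggest bidder, and the biggest seller.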

Your friends at Google operate the most popular demand-side platform. Google also owns the most popular supply-side platform, and it runs the most popular exchange. And Google is also a publisher, because it sells ad space on places like YouTube and Search. Meta likewise has its hands in multiple slices of the pie. Here’s an analogy: it’s like if the realtor you contracted to represent you in buying a house had also been contracted by the people selling the house. It would be hard to trust that anyone was getting a fair deal, wouldn’t it? That realtor would be in a unique position to jack up the prices for everyone and make extra cash. The dominance is quantifiable—Google itself estimates that it snatches a stunning 35% of every dollar spent on digital ads.

Some people think this is all a little unfair! Unfortunately for Google and Meta, more and more of those people work for the US government.

[…]

Source: Cruz, Warren Intro America Act to Break Up Google, Facebook

This only targets a specific part of the monopolies/duopolies these companies hold, but it’s hugely bipartisan, so we take what we can get.

‘A Blow for Libraries’: Internet Archive Loses Copyright Infringement Lawsuit by money-grubbing publishers

A judge ruled against Internet Archive, a free online digital library, on Friday in a lawsuit filed by four top publishers who claimed the company was in violation of copyright laws. The publishers, Hachette Book Group, HarperCollins, John Wiley & Sons, and Penguin Random House filed the lawsuit against Internet Archive in 2020, claiming the company had illegally scanned and uploaded 127 of their books for readers to download for free, detracting from their sales and the authors’ royalties.

U.S. District Court Judge John G. Koeltl ruled in favor of the publishing houses, saying that Internet Archive was making “derivative” works by transforming printed books into e-books and distributing them.

[…]

Koeltl’s decision was based in part on the licensing framework under which libraries are required to pay publishers for continued use of their digital book copies and are only permitted to lend these digital copies a specified number of times, as agreed by the publisher [not the writer!], before paying to renew the license.

[…]

However, according to the court ruling, Hachette and Penguin provide one- or two-year terms to libraries, in which the eBook can be rented an unlimited number of times before the library has to purchase a new license. HarperCollins allows the library to circulate a digital copy 26 times before the license has to be renewed, while Wiley has continued to experiment with several subscription models.

[…]

The judge ruled that because Internet Archive was purchasing the book only once before scanning it and lending each digital copy an unlimited number of times, it is an infringement of copyright and “concerns the way libraries lend eBooks.”

[…]

Source: ‘A Blow for Libraries’: Internet Archive Loses Copyright Infringement Lawsuit

The decision was “a blow to all libraries and the communities we serve,” argued Chris Freeland, the director of Open Libraries at the Internet Archive. In a blog post he argued the decision “impacts libraries across the U.S. who rely on controlled digital lending to connect their patrons with books online. It hurts authors by saying that unfair licensing models are the only way their books can be read online. And it holds back access to information in the digital age, harming all readers, everywhere.”

The Verge adds that the judge rejected “fair use” arguments which had previously protected a 2014 digital book preservation project by Google Books and HathiTrust:

Koeltl wrote that any “alleged benefits” from the Internet Archive’s library “cannot outweigh the market harm to the publishers,” declaring that “there is nothing transformative about [Internet Archive’s] copying and unauthorized lending,” and that copying these books doesn’t provide “criticism, commentary, or information about them.” He notes that the Google Books use was found “transformative” because it created a searchable database instead of simply publishing copies of books on the internet.

Their lending model works like this. They purchase a paper copy of the book, scan it to digital format, and then lend out the digital copy to one person at a time. Their argument is that this is no different than lending out the paper copy that they legally own to one person at a time. It is not as cut and dried as you make it out to be.
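The own-to-loan rule described above is simple enough to sketch. This is a hypothetical illustration of the controlled-digital-lending principle (at most one outstanding digital loan per physical copy owned), not the Internet Archive’s actual system; all names here are made up.

```python
# A toy model of controlled digital lending: the library owns N
# physical copies it has scanned, and at most N digital loans may be
# outstanding at any moment.

class ControlledLending:
    def __init__(self, owned_copies):
        self.owned_copies = owned_copies   # physical copies purchased and scanned
        self.checked_out = set()           # patrons currently holding a loan

    def borrow(self, patron):
        # Enforce the one-loan-per-owned-copy rule.
        if len(self.checked_out) >= self.owned_copies:
            return False                   # all copies out; patron must wait
        self.checked_out.add(patron)
        return True

    def return_copy(self, patron):
        self.checked_out.discard(patron)

book = ControlledLending(owned_copies=1)   # one paper copy, one digital loan
assert book.borrow("alice") is True        # loan granted
assert book.borrow("bob") is False         # waitlisted until Alice returns it
book.return_copy("alice")
assert book.borrow("bob") is True
```

The contrast with the publishers’ model is the point of the dispute: here the constraint is the number of copies the library owns, rather than a metered license that expires after a set number of loans or years.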

Source: Internet Archive Loses in Court. Judge Rules They Can’t Scan and Lend eBooks

Last Monday was the day of the oral arguments in the Big Publishers’ lawsuit against libraries in the form of the Internet Archive. As we noted mid-week, publishers won’t quit until libraries are dead. And they got one step closer to that goal on Friday, when Judge John Koeltl wasted no time in rejecting every single one of the Internet Archive’s arguments.

The fact that the ruling came out on the Friday after the Monday oral arguments suggests pretty strongly that Judge Koeltl had his mind made up pretty quickly and was ready to kill a library with little delay. Of course, as we noted just last Wednesday, whoever lost at this stage was going to appeal, and the really important stuff was absolutely going to happen at the 2nd Circuit appeals court. It’s just that now the Internet Archive, and a bunch of important copyright concepts, are already starting to be knocked down a few levels.

I’ve heard from multiple people claiming that of course the Internet Archive was going to lose, because it was scanning books (!!) and lending them out and how could that be legal? But, the answer, as we explained multiple times, is that every piece of this copyright puzzle had already been deemed legal.

And the Internet Archive didn’t just jump into this without any thought. Two of the most well known legal scholars regarding copyright and libraries, David Hansen and Kyle Courtney, had written a white paper detailing exactly how and why the approach the Internet Archive took with Controlled Digital Lending easily fit within the existing contours and precedents of copyright law.

But, as we and others have discussed for ages, in the copyright world, there’s a long history of courts ignoring what the law actually says and just coming up with some way to say something is infringement if it feels wrong to them. And that’s what happened here.

A key part of the ruling, as in a large percentage of cases that are about fair use, is looking at whether or not the use of the copy is “transformative.” Judge Koeltl is 100% positive it is not transformative.

There is nothing transformative about IA’s copying and unauthorized lending of the Works in Suit. IA does not reproduce the Works in Suit to provide criticism, commentary, or information about them. See 17 U.S.C. § 107. IA’s ebooks do not “add[] something new, with a further purpose or different character, altering the [originals] with new expression, meaning or message.” Campbell, 510 U.S. at 579. IA simply scans the Works in Suit to become ebooks and lends them to users of its Website for free. But a copyright holder holds the “exclusive[] right” to prepare, display, and distribute “derivative works based upon the copyrighted work.”

But… there’s a lot more to “transformative” use than simply adding something new or altering the meaning. In many cases, fair use is found in cases where you’re copying the exact same content, but for a different purpose, and the Internet Archive’s usage here seems pretty clearly transformative in that it’s changing the way the book can be consumed to make it easier for libraries to lend it out and patrons to read it. That is, the “transformation” is in the way the book can be lent, not the content of the book.

I know many people find this strange (and the judge did here as well) saying things like “but it’s the whole work.” Or “the use is the same because it’s still just reading the book.” But the Supreme Court already said, quite clearly, that such situations can be fair use, such as in the Sony v. Universal case that decided VCRs were legal, and that time shifting TV shows was clear fair use. In that ruling, they even cite Congress noting that “making a copy of a copyright work for… convenience” can be considered fair use.

Unfortunately, Judge Koeltl effectively chops away a huge part of the Sony ruling in insisting that this is somehow different.

But Sony is plainly inapposite. IA is not comparable to the parties in Sony — either to Sony, the alleged contributory copyright infringer, or to the home viewers who used the Betamax machine for the noncommercial, nonprofit activity of watching television programs at home. Unlike Sony, which only sold the machines, IA scans a massive number of copies of books and makes them available to patrons rather than purchasing ebook licenses from the Publishers. IA is also unlike the home viewers in Sony, who engaged in the “noncommercial, nonprofit activity” of viewing at a more convenient time television programs that they had the right to view for free at the time they were originally broadcast. 464 U.S. at 449. The home viewers were not accused of making their television programs available to the general public. Although IA has the right to lend print books it lawfully acquired, it does not have the right to scan those books and lend the digital copies en masse.

But note what the Judge did here. Rather than rely on the text of what the Supreme Court actually said in Sony, he insists that he won’t apply the rules of Sony because the parties are different. But if the basic concepts and actions are covered by the Sony ruling, it seems silly to ignore them here as the judge did.

And the differences highlighted by the court here have no bearing on the actual issues and the specifics of fair use and the law. I mean, first of all, the fact that Koeltl claims that the Internet Archive is not engaged in “noncommercial, nonprofit activity” is just weird. The Internet Archive is absolutely engaged in noncommercial, nonprofit activity.

The other distinctions are meaningless as well. No, IA is not building devices for people to buy, but in many ways IA’s position here should be seen as stronger than Sony’s because Sony actually was a commercial operation, and IA is literally acting as a library, increasing the convenience for its patrons, and doing so in a manner that is identical to lending out physical books. Sony created a machine, Betamax, that copied TV shows and allowed those who bought those machines to watch the show at a more convenient time. IA created a machine that copies books, and allows library patrons to access those books in a more convenient way.

Also, the Betamax (and VCR) were just as “available to the general public” as the Internet Archive is. The idea that they are substantially different is just… weird. And strikes me as pretty clearly wrong.

There’s another precedential oddity in the ruling. It relies pretty heavily on the somewhat terrible fair use ruling in the 2nd Circuit in the Warhol Foundation v. Goldsmith case. That case was so terrible that we (at the Copia Institute) weighed in with the Supreme Court to let them know how problematic it was, and the Supreme Court is still sitting on a decision in that case.

Which means the Supreme Court is soon to rule on it, and that could very much change or obliterate the case that Judge Koeltl leans on heavily for his ruling.

Here, Judge Koeltl repeatedly goes back to the Warhol well to make various arguments, especially around the question of the fourth fair use factor: the effect on the market. To me, this clearly weighs towards fair use, because it’s no different than a regular library. Libraries are allowed to buy (or receive donated) books and lend them out. That’s all the Open Library does here. So to argue there’s a negative impact on the market, the publishers rely on the fact that they’ve been able to twist and bend copyright law so much that they’ve created a new, extortionate market in ebook “licenses,” and then play all sorts of games to force people to buy the books rather than check them out of the library.

Judge Koeltl seems particularly worried about how much damage this could do to this artificially inflated market:

It is equally clear that if IA’s conduct “becomes widespread, it will adversely affect the potential market for the” Works in Suit. Andy Warhol Found., 11 F.4th at 48. IA could expand the Open Libraries project far beyond the current contributing partners, allowing new partners to contribute many more concurrent copies of the Works in Suit to increase the loan count. New organizations like IA also could emerge to perform similar functions, further diverting potential readers and libraries from accessing authorized library ebooks from the Publishers. This plainly risks expanded future displacement of the Publishers’ potential revenues.

But go back and read that paragraph again, and replace the key words to read that if libraries become widespread, it will adversely affect the potential market for buying books in bookstores… because libraries would be “diverting potential readers” from purchasing physical books, which “plainly risks expanded future displacement of the Publishers’ potential revenues.”

Again, the argument here is effectively that libraries themselves shouldn’t be allowed. And that seems like a problem?

Koeltl also falls into the ridiculous trap of saying that “you can’t compete with free” and that libraries will favor CDL-scanned books over licensed ones:

An accused infringer usurps an existing market “where the infringer’s target audience and the nature of the infringing content is the same as the original.” Cariou, 714 F.3d at 709; see also Andy Warhol Found., 11 F.4th at 50. That is the case here. For libraries that are entitled to partner with IA because they own print copies of books in IA’s collection, it is patently more desirable to offer IA’s bootleg ebooks than to pay for authorized ebook licenses. To state the obvious, “[i]t is difficult to compete with a product offered for free.” Sony BMG Music Ent. v. Tenenbaum, 672 F. Supp. 2d 217, 231 (D. Mass. 2009).

Except that’s literally wrong. The licensed ebooks have many features that the scanned ones don’t. And many people (myself included!) prefer to check out licensed ebooks from our local libraries rather than the CDL ones, because they’re more readable. My own library offers the ability to check out books from either one, and defaults to recommending the licensed ebooks, because they’re a better customer experience, which is how tons of products “compete with free” all the time.

I mean, not to be simplistic here, but the bottled water business in the US is an over $90 billion market for something most people can get for free (or effectively free) from the tap. That’s three times the size of the book publishing market. So, uh, maybe don’t say “it’s difficult to compete with free.” Other industries do it just fine. The publishers are just being lazy.

Besides, based on this interpretation of Warhol, basically anyone can destroy fair use by simply making up some new, crazy, ridiculously priced, highly restrictive license that covers the same space as the fair use alternative, and claim that the alternative destroys the “market” for this ridiculous license. That can’t be how fair use works.

Anyway, one hopes first that the Supreme Court rejects the terrible 2nd Circuit ruling in the Warhol Foundation case, and that this in turn forces Judge Koeltl to reconsider his argument. But given the pretzel he twisted himself into to ignore the Betamax case, it seems likely he’d still find against libraries like the Internet Archive.

Given that, it’s going to be important that the 2nd Circuit get this one right. As the Internet Archive’s Brewster Kahle said in a statement on the ruling:

“Libraries are more than the customer service departments for corporate database products. For democracy to thrive at global scale, libraries must be able to sustain their historic role in society—owning, preserving, and lending books.

This ruling is a blow for libraries, readers, and authors and we plan to appeal it.”

What happens next is going to be critical to the future of copyright online. Already people have pointed out how some of the verbiage in this ruling could have wide-reaching impact on questions about copyright in generative AI products or many other kinds of fair use cases.

One hopes that the panel on the 2nd Circuit doesn’t breezily dismiss these issues like Judge Koeltl did.

Source: Publishers Get One Step Closer To Killing Libraries

This money grab by publishers is disgusting. For more information, I have referenced related articles here.

Nike Blocks F1 Champ Max Verstappen’s ‘Max 1’ Clothing Brand because they can own words now

[…]

Nike’s argument is that Max 1 is too similar to its longtime “Air Max” shoe line, including “Max Force 1” products and other variations that use similar keywords. Verstappen had named his line of products after himself and his current racing number but encountered legal trouble soon after launch.

The Benelux Office for Intellectual Property—essentially The Netherlands’ trademark office—issued a report that Verstappen’s Max 1 brand carried a “likelihood of confusion” and posed a risk of consumers believing Max 1 products were associated with Nike.

Nike went as far as claiming that some designs in the Max 1 catalog were too similar to the apparel giant’s, while also alleging that the word MAX was prominently used and likened to Nike apparel. For these reasons, Verstappen was reportedly fined approximately $1,100 according to Express.

[…]

Source: Nike Blocks F1 Champ Max Verstappen’s ‘Max 1’ Clothing Brand

1. What about Pepsi Max?

2. What about the name Max being much, much older than Nike (so prior use)?

3. What about people going around using the word max, as in e.g. “that’s the max speed” or “that’s the max that will go in”?

4. What the actual fuck, trademark law.

“Click-to-cancel” rule would penalize companies that make you cancel by phone

Canceling a subscription should be just as easy as signing up for the service, the Federal Trade Commission said in a proposed “click-to-cancel” rule announced today. If approved, the plan “would put an end to companies requiring you to call customer service to cancel an account that you opened on their website,” FTC commissioners said.

[…]

The FTC said the proposed rule would be enforced with civil penalties and let the commission return money to harmed consumers.

“The proposal states that if consumers can sign up for subscriptions online, they should be able to cancel online, with the same number of steps. If consumers can open an account over the phone, they should be able to cancel it over the phone, without endless delays,” FTC Chair Lina Khan wrote.

[…]

Source: “Click-to-cancel” rule would penalize companies that make you cancel by phone | Ars Technica

We need this globally!

Dashcam App Is a Nazi Informer’s Wet Dream: Sends Video of You Speeding and Other Infractions Directly to Police

Speed cameras have been around for a long time and so have dash cams. The uniquely devious idea of combining the two into a traffic hall monitor’s dream device was not a potential reality until recently, though. According to the British Royal Automobile Club, such a combination is coming soon. The app, which is reportedly available in the U.K. as soon as May, will allow drivers to report each other directly to the police with video evidence for things like running red lights, failure to use a blinker, distracted driving, and yes, speeding.

Its founder Oleksiy Afonin recently held meetings with police to discuss how it would work. In a nutshell, video evidence of a crime could be uploaded as soon as the driver who captured it stopped their vehicle to do so safely. According to the RAC, the footage could then be “submitted to the police through an official video portal in less than a minute.” Police reportedly were open to the idea of using the videos as evidence in court.

The RAC questioned whether such an app could be distracting. It certainly opens up a whole new world of crime reporting. In some cities, individuals can report poorly or illegally parked cars to traffic police. Drivers getting into the habit of reporting each other for speeding might be a slippery slope, though. The government would be happy to collect the ticket revenue but the number of citations for alleged speeding could be off the charts with such a system. Anybody can download the app and report someone else, but the evidence would need to be reviewed.

The app, called dashcamUK, will only be available in the United Kingdom, as its name indicates. Thankfully, it doesn’t seem like there are any plans to bring it Stateside. Considering the British public is far more open than Americans to the use of CCTV cameras for recording crime, it will likely stay that way, for that reason among others.

Source: Strangers Can Send Video of You Speeding Directly to Police With Dashcam App

TSA Confirms Biometric Scanning Soon Won’t Be Optional Even For Domestic Travelers

[…]

In 2017, the DHS began quietly rolling out its facial recognition program, starting with international airports and aimed mainly at collecting/scanning people boarding international flights. Even in its infancy, the DHS was hinting this was never going to remain solely an international affair.

It made its domestic desires official shortly thereafter, with the TSA dropping its domestic surveillance “roadmap” which now included “expanding biometrics to additional domestic travelers.” Then the DHS and TSA ran silent for a bit, resurfacing in late 2022 with the news it was rolling out its facial recognition system at 16 domestic airports.

As of January, the DHS and TSA were still claiming this biometric ID verification system was strictly opt-in. A TSA rep interviewed by the Washington Post, however, hinted that opting out just meant subjecting yourself to the worst in TSA customer service. Given the options, more travelers would obviously prefer a less brusque/hands-y trip through security checkpoints, ensuring healthy participation in the TSA’s “optional” facial recognition program.

A little more than two months have passed, and the TSA is now informing domestic travelers there will soon be no way to opt out of its biometric program. (via Papers Please)

Speaking at an aviation security panel at South by Southwest, TSA Administrator David Pekoske made these comments:

“We’re upgrading our camera systems all the time, upgrading our lighting systems,” Pekoske said. “(We’re) upgrading our algorithms, so that we are using the very most advanced algorithms and technology we possibly can.”

He said passengers can also choose to opt out of certain screening processes if they are uncomfortable, for now. Eventually, biometrics won’t be optional, he said.

[…]

Pekoske buries the problematic aspects of biometric harvesting in exchange for domestic travel “privileges” by claiming this is all about making things better for passengers.

“It’s critically important that this system has as little friction as it possibly can, while we provide for safety and security,” Pekoske said.

Yes, you’ll get through screening a little faster. Unless the AI is wrong, in which case you’ll be dealing with a whole bunch of new problems most agents likely won’t have the expertise to handle.

[…]

More travelers. Fewer agents. And a whole bunch of screens to interact with. That’s the plan for the nation’s airports and everyone who passes through them.

Source: TSA Confirms Biometric Scanning Soon Won’t Be Optional Even For Domestic Travelers | Techdirt

And way more data for hackers to get their hands on, and for the government and anyone who buys the data to use for 1984-type purposes.

Big Four publishers move to crush the Internet Archive

On Monday four of the largest book publishers asked a New York court to grant summary judgment in a copyright lawsuit seeking to shut down the Internet Archive’s online library and hold the non-profit organization liable for damages.

The lawsuit was filed back on June 1, 2020, by the Hachette Book Group, HarperCollins Publishers, John Wiley & Sons and Penguin Random House. In the complaint [PDF], the publishers ask for an injunction that orders “all unlawful copies be destroyed” in the online archive.

The central question in the case, as summarized during oral arguments by Judge John Koeltl, is: does a library have the right to make a copy of a book that it otherwise owns and then lend the ebook it has made without a license from the publisher to patrons of the library?

Publishers object to the Internet Archive’s efforts to scan printed books and make digital copies available online to readers without buying a license from the publisher. The Internet Archive has filed its own motion for summary judgment to have the case dismissed.

The Internet Archive (IA) began its book scanning project back in 2006 and by 2011 started lending out digital copies. It did so, however, in a way that maintained the limitation imposed by physical book ownership.

This activity is fundamentally the same as traditional library lending and poses no new harm to authors or the publishing industry

Its Controlled Digital Lending (CDL) initiative allows only one person to check out the digital copy of each scanned physical book. The idea is that the purchased physical book is being lent in digital form but no extra copies are being lent. IA presently offers 1.3 million books to the public in digital form.
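The one-owned-copy/one-loan invariant at the heart of CDL is simple enough to state in code. This is a toy sketch of the rule as described above, not IA’s actual implementation:

```python
class CDLLibrary:
    """Toy model of Controlled Digital Lending's one-copy/one-loan rule."""

    def __init__(self):
        self.owned = {}    # title -> number of physical copies owned
        self.loaned = {}   # title -> number of digital copies currently out

    def add_book(self, title, copies=1):
        self.owned[title] = self.owned.get(title, 0) + copies

    def checkout(self, title):
        # A digital loan is allowed only while loans < owned copies,
        # mirroring the limitation imposed by physical ownership.
        if self.loaned.get(title, 0) >= self.owned.get(title, 0):
            return False  # every owned copy is already lent out; join the waitlist
        self.loaned[title] = self.loaned.get(title, 0) + 1
        return True

    def checkin(self, title):
        self.loaned[title] = max(0, self.loaned.get(title, 0) - 1)
```

However many patrons want the ebook, the number of simultaneous loans never exceeds the number of physical copies the library bought, which is exactly IA’s point about this being equivalent to traditional lending.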

“This activity is fundamentally the same as traditional library lending and poses no new harm to authors or the publishing industry,” IA argued in answer [PDF] to the publishers’ complaint.

“Libraries have collectively paid publishers billions of dollars for the books in their print collections and are investing enormous resources in digitization in order to preserve those texts. CDL helps them take the next step by making sure the public can make full use of the books that libraries have bought.”

The publishers, however, want libraries to pay for ebooks in addition to the physical books they have purchased already. And they claim they have lost millions in revenue, though IA insists there’s no evidence of the presumptive losses.

“Brewster Kahle, Internet Archive’s founder and funder, is on a mission to make all knowledge free. And his goal is to circulate ebooks to billions of people by transforming all library collections from analog to digital,” said Elizabeth McNamara, attorney for the publishers, during Monday’s hearing.

“But IA does not want to pay authors or publishers to realize this grand scheme and they argue it can be excused from paying the customary fees because what they’re doing is in the public interest.”

Kahle in a statement denounced the publishers’ demands. “Here’s what’s at stake in this case: hundreds of libraries contributed millions of books to the Internet Archive for preservation in addition to those books we have purchased,” he said.

“Thousands of donors provided the funds to digitize them.

“The publishers are now demanding that those millions of digitized books, not only be made inaccessible, but be destroyed. This is horrendous. Let me say it again – the publishers are demanding that millions of digitized books be destroyed.

“And if they succeed in destroying our books or even making many of them inaccessible, there will be a chilling effect on the hundreds of other libraries that lend digitized books as we do.”

[…]

Source: Big Four publishers move to crush the Internet Archive • The Register

AI-generated art may be protected, says US Copyright Office – requires meaningful creative input from a human

[…]

AI software capable of automatically generating images or text from an input prompt or instruction has made it easier for people to churn out content. Correspondingly, the USCO has received an increasing number of applications to register copyright protections for material, especially artwork, created using such tools.

US law states that intellectual property can be copyrighted only if it was the product of human creativity, and the USCO only acknowledges work authored by humans at present. Machines and generative AI algorithms, therefore, cannot be authors, and their outputs are not copyrightable.

Digital art, poems, and books generated using tools like DALL-E, Stable Diffusion, Midjourney, ChatGPT, or even the newly released GPT-4 will not be protected by copyright if they were created by humans using only a text description or prompt, USCO director Shira Perlmutter warned.

“If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it,” she wrote in a document outlining copyright guidelines.

“For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the ‘traditional elements of authorship’ are determined and executed by the technology – not the human user.

“Instead, these prompts function more like instructions to a commissioned artist – they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output.”

The USCO will consider content created using AI if a human author has crafted something beyond the machine’s direct output. A digital artwork that was formed from a prompt, and then edited further using Photoshop, for example, is more likely to be accepted by the office. The initial image created using AI would not be copyrightable, but the final product produced by the artist might be.

Thus it would appear the USCO is simply saying: yes, if you use an AI-powered application to help create something, you have a reasonable chance at applying for copyright, just as if you used non-AI software. If it’s purely machine-made from a prompt, you need to put some more human effort into it.

In a recent case, officials registered a copyright certificate for a graphic novel containing images created using Midjourney. The overall composition and words were protected by copyright since they were selected and arranged by a human, but the individual images themselves were not.

“In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of ‘mechanical reproduction’ or instead of an author’s ‘own original mental conception, to which [the author] gave visible form’. The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work. This is necessarily a case-by-case inquiry,” the USCO declared.

Perlmutter urged people applying for copyright protection for any material generated using AI to state clearly how the software was used to create the content, and show which parts of the work were created by humans. If they fail to disclose this information accurately, or try to hide the fact it was generated by AI, USCO will cancel their certificate of registration and their work may not be protected by copyright law.

Source: AI-generated art may be protected, says US Copyright Office • The Register

So very slowly but surely the copyrighters are starting to understand what this newfangled AI technology is all about.

So what happens when an AI edits an AI-generated artwork?

SCOPE Europe becomes the accredited monitoring body for a Dutch national data protection code of conduct

[…] SCOPE Europe is now accredited by the Dutch Data Protection Authority as the monitoring body of the Data Pro Code. On this occasion, SCOPE Europe celebrates its success in obtaining its second accreditation and looks forward to continuing its work on fostering trust in the digital economy.

When we were approached by NLdigital, the creators of the Data Pro Code, we knew that taking on the monitoring of a national code of conduct would be an exciting endeavor. As the first-ever accredited monitoring body for a transnational GDPR code of conduct, SCOPE Europe has built unique expertise in the field and is proud to apply it further in the context of another co-regulatory initiative.

The Code puts forward an accessible compliance framework for companies of all sizes, including micro, small and medium enterprises in the Netherlands. With the approval and now the accreditation of its monitoring body, the Data Pro Code will enable data processors to demonstrate GDPR compliance and boost transparency within the digital industry.

Source: PRESS RELEASE: SCOPE Europe becomes the accredited monitoring body for a Dutch national code of conduct: SCOPE Europe bvba/sprl

Anker Eufy security cam ‘stored unique ID’ of everyone filmed in the cloud for other cameras to identify – and for anyone to watch

A lawsuit filed against eufy security cam maker Anker Tech claims the biz assigns “unique identifiers” to the faces of any person who walks in front of its devices – and then stores that data in the cloud, “essentially logging the locations of unsuspecting individuals” when they stroll past.

[…]

All three suits allege Anker falsely represented that its security cameras stored all data locally and did not upload that data to the cloud.

Moore went public with his claims in November last year, alleging video and audio captured by Anker’s eufy security cams could be streamed and watched by any stranger using VLC media player, […]

In a YouTube video, the complaint details, Moore allegedly showed how the “supposedly ‘private,’ ‘stored locally’, ‘transmitted only to you’ doorbell is streaming to the cloud – without cloud storage enabled.”

He claimed the devices were uploading video thumbnails and facial recognition data to Anker’s cloud server, despite his never opting into Anker’s cloud services and said he’d found a separate camera tied to a different account could identify his face with the same unique ID.

The security researcher alleged at the time this showed that Anker was not only storing facial-recog data in the cloud, but also “sharing that back-end information between accounts,” lawyers for the two other, near-identical lawsuits claim.

[…]

According to the complaint [PDF], eufy’s security cameras are marketed as “private” and as “local storage only” as a direct alternative to Anker’s competitors that require the use of cloud storage.

Desai’s complaint goes on to claim:

Not only does Anker not keep consumers’ information private, it was further revealed that Anker was uploading facial recognition data and biometrics to its Amazon Web Services cloud without encryption.

In fact, Anker has been storing its customers’ data alongside a specific username and other identifiable information on its AWS cloud servers even when its “eufy” app reflects the data has been deleted. …. Further, even when using a different camera, different username, and even a different HomeBase to “store” the footage locally, Anker is still tagging and linking a user’s facial ID to their picture across its camera platform. Meaning, once recorded on one eufy Security Camera, those same individuals are recognized via their biometrics on other eufy Security Cameras.
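The cross-account recognition alleged in the complaint implies a single face index shared by the whole back end: one table maps face data to IDs for every camera, so the same person gets the same ID regardless of whose account filmed them. A hypothetical sketch of that design flaw (the class, hashing scheme, and ID format are invented for illustration, not Anker’s actual system):

```python
import hashlib

class GlobalFaceIndex:
    """Hypothetical model of cross-camera face linking: one shared
    back-end table serves every camera, so the same face always maps
    to the same ID no matter which account captured it."""

    def __init__(self):
        self._ids = {}  # embedding hash -> stable person ID

    def identify(self, face_embedding: bytes) -> str:
        # Hash the embedding to a stable key; the first sighting mints
        # an ID, and later sightings from ANY camera return that same ID.
        key = hashlib.sha256(face_embedding).hexdigest()
        return self._ids.setdefault(key, f"person-{len(self._ids)}")
```

A per-account index (keying on account ID as well as the face) would prevent exactly the linking the lawsuits describe.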

In an unrelated incident in 2021, a “software bug” in some of the brand’s 1080p Wi-Fi-connected Eufycams cams sent feeds from some users’ homes to other Eufycam customers, some of whom were in other countries at the time.

[…]

Source: Eufy security cam ‘stored unique ID’ of everyone filmed • The Register

Telehealth startup Cerebral shared millions of patients’ data with advertisers since 2019

Cerebral has revealed it shared the private health information, including mental health assessments, of more than 3.1 million patients in the United States with advertisers and social media giants like Facebook, Google and TikTok.

The telehealth startup, which exploded in popularity during the COVID-19 pandemic after rolling lockdowns and a surge in online-only virtual health services, disclosed the security lapse [This is no security lapse! This is blatant greed served by peddling people’s personal information!] in a filing with the federal government, saying it shared the personal and health information of patients who used the app to search for therapy or other mental health care services.

Cerebral said that it collected and shared names, phone numbers, email addresses, dates of birth, IP addresses and other demographics, as well as data collected from Cerebral’s online mental health self-assessment, which may have also included the services that the patient selected, assessment responses and other associated health information.

The full disclosure follows:

If an individual created a Cerebral account, the information disclosed may have included name, phone number, email address, date of birth, IP address, Cerebral client ID number, and other demographic or information. If, in addition to creating a Cerebral account, an individual also completed any portion of Cerebral’s online mental health self-assessment, the information disclosed may also have included the service the individual selected, assessment responses, and certain associated health information.

If, in addition to creating a Cerebral account and completing Cerebral’s online mental health self-assessment, an individual also purchased a subscription plan from Cerebral, the information disclosed may also have included subscription plan type, appointment dates and other booking information, treatment, and other clinical information, health insurance/pharmacy benefit information (for example, plan name and group/member numbers), and insurance co-pay amount.

Cerebral was sharing patients’ data with tech giants in real-time by way of trackers and other data-collecting code that the startup embedded within its apps. Tech companies and advertisers, like Google, Facebook and TikTok, allow developers to include snippets of their custom-built code, which allows the developers to share information about their app users’ activity with the tech giants, often under the guise of analytics but also for advertising.

But users often have no idea that they are opting in to this tracking simply by accepting the app’s terms of use and privacy policies, which many people don’t read.
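Mechanically, these embedded trackers boil down to fire-and-forget HTTP requests that bundle an event name with user attributes and ship them to the vendor’s server. A hypothetical sketch (the endpoint and field names are invented for illustration; real SDKs from Google, Facebook, and TikTok each have their own formats):

```python
from urllib.parse import urlencode

def build_tracking_request(endpoint: str, event: str, user_props: dict) -> str:
    # An embedded tracker typically serializes the event plus user
    # attributes into a query string aimed at the vendor's collector.
    query = urlencode({"event": event, **user_props})
    return f"{endpoint}?{query}"

url = build_tracking_request(
    "https://analytics.example.com/collect",  # hypothetical vendor endpoint
    "assessment_completed",
    {"email": "patient@example.com", "plan": "therapy"},
)
print(url)
```

Once a request like this fires from inside a health app, the vendor on the receiving end has the patient data, regardless of what the app’s privacy policy claims.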

Cerebral said in its notice to customers — buried at the bottom of its website — that the data collection and sharing has been going on since October 2019, when the startup was founded. The startup said it has removed the tracking code from its apps. While not mentioned, the tech giants are under no obligation to delete the data that Cerebral shared with them.

Because of how Cerebral handles confidential patient data, it’s covered under the U.S. health privacy law known as HIPAA. According to a list of health-related security lapses under investigation by the U.S. Department of Health and Human Services, which oversees and enforces HIPAA, Cerebral’s data lapse is the second-largest breach of health data in 2023.

News of Cerebral’s years-long data lapse comes just weeks after the U.S. Federal Trade Commission slapped GoodRx with a $1.5 million fine and ordered it to stop sharing patients’ health data with advertisers, and BetterHelp was ordered to pay customers $8.5 million for mishandling users’ data.

If you were wondering why startups today should terrify you, Cerebral is just the latest example.

Source: Telehealth startup Cerebral shared millions of patients’ data with advertisers | TechCrunch

 

Holy shit: German Courts Say DNS Service (Quad9) Is Implicated In Any Copyright Infringement At The Domains It Resolves

Back in September 2021 Techdirt covered an outrageous legal attack by Sony Music on Quad9, a free, recursive, anycast DNS platform. Quad9 is part of the Internet’s plumbing: it converts domain names to numerical IP addresses. It is operated by the Quad9 Foundation, a Swiss public-benefit, not-for-profit organization. Sony Music says that Quad9 is implicated in alleged copyright infringement on the sites it resolves. That’s clearly ridiculous, but unfortunately the Regional Court of Hamburg agreed with Sony Music’s argument, and issued an interim injunction against Quad9. The German Society for Civil Rights (Gesellschaft für Freiheitsrechte e.V. or “GFF”) summarizes the court’s thinking:

In its interim injunction the Regional Court of Hamburg asserts a claim against Quad9 based on the principles of the German legal concept of “Stoererhaftung” (interferer liability), on the grounds that Quad9 makes a contribution to a copyright infringement that gives rise to liability, in that Quad9 resolves the domain name of website A into the associated IP address. The German interferer liability has been criticized for years because of its excessive application to Internet cases. German lawmakers explicitly abolished interferer liability for access providers with the 2017 amendment to the German Telemedia Act (TMG), primarily to protect WIFI operators from being held liable for costs as interferers.

As that indicates, this is a case of a law that is a poor fit for modern technology. Just as the liability no longer applies to WIFI operators, who are simply providing Internet access, so the German law should also not catch DNS resolvers like Quad9. The GFF post notes that Quad9 has appealed to the Hamburg Higher Regional Court against the lower court’s decision. Unfortunately, another regional court has just handed down a similar ruling against the company, reported here by Heise Online (translation by DeepL):

the Leipzig Regional Court has sentenced the Zurich-based DNS service Quad9. On pain of an administrative fine of up to 250,000 euros or up to 2 years’ imprisonment, the small resolver operator was prohibited from translating two related domains into the corresponding IP addresses. Via these domains, users can find the tracks of a Sony music album offered via Shareplace.org.

The GFF has already announced that it will be appealing along with Quad9 to the Dresden Higher Regional Court against this new ruling. It says that the Leipzig Regional Court has made “a glaring error of judgment”, and explains:

If one follows this reasoning, the copyright liability of completely neutral infrastructure services like Quad9 would be even stricter than that of social networks, which fall under the infamous Article 17 of the EU Copyright Directive,” criticizes Felix Reda, head of the Control © project of the Society for Civil Rights. “The [EU] Digital Services Act makes it unequivocally clear that the liability rules for Internet access providers apply to DNS services. We are confident that this misinterpretation of European and German legal principles will be overturned by the Court of Appeals.”

Let’s hope so. If it isn’t, we can expect companies providing the Internet’s basic infrastructure in the EU to be bombarded with demands from the copyright industry and others for domains to be excluded from DNS resolution. The likely result is that perfectly legal sites and their holdings will be ghosted by DNS companies, which will prefer to err on the side of caution rather than risk becoming the next Quad9.

Source: Another German Court Says The DNS Service Quad9 Is Implicated In Any Copyright Infringement At The Domains It Resolves | Techdirt

There are some incredibly stupid judges and lawyers out there
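To underline how neutral a DNS resolver’s role is: the entire “contribution” Quad9 makes is a name-to-address lookup, the same thing any program does with one library call. A minimal sketch using the system resolver (Quad9 itself runs public resolvers at 9.9.9.9; this is just the generic lookup, not Quad9’s software):

```python
import socket

# A DNS resolver's whole job: turn a hostname into an IP address.
# Here we resolve localhost, which works without network access.
ip = socket.gethostbyname("localhost")
print(ip)
```

Holding this lookup liable for what is hosted at the resulting address is like holding a phone book liable for what is said on the call.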

YouTube Chills the Darned Hell Out On Its Cursing Policy, but you still can’t fucking say fuck

Google is finally rolling back its unpopular decree against any kind of profanity in videos, which had made it harder for creators used to offering colorful sailor’s speech to monetize content on behalf of its beloved ad partners. The only thing is, Google still seems to think the “f-word” is excessively harsh language, so sorry, Samuel L. Jackson: those motha-[redacted] snakes are still liable to earn fewer ad dollars on this motha-[redacted] plane.

On Tuesday, Google updated its support page to offer an olive branch to crass creators upset that their potty mouths were getting their videos demonetized. The company clarified that use of “moderate” profanity at any time in a video is now eligible for ad revenue.

However, the company seemed to be antagonistic to “stronger profanity” like “the f-word,” AKA “fuck.” You can’t say “fuck” in the first seven seconds or repeatedly throughout a video or else you will receive “limited ads.” Putting words like “fuck” into a title or thumbnail will result in no ad content.

What is allowed are words like “hell” or “damn” in a title or thumbnail. Words like “bitch,” “douchebag,” “asshole,” and “shit” are considered “moderate” profanity, so they’re fine to use frequently in a video. But “fuck,” dear god, will hurt advertisers’ poor virgin ears. YouTube has been extremely sensitive to what its advertisers are saying. For instance, the platform came close to pulling big money-making ads over creepypasta content during the “Elsagate” scandal.

The changes also impacted videos which used music tracks in the background. YouTube is now saying any use of “moderate” or “strong” profanity in background music is eligible for full ad revenue.
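The new policy reduces to a small decision table, which makes its arbitrariness easy to see. A hypothetical sketch (the “repeatedly” threshold is invented for illustration; YouTube doesn’t publish an exact number):

```python
STRONG = {"fuck"}
MODERATE = {"hell", "damn", "bitch", "douchebag", "asshole", "shit"}

def monetization(title_words, spoken):
    """spoken is a list of (word, seconds_into_video) pairs."""
    # Strong profanity in the title or thumbnail: no ads at all.
    if STRONG & set(title_words):
        return "no ads"
    strong_uses = [t for w, t in spoken if w in STRONG]
    # F-word in the first 7 seconds, or "repeatedly" (threshold assumed): limited ads.
    if any(t < 7 for t in strong_uses) or len(strong_uses) > 3:
        return "limited ads"
    # Moderate profanity anywhere, any amount: full ad revenue.
    return "full ads"
```

So a single late “fuck” is fine, but the same word three seconds earlier costs you ad revenue, which is exactly the kind of line-drawing the headline is mocking.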

Back in November, YouTube changed its creator monetization policy, calling it guidelines for “advertiser-friendly content.” The company decreed that any video with a thumbnail or title containing obscene language or “adult material” wouldn’t receive any ad revenue. YouTube also said it would demonetize violent content such as dead bodies without context, or virtual violence directed at a “real, named person.” Fair enough, but then YouTube said it would demonetize any video which used profanity “in the first eight seconds of the video.”

[…]

Source: YouTube Chills the Hell Out On Its Cursing Policy

What the shitting fuck, Google. Americans. I thought it was the land of the free, once?

When Given The Choice, Most Authors Reject Excessively Long Copyright Terms

Recently, Walled Culture mentioned the problem of orphan works. These are creations, typically books, that are still covered by copyright, but unavailable because the original publisher or distributor has gone out of business, or simply isn’t interested in keeping them in circulation. The problem is that without any obvious point of contact, it’s not possible to ask permission to re-publish or re-use it in some way.

It turns out that there is another serious issue, related to that of orphan works. It has been revealed by the New York Public Library, drawing on work carried out as a collaboration between the Internet Archive and the US Copyright Office. According to a report on the Vice Web site:

the New York Public Library (NYPL) has been reviewing the U.S. Copyright Office’s official registration and renewals records for creative works whose copyrights haven’t been renewed, and have thus been overlooked as part of the public domain.

The books in question were published between 1923 and 1964, before changes to U.S. copyright law removed the requirement for rights holders to renew their copyrights. According to Greg Cram, associate general counsel and director of information policy at NYPL, an initial overview of books published in that period shows that around 65 to 75 percent of rights holders opted not to renew their copyrights.

Since most people today will naturally assume that a book published between 1923 and 1964 is still in copyright, it is unlikely anyone has ever tried to re-publish or re-use material from this period. But this new research shows that the majority of these works are, in fact, already in the public domain, and therefore freely available for anyone to use as they wish.

That’s a good demonstration of how the dead hand of copyright stifles fresh creativity from today’s writers, artists, musicians and film-makers. They might have drawn on all these works as a stimulus for their own creativity, but held back because they have been brainwashed by the copyright industry into thinking that everything is in copyright for inordinate lengths of time. As a result, huge numbers of books that are freely available according to the law remain locked up with a kind of phantom copyright that exists only in people’s minds, infected as they are with copyright maximalist propaganda.

The other important lesson to be drawn from this work by the NYPL is that, given the choice, the majority of authors didn’t bother renewing their copyrights, presumably because they didn’t feel they needed to. That makes today’s automatic imposition of exaggeratedly long copyright terms not just unnecessary but also harmful, in terms of the potential new works, based on public domain materials, that have been lost as a result of this continuing over-protection.

Source: When Given The Choice, Most Authors Reject Excessively Long Copyright Terms | Techdirt

Texas Bill Would Make ISPs Censor Any Abortion Information

Last week, Texas introduced a bill that would make it illegal for internet service providers to let users access information about how to get abortion pills. The bill, called the Women and Child Safety Act, would also criminalize creating, editing, or hosting a website that helps people seek abortions.

If the bill passes, internet service providers (ISPs) will be forced to block websites “operated by or on behalf of an abortion provider or abortion fund.” ISPs would also have to filter any website that helps people who “provide or aid or abet elective abortions” in almost any way, including raising money.

[…]

Five years ago, a bill like this would have violated federal law. Remember Net Neutrality? Net Neutrality forced ISPs to act like phone companies, treating all traffic the same with almost no ability to limit or filter the content traveling on their networks. But Net Neutrality was repealed in 2018, essentially reclassifying internet service as a luxury with little regulatory oversight and upending consumers’ right to free access to the web.

[…]

Source: Texas Bill Would Bar ISPs From Hosting Abortion Websites, Info

JPMorgan Chase ‘requires workers give 6 months notice’

A veteran JPMorgan Chase banker fumed over the financial giant’s policy requiring certain staffers to give six months’ notice before being allowed to leave for another job.

The Wall Street worker, who claims to earn around $400,000 annually in total compensation after accumulating 15 years of experience, griped that the lengthy notice period likely means a lucrative job offer from another company will be rescinded.

[…]

“When I looked into the resignation process, I see that my notice period is 6 bloody months!!”

“I was in disbelief, I checked my offer letter and ‘Whoops there it is,’” the post continued.

[…]

A spokesperson for JPMorgan Chase told The Post: “In line with other e-trading organizations, some of our algo trading technology employees have an extended notice period. This affects a very small portion – less than 100 – of our 57,000 technologists.”

[…]

Workers at its India corporate offices said last year that the Wall Street giant was raising the notice period for employees at vice president level and below from 30 days to 60 days, according to eFinancialCareer.com.

Meanwhile, bankers at the executive director level saw their notice period bumped up to 90 days.

Source: JPMorgan Chase ‘requires workers give 6 months notice’

On the other side, I’m betting that JPMorgan Chase can still fire you with zero days’ notice.

Guy Embezzles Cool $9 Million From Poop-to-Energy Ponzi Scheme

Stop me if you’ve heard this one before: A guy embezzled nearly $9 million by convincing investors he was turning cow poop into green energy—and then not building any of the machines at all.

On Monday, 66-year-old Raymond Brewer of Porterville, California, pleaded guilty to charges that he’d defrauded investors. Court records show that Brewer stole $8,750,000 from investors between 2014 and 2019 with promises to build anaerobic digesters (machines that convert cow manure into methane gas, which can then be sold as energy) on dairies in various counties in California and Idaho. But instead of actually building any of those digesters, Brewer spent the money on stuff like a new house and new Dodge Ram pickup trucks.

According to the U.S. Attorney’s Office for the Eastern District of California, Brewer was a prolific scammer. He took potential investors on tours of dairies where he said he was going to build the digesters, and sent forged documents purporting to show signed agreements with those dairies. When investors asked for updates on the construction or operation of the digesters, Brewer sent over “fake construction schedules, fake invoices for project-related costs, fake power generation reports, fake RECs, and fake pictures,” as well as forged contracts with banks and fake international investors. He must have been great at Photoshop!

Part of the appeal of the scam lay in what are known as Renewable Energy Credits (RECs), which are credits issued by the federal government signifying that renewable energy has been produced at a site; those credits can then be sold to companies looking to offset their fossil fuel emissions. Brewer told his investors that he’d get them 66% of all the profits from those credits.

Five years is a hell of a long time to promise folks money and not deliver—which is why the U.S. Attorney’s office has described Brewer’s setup as a “Ponzi” scheme: he began repaying old investors with money he was scamming off of new ones. When investors began to get suspicious, the U.S. Attorney’s office said, Brewer moved to Montana and assumed a new identity. He was finally arrested in 2020.

Some profiles for Brewer’s company, CH4 Energy, are still active on business directories like PitchBook and food waste resource site ReFED. The company was even the subject of a profile on its “work” in local paper Visalia Times-Delta in 2016 and was part of a story in the LA Times in 2013 on dairy farmers and renewable energy.

In the LA Times story, Brewer is quoted as talking about the reluctance of dairy farmers to install the digesters.

“Brewer said he tested his system in other states, such as Wisconsin and Idaho, before shopping it around with California dairy farmers, whom he said were very skeptical,” the LA Times wrote. “He eventually signed his first contract with [a farmer]—‘Talk about apprehensive,’ Brewer recalled. ‘That was a little bit of an understatement.’”

Our buddy Ray wasn’t totally bullshitting—pardon the pun—in peddling his ideas. Anaerobic digesters are real machines that do convert animal waste into energy, and millions of dollars in federal and state money have been spent on the technology. However, questions remain around just how “green” this energy is and whether it’s worth the investment.

Brewer will be sentenced in June and faces up to 20 years in prison.

Source: Guy Embezzles Cool $9 Million From Poop-to-Energy Ponzi Scheme

You Don’t Own What You Buy: Roald Dahl eBooks Censored Remotely After You Bought Them

“Owners of Roald Dahl ebooks are having their libraries automatically updated with the new censored versions containing hundreds of changes to language related to weight, mental health, violence, gender and race,” reports the British newspaper the Times. Readers who bought electronic versions of the writer’s books, such as Matilda and Charlie and the Chocolate Factory, before the controversial updates have discovered their copies have now been changed.

Puffin Books, the company which publishes Dahl’s novels, updated the electronic editions on devices such as the Amazon Kindle; Augustus Gloop is no longer described as fat, nor Mrs Twit as fearfully ugly. Dahl’s biographer Matthew Dennison last night accused the publisher of “strong-arming readers into accepting a new orthodoxy in which Dahl himself has played no part.”
Meanwhile…

  • Children’s book author Frank Cottrell-Boyce admits in the Guardian that “as a child I disliked Dahl intensely. I felt that his snobbery was directed at people like me and that his addiction to revenge was not good. But that was fine — I just moved along.”

But Cottrell-Boyce’s larger point is that “The key to reading for pleasure is having a choice about what you read” — and that childhood readers face greater threats. “The outgoing children’s laureate Cressida Cowell has spent the last few years fighting for her Life-changing Libraries campaign. It’s making a huge difference but it would have been a lot easier if our media showed a fraction of the interest they showed in Roald Dahl’s vocabulary in our children.”

Source: Roald Dahl eBooks Reportedly Censored Remotely – Slashdot

Signal says it will shut down in UK over Online Safety Bill, which wants to install spyware on all your devices

[…]

The Online Safety Bill contemplates bypassing encryption using device-side scanning to protect children from harmful material, and coincidentally breaking the security of end-to-end encryption at the same time. It’s currently being considered in Parliament and has been the subject of controversy for months.

[ something something saving children – that’s always a bad sign when they trot that one out ]

The legislation contains what critics have called “a spy clause.” [PDF] It requires companies to remove child sexual exploitation and abuse (CSEA) material or terrorist content from online platforms “whether communicated publicly or privately.” As applied to encrypted messaging, that means either encryption must be removed to allow content scanning or scanning must occur prior to encryption.

Signal draws the line

Such schemes have been condemned by technical experts and Signal is similarly unenthusiastic.

“Signal is a nonprofit whose sole mission is to provide a truly private means of digital communication to anyone, anywhere in the world,” said Meredith Whittaker, president of the Signal Foundation, in a statement provided to The Register.

“Many millions of people globally rely on us to provide a safe and secure messaging service to conduct journalism, express dissent, voice intimate or vulnerable thoughts, and otherwise speak to those they want to be heard by without surveillance from tech corporations and governments.”

“We have never, and will never, break our commitment to the people who use and trust Signal. And this means that we would absolutely choose to cease operating in a given region if the alternative meant undermining our privacy commitments to those who rely on us.”

Asked whether she was concerned that Signal could be banned under the Online Safety rules, Whittaker told The Register, “We were responding to a hypothetical, and we’re not going to speculate on probabilities. The language in the bill as it stands is deeply troubling, particularly the mandate for proactive surveillance of all images and texts. If we were given a choice between kneecapping our privacy guarantees by implementing such mass surveillance, or ceasing operations in the UK, we would cease operations.”

[…]

“If Signal withdraws its services from the UK, it will particularly harm journalists, campaigners and activists who rely on end-to-end encryption to communicate safely.”

[…]

Source: Signal says it will shut down in UK over Online Safety Bill

Google’s Play Store Privacy Labels Are a ‘Total Failure:’ Study

[…]

“There are two main problems here,” Mozilla’s Caltrider said. “The first problem is Google only requires the information in labels to be self-reported. So, fingers crossed, because it’s the honor system, and it turns out that most labels seem to be misleading.”

Google promises to make apps fix problems it finds in the labels, and threatens to ban apps that don’t come into compliance. But the company said only that it’s vigilant about enforcement: it has never provided details about how it polices apps, and it didn’t respond to a question about any enforcement actions it has taken in the past.

[…]

Of course, Google could just read the privacy policies where apps spell out these practices, like Mozilla did, but there’s a bigger issue at play. These apps may not even be breaking Google’s privacy label rules, because those rules are so relaxed that “they let companies lie,” Caltrider said.

“That’s the second problem. Google’s own rules for what data practices you have to disclose are a joke,” Caltrider said. “The guidelines for the labels make them useless.”

If you go looking at Google’s rules for the data safety labels, which are buried deep in a cascading series of help menus, you’ll learn that there is a long list of things that you don’t have to tell your users about. In other words, you can say you don’t collect data or share it with third parties, while you do in fact collect data and share it with third parties.

For example, apps don’t have to disclose data sharing if they have “consent” to share the data from users, or if they’re sharing the data with “service providers,” or if the data is “anonymized” (which is nonsense), or if the data is being shared for “specific legal purposes.” There are similar exceptions for what counts as data collection. Those loopholes are so big you could fill up a truck with data and drive it right on through.

[…]

Source: Google’s Play Store Privacy Labels Are a ‘Total Failure:’ Study

Which goes to show, again, that walled garden app stores really are no better than just downloading stuff from the internet, unless you’re the owner of the walled garden, collecting a 30% cut of revenue for doing basically not much.

AI-created images lose U.S. copyrights in test for new technology

Images in a graphic novel that were created using the artificial-intelligence system Midjourney should not have been granted copyright protection, the U.S. Copyright Office said in a letter seen by Reuters.

“Zarya of the Dawn” author Kris Kashtanova is entitled to a copyright for the parts of the book Kashtanova wrote and arranged, but not for the images produced by Midjourney, the office said in its letter, dated Tuesday.

The decision is one of the first by a U.S. court or agency on the scope of copyright protection for works created with AI, and comes amid the meteoric rise of generative AI software like Midjourney, Dall-E and ChatGPT.

The Copyright Office said in its letter that it would reissue its registration for “Zarya of the Dawn” to omit images that “are not the product of human authorship” and therefore cannot be copyrighted.

The Copyright Office had no comment on the decision.

Kashtanova on Wednesday called it “great news” that the office allowed copyright protection for the novel’s story and the way the images were arranged, which Kashtanova said “covers a lot of uses for the people in the AI art community.”

Kashtanova said they were considering how best to press ahead with the argument that the images themselves were a “direct expression of my creativity and therefore copyrightable.”

Midjourney general counsel Max Sills said the decision was “a great victory for Kris, Midjourney, and artists,” and that the Copyright Office is “clearly saying that if an artist exerts creative control over an image generating tool like Midjourney …the output is protectable.”

Midjourney is an AI-based system that generates images based on text prompts entered by users. Kashtanova wrote the text of “Zarya of the Dawn,” and Midjourney created the book’s images based on prompts.

The Copyright Office told Kashtanova in October it would reconsider the book’s copyright registration because the application did not disclose Midjourney’s role.

The office said on Tuesday that it would grant copyright protection for the book’s text and the way Kashtanova selected and arranged its elements. But it said Kashtanova was not the “master mind” behind the images themselves.

“The fact that Midjourney’s specific output cannot be predicted by users makes Midjourney different for copyright purposes than other tools used by artists,” the letter said.

Source: AI-created images lose U.S. copyrights in test for new technology | Reuters

I am not sure why they are calling this a victory, as the Copyright Office is basically reiterating that what Kashtanova created is theirs, and that what an AI created cannot be copyrighted, either by Kashtanova or by the AI itself. That’s a loss for the AI.