The Linkielist

Linking ideas with the world

Huge data leak shatters the lie that the innocent need not fear surveillance – governments are spying on critics, journalists and others without a warrant using NSO’s commercial Pegasus spyware

Billions of people are inseparable from their phones. Their devices are within reach – and earshot – for almost every daily experience, from the most mundane to the most intimate.

Few pause to think that their phones can be transformed into surveillance devices, with someone thousands of miles away silently extracting their messages, photos and location, activating their microphone to record them in real time.

Such are the capabilities of Pegasus, the spyware manufactured by NSO Group, the Israeli purveyor of weapons of mass surveillance.

NSO rejects this label. It insists only carefully vetted government intelligence and law enforcement agencies can use Pegasus, and only to penetrate the phones of “legitimate criminal or terror group targets”.

Yet in the coming days the Guardian will be revealing the identities of many innocent people who have been identified as candidates for possible surveillance by NSO clients in a massive leak of data.

Without forensics on their devices, we cannot know whether governments successfully targeted these people. But the presence of their names on this list indicates the lengths to which governments may go to spy on critics, rivals and opponents.

First we reveal how journalists across the world were selected as potential targets by these clients prior to a possible hack using NSO surveillance tools.

Over the coming week we will be revealing the identities of more people whose phone numbers appear in the leak. They include lawyers, human rights defenders, religious figures, academics, businesspeople, diplomats, senior government officials and heads of state.

Our reporting is rooted in the public interest. We believe the public should know that NSO’s technology is being abused by the governments who license and operate its spyware. But we also believe it is in the public interest to reveal how governments look to spy on their citizens and how seemingly benign processes such as HLR lookups can be exploited in this environment.
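
For anyone wondering what an HLR lookup is: it is a query against a carrier’s Home Location Register, the database that records which network a phone number belongs to and, roughly, where the handset currently is. Commercial resellers expose this over simple web APIs, which is exactly what makes a “seemingly benign process” so easy to exploit. A minimal sketch of such a query – the provider, endpoint and response fields below are entirely invented for illustration:

```python
import requests

# Hypothetical HLR-lookup reseller. Real services expose similar REST APIs,
# but this endpoint, its parameters and its response fields are invented.
HLR_API = "https://api.example-hlr-reseller.com/v1/lookup"
API_KEY = "your-api-key"

def hlr_lookup(msisdn: str) -> dict:
    """Ask the reseller to query the carrier's Home Location Register
    for the given phone number (MSISDN)."""
    resp = requests.get(HLR_API,
                        params={"msisdn": msisdn, "api_key": API_KEY},
                        timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    info = hlr_lookup("+447700900123")  # a reserved, fictional UK number
    # One cheap query, no warrant, and the subscriber never knows:
    print(info.get("status"),   # is the number live?
          info.get("network"),  # home operator
          info.get("roaming"))  # currently roaming? (coarse location)
```

The point is how low the bar is: anyone with an account at such a reseller can run these checks in bulk and in silence, which is presumably why the piece singles HLR lookups out.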

[…]

Companies such as NSO operate in a market that is almost entirely unregulated, enabling tools that can be used as instruments of repression for authoritarian regimes such as those in Saudi Arabia, Kazakhstan and Azerbaijan.

The market for NSO-style surveillance-on-demand services has boomed since Edward Snowden’s revelations prompted the mass adoption of encryption across the internet. As a result the internet became far more secure, and mass harvesting of communications much more difficult.

But that in turn spurred the proliferation of companies such as NSO offering solutions to governments struggling to intercept messages, emails and calls in transit. The NSO answer was to bypass encryption by hacking devices.

Two years ago the then UN special rapporteur on freedom of expression, David Kaye, called for a moratorium on the sale of NSO-style spyware to governments until viable export controls could be put in place. He warned of an industry that seemed “out of control, unaccountable and unconstrained in providing governments with relatively low-cost access to the sorts of spying tools that only the most advanced state intelligence services were previously able to use”.

His warnings were ignored. The sale of spyware continued unabated. That GCHQ-like surveillance tools are now available for purchase by repressive governments may give some of Snowden’s critics pause for thought.

[…]

Source: Huge data leak shatters the lie that the innocent need not fear surveillance | Surveillance | The Guardian

Police Are Telling ShotSpotter to Alter Evidence From Gunshot-Detecting AI

On May 31 last year, 25-year-old Safarain Herring was shot in the head and dropped off at St. Bernard Hospital in Chicago by a man named Michael Williams. He died two days later.

Chicago police eventually arrested the 64-year-old Williams and charged him with murder (Williams maintains that Herring was hit in a drive-by shooting). A key piece of evidence in the case is video surveillance footage showing Williams’ car stopped on the 6300 block of South Stony Island Avenue at 11:46 p.m.—the time and location where police say they know Herring was shot.

How did they know that’s where the shooting happened? Police said ShotSpotter, a surveillance system that uses hidden microphone sensors to detect the sound and location of gunshots, generated an alert for that time and place.

Except that’s not entirely true, according to recent court filings.

That night, 19 ShotSpotter sensors detected a percussive sound at 11:46 p.m. and determined the location to be 5700 South Lake Shore Drive—a mile away from the site where prosecutors say Williams committed the murder, according to a motion filed by Williams’ public defender. The company’s algorithms initially classified the sound as a firework. That weekend had seen widespread protests in Chicago in response to George Floyd’s murder, and some of those protesting lit fireworks.

But after the 11:46 p.m. alert came in, a ShotSpotter analyst manually overrode the algorithms and “reclassified” the sound as a gunshot. Then, months later and after “post-processing,” another ShotSpotter analyst changed the alert’s coordinates to a location on South Stony Island Avenue near where Williams’ car was seen on camera.
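
Pinning a bang to a street address from microphone timings is, in principle, standard multilateration: each pair of sensors constrains the source location through the difference in arrival times. ShotSpotter has never allowed an independent audit of its actual algorithms, so the following is only a generic sketch of the technique, with invented sensor positions and timings:

```python
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # speed of sound in air, m/s (varies with temperature)

# Invented sensor positions (metres, local grid) and the times (s) at which
# each sensor heard the same impulse -- purely illustrative numbers.
sensors = np.array([[0.0, 0.0], [800.0, 50.0], [400.0, 900.0], [-300.0, 600.0]])
t = np.array([2.104, 1.912, 1.781, 2.035])

def residuals(p):
    """Mismatch between predicted and measured time-differences of arrival
    for a candidate source position p."""
    d = np.linalg.norm(sensors - p, axis=1)  # distance to each sensor
    tdoa_pred = (d - d[0]) / C               # TDOA relative to sensor 0
    tdoa_meas = t - t[0]
    return tdoa_pred - tdoa_meas

# Least-squares fit, starting from the centroid of the sensor array.
fit = least_squares(residuals, x0=sensors.mean(axis=0))
print("estimated source position (m):", fit.x)
```

Note how much rides on which sounds get paired with which timestamps: reclassify one impulse or swap in a different set of detections and the dot on the map moves – which is precisely the discretion the manual “post-processing” described above exercises.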

A screenshot of the ShotSpotter alert from 11:46 PM, May 31, 2020, showing that the sound was manually reclassified from a firecracker to a gunshot.

“Through this human-involved method, the ShotSpotter output in this case was dramatically transformed from data that did not support criminal charges of any kind to data that now forms the centerpiece of the prosecution’s murder case against Mr. Williams,” the public defender wrote in the motion.

[…]

The case isn’t an anomaly, and the pattern it represents could have huge ramifications for ShotSpotter in Chicago, where the technology generates an average of 21,000 alerts each year. The technology is also currently in use in more than 100 cities.

Motherboard’s review of court documents from the Williams case and other trials in Chicago and New York State, including testimony from ShotSpotter’s favored expert witness, suggests that the company’s analysts frequently modify alerts at the request of police departments—some of which appear to be grasping for evidence that supports their narrative of events.

[…]

Untested evidence

Had the Cook County State’s Attorney’s office not withdrawn the evidence in the Williams case, it would likely have become the first time an Illinois court formally examined the science and source code behind ShotSpotter, Jonathan Manes, an attorney at the MacArthur Justice Center, told Motherboard.

“Rather than defend the evidence, [prosecutors] just ran away from it,” he said. “Right now, nobody outside of ShotSpotter has ever been able to look under the hood and audit this technology. We wouldn’t let forensic crime labs use a DNA test that hadn’t been vetted and audited.”

[…]

A pattern of alterations

In 2016, Rochester, New York, police looking for a suspicious vehicle stopped the wrong car and shot the passenger, Silvon Simmons, in the back three times. They charged him with firing first at officers.

The only evidence against Simmons came from ShotSpotter. Initially, the company’s sensors didn’t detect any gunshots, and the algorithms ruled that the sounds came from helicopter rotors. After Rochester police contacted ShotSpotter, an analyst ruled that there had been four gunshots—the number of times police fired at Simmons, missing once.

Paul Greene, ShotSpotter’s expert witness and an employee of the company, testified at Simmons’ trial that “subsequently he was asked by the Rochester Police Department to essentially search and see if there were more shots fired than ShotSpotter picked up,” according to a civil lawsuit Simmons has filed against the city and the company. Greene found a fifth shot, despite there being no physical evidence at the scene that Simmons had fired. Rochester police had also refused Simmons’ multiple requests to have his hands and clothing tested for gunshot residue.

Curiously, the ShotSpotter audio files that were the only evidence of the phantom fifth shot have disappeared.

Both the company and the Rochester Police Department “lost, deleted and/or destroyed the spool and/or other information containing sounds pertaining to the officer-involved shooting,”

[…]

Greene—who has testified as a government witness in dozens of criminal trials—was involved in another altered report in Chicago, in 2018, when Ernesto Godinez, then 27, was charged with shooting a federal agent in the city.

The evidence against him included a report from ShotSpotter stating that seven shots had been fired at the scene, including five from the vicinity of a doorway where video surveillance showed Godinez to be standing and near where shell casings were later found. The video surveillance did not show any muzzle flashes from the doorway, and the shell casings could not be matched to the bullets that hit the agent, according to court records.

During the trial, Greene testified under cross-examination that the initial ShotSpotter alert only indicated two gunshots (those fired by an officer in response to the original shooting). But after Chicago police contacted ShotSpotter, Greene re-analyzed the audio files.

[…]

Prior to the trial, the judge ruled that Godinez could not contest ShotSpotter’s accuracy or Greene’s qualifications as an expert witness. Godinez has appealed the conviction, in large part due to that ruling.

“The reliability of their technology has never been challenged in court and nobody is doing anything about it,” Gal Pissetzky, Godinez’s attorney, told Motherboard. “Chicago is paying millions of dollars for their technology and then, in a way, preventing anybody from challenging it.”

The evidence

At the core of the opposition to ShotSpotter is the lack of empirical evidence that it works—in terms of both its sensor accuracy and the system’s overall effect on gun crime.

The company has not allowed any independent testing of its algorithms, and there’s evidence that the claims it makes in marketing materials about accuracy may not be entirely scientific.

Over the years, ShotSpotter’s claims about its accuracy have increased, from 80 percent accurate to 90 percent accurate to 97 percent accurate. According to Greene, those numbers aren’t actually calculated by engineers, though.

“Our guarantee was put together by our sales and marketing department, not our engineers,” Greene told a San Francisco court in 2017. “We need to give them [customers] a number … We have to tell them something. … It’s not perfect. The dot on the map is simply a starting point.”

In May, the MacArthur Justice Center analyzed ShotSpotter data and found that over a 21-month period 89 percent of the alerts the technology generated in Chicago led to no evidence of a gun crime and 86 percent of the alerts led to no evidence a crime had been committed at all.

[…]

Meanwhile, a growing body of research suggests that ShotSpotter has not led to any decrease in gun crime in cities where it’s deployed, and several customers have dropped the company, citing too many false alarms and the lack of return on investment.

[…]

A 2021 study by New York University School of Law’s Policing Project determined that assaults (which include some gun crime) decreased by 30 percent in some districts in St. Louis County after ShotSpotter was installed. The study authors disclosed that ShotSpotter has been providing the Policing Project unrestricted funding since 2018, that ShotSpotter’s CEO sits on the Policing Project’s advisory board, and that ShotSpotter has previously compensated Policing Project researchers.

[…]

Motherboard recently obtained data demonstrating the stark racial disparity in how Chicago has deployed ShotSpotter. The sensors have been placed almost exclusively in predominantly Black and brown communities, while the white enclaves in the north and northwest of the city have no sensors at all, despite Chicago police data that shows gun crime is spread throughout the city.

Community members say they’ve seen little benefit from the technology in the form of less gun violence—the number of shootings in 2021 is on pace to be the highest in four years—or better interactions with police officers.

[…]

Source: Police Are Telling ShotSpotter to Alter Evidence From Gunshot-Detecting AI

QR Menu Codes Are Tracking You More Than You Think

If you’ve returned to the restaurants and bars that have reopened in your neighborhood lately, you might have noticed a new addition to the post-quarantine decor: QR codes. Everywhere. And as they’ve become more ubiquitous on the dining scene, so has the quiet tracking and targeting that they do.

That’s according to a new analysis by the New York Times, which found that these QR codes can collect customer data—enough to create what Jay Stanley, a senior policy analyst at the American Civil Liberties Union, called an “entire apparatus of online tracking” that remembers who you are every time you sit down for a meal. While the data itself contains pretty uninteresting information, like your order history or contact information, it turns out there’s nothing stopping that data from being passed to whomever the establishment wants.

[…]

But as the Times piece points out, these little pieces of tech aren’t as innocuous as they might initially seem. Aside from storing data like menus or drink options, QR codes are often designed to transmit certain data about the person who scanned them—like their phone number or email address, along with how often the user scans the code in question. This data collection comes with a few perks for the restaurants that use the codes (they know who their repeat customers are and what they might order). The only problem is that we don’t actually know where that data goes.

Source: QR Menu Codes Are Tracking You More Than You Think

Note for ant fuckers: the QR code does not in fact “transmit” anything – if you follow the URL in the code, the server behind it detects that you have visited and then collects data based on what you do on the server, but also on the initial connection (eg location through IP address, URL parameters which can include location information, OS, browser type, etc etc etc)
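
To make that concrete: the code is just a URL, and the collecting happens server-side. Here is a minimal sketch (Flask, with a URL and parameters invented for illustration) of what a menu server can log from nothing more than your scan:

```python
from flask import Flask, request

app = Flask(__name__)

# A table QR code typically encodes a URL such as:
#   https://menu.example.com/t?loc=table-12&src=qr
# The parameters are printed into the code itself; everything else
# arrives for free with the HTTP request.
@app.route("/t")
def menu():
    visit = {
        "ip": request.remote_addr,                        # coarse location
        "user_agent": request.headers.get("User-Agent"),  # OS, browser, device
        "table": request.args.get("loc"),                 # where you're sitting
        "campaign": request.args.get("src"),
    }
    app.logger.info("qr visit: %s", visit)  # off to the analytics pipeline
    return "menu goes here"

if __name__ == "__main__":
    app.run()
```

Add a cookie or an order form and the server can recognise you on every return visit – the “entire apparatus of online tracking” from the Times piece is this, done at scale and shared onward.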

Want unemployment benefits in the US? You may have to submit to facial recognition by a little-known company, ID.me

[…]

Watkins, a self-described privacy advocate whose mother and grandmother shredded personal information when he was growing up, said he is unwilling to complete the identity verification process his state now requires, which includes having his face analyzed by a little-known company called ID.me.

He sent a sharply worded letter to his state’s unemployment agency criticizing ID.me’s service, saying he would not take part in it given his privacy concerns. In response, he received an automated note from the agency: “If you do not verify your identity soon, your claim will be disqualified and no further benefit payments will be issued.” (A spokesperson for the Colorado Department of Labor and Employment said the agency only allows manual identity verification “as a last resort” for unemployment claimants who are under 18 — because ID.me doesn’t work with minors — and those who have “technological barriers.”)
[…]
Watkins is one of millions across the United States who are being instructed to use ID.me, along with its facial recognition software, to get their unemployment benefits. A rapidly growing number of US states, including Colorado, California and New York, turned to ID.me in hopes of cutting down on a surge of fraudulent claims for state and federal benefits that cropped up during the pandemic alongside a tidal wave of authentic unemployment claims.

As of this month, 27 states’ unemployment agencies had entered contracts with ID.me, according to the company, with 25 of them already using its technology. ID.me said it is in talks with seven more. ID.me also verifies user identities for numerous federal agencies, such as the Department of Veterans Affairs, Social Security Administration and IRS.
[…]
The face-matching technology ID.me employs comes from a San Francisco-based startup called Paravision
[…]
Facial recognition technology, in general, is contentious. Civil rights groups frequently oppose it for privacy issues and other potential dangers. For instance, it has been shown to be less accurate when identifying people of color, and several Black men, at least, have been wrongfully arrested due to the use of facial recognition. It’s barely regulated — there are no federal laws governing its use, though some states and local governments have passed their own rules to limit or prohibit its use. Despite these concerns, the technology has been used across the US federal government, as a June report from the Government Accountability Office showed.

Several ID.me users told CNN Business about problems they had verifying their identities with the company, which ranged from the facial recognition technology failing to recognize their face to waiting for hours to reach a human for a video chat after encountering problems with the technology. A number of people who claim to have had issues with ID.me have taken to social media to beg the company for help with verification, express their own concerns about its face-data collection or simply rant, often in response to ID.me’s own posts on Twitter. And some like Watkins are simply frustrated not to have a say in the matter.
[…]
ID.me said it does not sell user data — which includes biometric and related information such as selfies people upload, data related to facial analyses, and recordings of video chats users participate in with ID.me — but it does keep it. Biometric data, like the facial geometry produced from a user’s selfie, may be kept for years after a user closes their account.

Hall said ID.me keeps this information only for auditing purposes, particularly for government agencies in cases of fraud or identity theft. Users, according to its privacy policy, can ask ID.me to delete personally identifiable information it has gathered from them, but the company “may keep track of certain information if required by law” and may not be able to “completely delete” all user information since it “periodically” backs up such data. (As Ryan Calo, codirector of the University of Washington’s Tech Policy Lab, put it, this data retention policy is “pretty standard,” but, he added, that “doesn’t make it great!”)
[…]
Beyond state unemployment agencies, ID.me is also becoming more widespread among federal agencies such as the IRS, which in June began using ID.me to verify identities of people who want to use its Child Tax Credit Update Portal.

“We’re verifying more than 1% of the American adult population each quarter, and that’s starting to compress more to like 45 or 50 days,” Hall said. The company has more than 50 million users, he said, and signs up more than 230,000 new ones each day.
[…]
Vasquez said that, when a state chooses to use a tool it knows has a tendency to not work as well on some people, she thinks that “starts to invade something more than privacy and get at questions of what society values and how it values different members’ work and what our society believes about dignity.”

Hall claims ID.me’s facial recognition software is over 99% accurate and said an internal test conducted on hundreds of faces of people who had failed to pass the facial recognition check for logging in to the social security website did not show statistically significant evidence of racial bias.
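
“No statistically significant evidence” is a weaker claim than it sounds: a significance test can fail to detect a difference, but it cannot certify fairness. A toy sketch of the kind of check that phrase implies – a two-proportion z-test on failure rates, with entirely fabricated counts:

```python
# Toy comparison of face-match failure rates between two groups.
# The counts are fabricated; this shows the shape of the test, not ID.me's data.
from statsmodels.stats.proportion import proportions_ztest

failures = [18, 34]    # failed verifications observed in each group
attempts = [400, 400]  # total attempts per group

stat, pvalue = proportions_ztest(failures, attempts)
print(f"z = {stat:.2f}, p = {pvalue:.3f}")
# A small p-value (conventionally < 0.05) says the rates genuinely differ.
# A large one only says this sample couldn't tell them apart: with a few
# hundred faces, sizeable disparities can easily come out "not significant".
```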

In cases where users are able to opt out of the ID.me process, it can still be arduous and time-consuming: California’s Employment Development Department website, for instance, instructs people who can’t verify their identity via ID.me when applying online to file their claim over the phone or by mail or fax.

Most people aren’t doing this, however; it’s time-consuming to deal with snail mail or wade through EDD’s phone system, and many people don’t have access to a fax machine. An EDD spokesperson said that such manual identity verification, which used to be a “significant” part of EDD’s backlog, now accounts for “virtually none” of it.

Long wait times for some

Eighty-five percent of people are able to verify their identity with ID.me immediately for state workforce agencies without needing to go through a video chat, Hall said.

What happens to the remaining 15% worries Akselrod, of the ACLU, since users must have access to a device with a camera — like a smartphone or computer — as well as decent internet access. According to recent Pew research, 15% of American adults surveyed don’t have a smartphone and 23% don’t have home broadband.

“These technologies may be inaccessible for precisely the people for whom access to unemployment insurance is the most critical,” Akselrod said.
[…]

Source: Want your unemployment benefits? You may have to submit to facial recognition first – CNN

What this excellent article doesn’t go into is what a terrible idea huge centralised databases are – especially ones filled with the biometric information (which you can’t change) of an entire population

Commission starts legal action against 23 EU countries over copyright rules they won’t implement – rules that favour big tech over small business and force censorship

EU countries may be taken to court for their tardiness in enacting landmark EU copyright rules into national law, the European Commission said on Monday as it asked the group to explain the delays.

The copyright rules, adopted two years ago, aim to ensure a level playing field between the European Union’s trillion-euro creative industries and online platforms such as Google, owned by Alphabet (GOOGL.O), and Facebook (FB.O).

Note: level if you are one of the huge tech giants, not so much if you’re a small business or startup – in fact, this makes it very very difficult for startups to enter some sectors at all.

Some of Europe’s artists and broadcasters, however, are still not happy, in particular over the interpretation of a key provision, Article 17, which is intended to force sharing platforms such as YouTube and Instagram to filter copyrighted content.

[…]

The EU executive also said it had asked France, Spain and 19 other EU countries to explain why they missed a June 7 deadline to enact separate copyright rules for online transmission of radio and TV programmes.

The other countries are Austria, Belgium, Bulgaria, Croatia, Cyprus, the Czech Republic, Estonia, Greece, Finland, Ireland, Italy, Lithuania, Luxembourg, Latvia, Poland, Portugal, Romania, Slovenia and Slovakia.

Source: Commission starts legal action against 23 EU countries over copyright rules | Reuters

For more information see:
Article 11, Article 13: EU’s Dangerous Copyright Bill Advances: massive censorship and upload filters (which are impossible) and huge taxes for links.

European Commission Betrays Internet Users By Cravenly Introducing Huge Loophole For Copyright Companies In Upload Filter Guidance

EU Copyright Companies Want Legal Memes Blocked Too Because They Now Admit Upload Filters Are ‘Practically Unworkable’

Wow, the EU actually voted to break the internet for big business copyright gain

Anyway, well done those 23 countries for fighting for freedom of expression and going against big tech and non-democratic authoritarianism in Europe.

Japanese Police Arrest Man For Selling Modded Save Files For Single-Player Nintendo Game

Japan’s onerous Unfair Competition Prevention Law has created what looks from here like a massive overreach on the criminalization of copyright laws. Past examples include Japanese journalism executives being arrested over a book that tells people how to back up their own DVDs, along with more high-profile cases in which arrests occurred over the selling of cheats or exploits in online multiplayer video games. While these too seem like an overreach of copyright law, or at least an over-criminalization of relatively minor business problems facing electronic media companies, they are nothing compared with the idea that a person could be arrested and face jail time for the crime of selling modded save-game files for a single-player game like The Legend of Zelda: Breath of the Wild.

A 27-year old man in Japan was arrested after he was caught attempting to sell modified Zelda: Breath of The Wild save files.

As reported by the Broadcasting System of Niigata (and spotted by Dextro), Ichimin Sho was arrested on July 8 after he posted about modified save files for the Nintendo Switch version of Breath of The Wild. He advertised his services on an unspecified auction site, describing them as “the strongest software.” He would provide modded save files giving the player improved in-game abilities, and would make hard-to-obtain items available on request. In his original listing, he reportedly charged 3,500 yen (around $31 USD) for the service.

Upon arrest, Sho admitted that he’s made something like $90k over 18 months selling modded saves and software. Whatever his other ventures, the fact remains that Sho was arrested for selling modded saves for this one Zelda game to the public. And this is fully a single-player game. In other words, there is no aspect of this arrest that involved staving off cheating in online multiplayer games, which is the concern that has typically led to these arrests within Japan’s gaming industry.

[…]

Source: Japanese Police Arrest Man For Selling Modded Save Files For Single-Player Nintendo Game | Techdirt

Google fined €500m for not paying French publishers after copying their texts into search results

Google was fined €500m ($590m, £425m) by the French Competition Authority on Tuesday for failing to negotiate fees with news publishers for using their content.

In April last year, the regulator ruled the American search giant had to compensate French publishers for using snippets of their articles in Google News, citing European antitrust rules and copyright law. Google was given three months to figure out how much to pay publishers. More than a year later, no licensing deals have been struck, and Google did not “enter into negotiations in good faith,” we’re told. For one thing, it just stopped including snippets from French publishers in all Google services.

[…]

Now, the FCA has sanctioned the Chocolate Factory €500m and has given it two months to negotiate with French publishers. If the web giant continues to dilly-dally after this point, it’ll be fined up to €900,000 (over $1m or around £767,000) a day until it complies with the FCA’s demands.

[…]

Source: Google fined €500m for not paying French publishers after using their words on web • The Register

Inside the Industry That Unmasks People at Scale: yup your mobile advertising ID isn’t anonymous either

Tech companies have repeatedly reassured the public that trackers used to follow smartphone users through apps are anonymous or at least pseudonymous, not directly identifying the person using the phone. But what they don’t mention is that an entire overlooked industry exists to purposefully and explicitly shatter that anonymity.

They do this by linking mobile advertising IDs (MAIDs) collected by apps to a person’s full name, physical address, and other personal identifiable information (PII). Motherboard confirmed this by posing as a potential customer to a company that offers linking MAIDs to PII.

“If shady data brokers are selling this information, it makes a mockery of advertisers’ claims that the truckloads of data about Americans that they collect and sell is anonymous,” Senator Ron Wyden told Motherboard in a statement.

“We have one of the largest repositories of current, fresh MAIDS<>PII in the USA,” Brad Mack, CEO of data broker BIGDBM told us when we asked about the capabilities of the product while posing as a customer. “All BIGDBM USA data assets are connected to each other,” Mack added, explaining that MAIDs are linked to full name, physical address, and their phone, email address, and IP address if available. The dataset also includes other information, “too numerous to list here,” Mack wrote.

A MAID is a unique identifier that a phone’s operating system assigns to each individual device. For Apple, that is the IDFA, which Apple has recently moved to largely phase out. For Google, that is the AAID, or Android Advertising ID. Apps often grab a user’s MAID and provide it to a host of third parties. In one leaked dataset from a location tracking firm called Predicio previously obtained by Motherboard, the data included the precise locations of users of a Muslim prayer app. That data was somewhat pseudonymized, because it didn’t contain the specific users’ names, but it did contain their MAIDs. Because of firms like BIGDBM, another company that buys the sort of data Predicio had could take that or similar data and attempt to unmask the people in the dataset simply by paying a fee.
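
The “unmasking” itself requires no cleverness at all – it is a plain join on the MAID between a pseudonymous dataset and a broker’s identity table. A toy sketch with entirely fabricated records:

```python
# Why a MAID is only pseudonymous: joining two datasets on it
# re-identifies everyone. All records below are fabricated.

app_locations = [  # what a location-data firm might hold
    {"maid": "38400000-8cf0-11bd-b23e-10b96e40000d", "lat": 51.51, "lon": -0.13},
    {"maid": "7a2f1c44-9b3d-4e02-a1aa-5d0f00aa0001", "lat": 40.71, "lon": -74.01},
]

broker_pii = {  # what a broker of the kind described above sells
    "38400000-8cf0-11bd-b23e-10b96e40000d": {
        "name": "Jane Doe", "address": "1 Example St", "email": "jane@example.com",
    },
}

for row in app_locations:
    identity = broker_pii.get(row["maid"])
    if identity:
        print(f"{identity['name']} ({identity['email']}) "
              f"was at {row['lat']},{row['lon']}")
```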

[…]

“This real-world research proves that the current ad tech bid stream, which reveals mobile IDs within them, is a pseudonymous data flow, and therefore not-compliant with GDPR,” Edwards told Motherboard in an online chat.

“It’s an anonymous identifier, but has been used extensively to report on user behaviour and enable marketing techniques like remarketing,” a post on the website of the Internet Advertising Bureau, a trade group for the ad tech industry, reads, referring to MAIDs.

In April Apple launched iOS 14.5, which introduced sweeping changes to how apps can track phone users by making each app explicitly ask for permission to track them. That move has resulted in a dramatic dip in the amount of data available to third parties, with just 4 percent of U.S. users opting in. Google said it plans to implement a similar opt-in measure broadly across the Android ecosystem in early 2022.

[…]

Source: Inside the Industry That Unmasks People at Scale

Samsung Washing Machine App Requires Access to Your Contacts and Location

A series of Samsung apps that allow customers to control their internet-connected appliances require access to all the phone’s contacts and, in some cases, the phone call app, phone’s location, and camera. Customers have been furious about this for years.

On Wednesday, a Reddit user complained that their washing machine app, the Samsung Smart Washer, wouldn’t work “unless I give it access to my contacts, location and camera.”

This is a common complaint.

[…]

These situations speak to two issues: Apps that demand permissions that they don’t need, and “smart” and internet of things devices that make formerly simple tasks very complicated, and open up potential privacy and security concerns.

Generally speaking, over the last few years, people have become more sensitive to what they’re giving up in privacy and potentially security when they deal with big tech companies. Smart TVs (Samsung included), for example, have been caught listening to users and automatically deliver ads. Tech companies have had to adapt and do better. For example, both Apple and Google allow users to see what data an app has access to, and in some cases users can toggle the permissions individually. The upcoming new version of Android will even have a dedicated “Privacy Dashboard” where users can see which apps used what permissions, and revoke them if they want. Apple’s iOS has similar functionality. But none of this stops app developers from asking users to accept unnecessary permissions.

It’s unclear why apps that are designed to let you set the type of washing cycle you want, or see how long it’s gonna take for the dryer to be done, would need access to your phone’s contacts. In an FAQ for another Samsung app, the company says it needs access to contacts “to check if you already have a Samsung account set up in your device. Knowing this information helps mySamsung to make the sign-in process seamless.”

[…]

Source: Samsung Washing Machine App Requires Access to Your Contacts and Location

DRM Strikes Again: Ubisoft Makes Its Own Game Unplayable By Shutting Down DRM Server

DRM has been shown time after time to be almost no hindrance whatsoever to those seeking to pirate video games, but it has done an excellent job of hindering those who actually bought the game from playing what they’ve bought. Ubisoft, in particular, has had issues with this over the years, with DRM servers failing and preventing customers from playing games that can no longer ping the DRM server.

And while those instances involved unforeseen downtime or migrations impacting customers’ ability to play their games, this time it turns out that Ubisoft simply stopped supporting the DRM server for Might and Magic X: Legacy. And now basically everyone is screwed.

Last month, Ubisoft decided to end online support for a bunch of older games, but in doing so also brought down the DRM servers for Might and Magic X – Legacy, meaning players couldn’t access the game’s single-player content or DLC.

As Eurogamer reports, fans were not happy, having to cobble together an unofficial workaround to be able to continue playing past a certain point in the single-player. But instead of Ubisoft taking the intervening weeks to release something official to fix this, or reversing their original move to shut down the game’s DRM servers, they’ve decided to do something else.

They have simply removed the game from sale on Steam.

This, of course, does nothing for the people who already bought the game and now suddenly cannot progress through it completely, as all the DLC is non-functional. They can play the game up until a point, but then it just doesn’t work.

There are multiple bad actions on Ubisoft’s part here. First, using DRM like this is a terrible idea with almost no good consequences. But once it’s in use, you would think it would be the obligation of the company to ensure any changes it makes on its end don’t suddenly render purchases made by its customers unplayable. In other words, rather than ending support for a DRM server that nixes parts of a paid-for game, the company could have rolled out patches to remove the DRM completely so that none of this happened. After all, with the game no longer even available as a new purchase, what would be the harm in removing the DRM? And, of course, there’s the total lack of communication to Ubisoft customers about basically all of this.

Which is what has people so understandably pissed.

Source: DRM Strikes Again: Ubisoft Makes Its Own Game Unplayable By Shutting Down DRM Server | Techdirt

Audacity users stick the knife – and fork – in to strip audio editor of unwanted features and govt / police spyware

Contributors disgruntled with the recent direction of cross-platform FOSS audio software Audacity are forking the sound editor to a version that does not have the features or requirements that have upset some in the community.

One such project can be found on GitHub, with user “cookiengineer” proclaiming themselves “evil benevolent temporary dictator” in order to get the ball rolling.

“Being friendly seemed to have invited too many trolls,” observed the engineer, “and we must stop this behaviour.”

Presumably that refers to the trolling rather than being friendly. And goodness, the project has had somewhat of a baptism by fire in recent hours as a number of 4chan users elected to launch a raid on it.

This is why we can’t have nice things.

The project is blunt with regard to the causes of the fork – Audacity’s privacy policy updates, the contributor licence agreement, and the furore over introducing telemetry have all played a part.

[…]

Source: Audacity users stick the knife – and fork – in to strip audio editor of unwanted features • The Register

Sam Altman’s New Startup Wants to Give You Crypto for Eyeball Scans – yes, this is a terrible Dr Evil plan

You should probably sit down for this one. Sam Altman, the former CEO of famed startup incubator Y Combinator, is reportedly working on a new cryptocurrency that’ll be distributed to everyone on Earth. Once you agree to scan your eyeballs.

Yes, you read correctly.

You can thank Bloomberg for inflicting this cursed news on the rest of us. In its report, Bloomberg says Altman’s forthcoming cryptocurrency and the company behind it, both dubbed Worldcoin, recently raised $25 million from investors. The company is purportedly backed by Andreessen Horowitz, LinkedIn founder Reid Hoffman, and Day One Ventures.

“I’ve been very interested in things like universal basic income and what’s going to happen to global wealth redistribution and how we can do that better,” Altman told Bloomberg, explaining what fever dream inspired this.
[…]

What supposedly makes Worldcoin different is it adds a hardware component to cryptocurrency in a bid to “ensur[e] both humanness and uniqueness of everybody signing up, while maintaining their privacy and the overall transparency of a permissionless blockchain.” Specifically, Bloomberg says the gadget is a portable “silver-colored spherical gizmo the size of a basketball” that’s used to scan people’s irises. It’s undergoing testing in some cities, and since Worldcoin is not yet ready for distribution, the company is giving volunteers other cryptocurrencies like Bitcoin in exchange for participating. There are supposedly fewer than 20 prototypes of this eyeball scanning orb, and currently, each reportedly costs $5,000 to make.

Supposedly the whole iris scanning thing is “essential” as it would generate a “unique numerical code” for each person, thereby discouraging scammers from signing up multiple times. As for the whole privacy problem, Worldcoin says the scanned image is deleted afterward and the company purportedly plans to be “as transparent as possible.”

Source: Sam Altman’s New Startup Wants to Give You Crypto for Eyeball Scans

Advertisers Are Selling Americans’ Data to Hundreds of Shady Foreign Businesses

Senator Ron Wyden has released a list of hundreds of secretive, foreign-owned companies that are buying up Americans’ data. Some of the customers include companies based in states that are ostensibly “unfriendly” to the U.S., like Russia and China.

First reported by Motherboard, the news comes after recent information requests made by a bipartisan coalition of Senators, who asked prominent advertising exchanges to provide a transparent list of any “foreign-headquartered or foreign-majority owned” firms to whom they sell consumer “bidstream data.” Such data is typically collected, bought, and sold amidst the intricate advertising ecosystem, which uses “real-time bidding” to monetize consumer preferences and interests.

Wyden, who helped lead the effort, has expressed concerns that Americans’ data could fall into the hands of foreign intelligence agencies to “supercharge hacking, blackmail, and influence campaigns,” as a previous letter from him and other Senators puts it.

“Few Americans realize that some auction participants are siphoning off and storing ‘bidstream’ data to compile exhaustive dossiers about them. In turn, these dossiers are being openly sold to anyone with a credit card, including to hedge funds, political campaigns, and even to governments,” the letter states.

In response to the information requests, most companies seem to have responded with vague, evasive answers. However, advertising firm Magnite has provided a list of over 150 different companies it sells to while declining to note which countries they are based in. Wyden’s staff spent time researching the companies and Motherboard reports that the list includes the likes of Adfalcon—a large ad firm based in Dubai that calls itself the “first mobile advertising network in the Middle East”—as well as Chinese companies like Adtiming and Mobvista International.

Magnite’s response further shows that the kinds of data it provides to these companies may include all sorts of user information—age, name, the site names and domains they visit, device identifiers, IP address, and other details that would help any discerning observer piece together a fairly comprehensive picture of who you are, where you’re located, and what you’re interested in.

You can peruse the full list of companies that Magnite works with and, foreign ownership aside, they just naturally sound creepy. With confidence-inspiring names like “12Mnkys,” “Freakout,” “CyberAgent Dynalst,” and “Zucks,” these firms—many of which you’d be hard-pressed to even find an accessible website for—are doing God knows what with the data they procure.

The question naturally arises: How is it that these companies that we know literally nothing about seem to have access to so much of our personal information? Also: Where are the federal regulations when you need them?

Source: Advertisers Are Selling Americans’ Data to Hundreds of Shady Foreign Businesses

And that’s why Europe has GDPR

Microsoft exec: Targeting of Americans’ records ‘routine’

Federal law enforcement agencies secretly seek the data of Microsoft customers thousands of times a year, according to congressional testimony Wednesday by a senior executive at the technology company.

Tom Burt, Microsoft’s corporate vice president for customer security and trust, told members of the House Judiciary Committee that federal law enforcement in recent years has been presenting the company with between 2,400 and 3,500 secrecy orders a year, or about seven to 10 a day.

“Most shocking is just how routine secrecy orders have become when law enforcement targets an American’s email, text messages or other sensitive data stored in the cloud,” said Burt, describing the widespread clandestine surveillance as a major shift from historical norms.

[…]

Brad Smith, Microsoft’s president, called for an end to the overuse of secret gag orders, arguing in a Washington Post opinion piece that “prosecutors too often are exploiting technology to abuse our fundamental freedoms.” Attorney General Merrick Garland, meanwhile, has said the Justice Department will abandon its practice of seizing reporter records and will formalize that stance soon.

[…]

Burt said that while the revelation that federal prosecutors had sought data about journalists and political figures was shocking to many Americans, the scope of surveillance is much broader. He criticized prosecutors for reflexively seeking secrecy through boilerplate requests that “enable law enforcement to just simply assert a conclusion that a secrecy order is necessary.”

[…]

As possible solutions, Burt said, the government should end indefinite secrecy orders and should also be required to notify the target of the data demand once the secrecy order has expired.

Just this week, he said, prosecutors sought a blanket gag order affecting the government of a major U.S. city for a Microsoft data request targeting a single employee there.

“Without reform, abuses will continue to occur and they will occur in the dark,” Burt said.

Source: Microsoft exec: Targeting of Americans’ records ‘routine’

Supreme Court disallows Dutch Filmworks from forcing ISPs to give out personal details of potential movie downloaders

As expected, the Supreme Court rejected Dutch FilmWorks’ appeal in cassation. The highest judicial body follows the reasoning of the Prosecutor General, who previously issued advice on the case.

DFW announced in 2015 that it would take enforcement action against people who illegally download films, a move that was widely publicized. DFW wanted to approach individual users and possibly even fine them, and engaged an outside company to collect IP addresses – data collection for which it received permission. However, in order to approach these users, DFW needed their names and addresses, which are known only to internet providers. Ziggo refused to provide that information. Dutch Filmworks’ claim was rejected by the lower court, and the Supreme Court sees no reason to annul that judgment.

Source: Zaak Dutch Filmworks strandt bij Hoge Raad – Emerce

Windows Users Surprised by Windows 11’s Short List of Supported CPUs – and front-facing camera requirements

While a lot of focus has been on the TPM requirements for Windows 11, Microsoft has since updated its documentation to provide a complete list of supported processors. At present the list includes only Intel 8th Generation Core processors or newer, and AMD Ryzen Zen+ processors or newer, effectively limiting Windows 11 to PCs less than 4-5 years old.

Notably absent from the list is the Intel Core i7-7820HQ, the processor used in Microsoft’s current flagship $3500+ Surface Studio 2. This has prompted many threads on Reddit from users angry that their (in some cases very new) Surface PC is failing the Windows 11 upgrade check.
The Verge confirms: Windows 11 will only support 8th Gen and newer Intel Core processors, alongside [Intel’s 2016-era] Apollo Lake and newer Pentium and Celeron processors. That immediately rules out millions of existing Windows 10 devices from upgrading to Windows 11… Windows 11 will also only support AMD Ryzen 2000 and newer processors, and 2nd Gen or newer [AMD] EPYC chips. You can find the full list of supported processors on Microsoft’s site…

Originally, Microsoft noted that CPU generation requirements are a “soft floor” limit for the Windows 11 installer, which should have allowed some older CPUs to be able to install Windows 11 with a warning, but hours after we published this story, the company updated that page to explicitly require the list of chips above.

Many Windows 10 users have been downloading Microsoft’s PC Health App (available here) to see whether Windows 11 works on their systems, only to find it fails the check… This is the first significant shift in Windows hardware requirements since the release of Windows 8 back in 2012, and the CPU changes are understandably catching people by surprise.
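
Out of curiosity, the Intel side of the CPU floor is easy to approximate yourself. A rough, unofficial sketch (Windows-only; it parses classic ‘i7-7820HQ’-style model numbers, knows nothing about AMD parts or newer naming schemes, and skips the TPM, Secure Boot, RAM and storage checks the real tool performs):

```python
import re
import winreg  # Windows-only standard-library module

def cpu_marketing_name() -> str:
    # The human-readable CPU name lives in the Windows registry.
    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                         r"HARDWARE\DESCRIPTION\System\CentralProcessor\0")
    name, _ = winreg.QueryValueEx(key, "ProcessorNameString")
    return name

def intel_core_generation(name):
    # 'i7-7820HQ' -> 7, 'i7-10750H' -> 10: the digits before the three-digit
    # SKU give the generation. (A toy parser; some newer schemes differ.)
    m = re.search(r"\bi[3579]-(\d{4,5})", name)
    return int(m.group(1)[:-3]) if m else None

name = cpu_marketing_name()
gen = intel_core_generation(name)
if gen is None:
    print(f"{name}: not a classic Intel Core part; check Microsoft's full list")
elif gen >= 8:
    print(f"{name}: generation {gen}, meets the 8th-gen floor")
else:
    print(f"{name}: generation {gen}, fails the Windows 11 CPU check")
```

On the Surface Studio 2’s i7-7820HQ this prints a fail – which is exactly what has owners annoyed.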

Microsoft is also requiring a front-facing camera for all Windows 11 devices except desktop PCs from January 2023 onwards.
“In order to run Windows 11, devices must meet the hardware specifications,” explains Microsoft’s official compatibility page for Windows 11.

“Devices that do not meet the hardware requirements cannot be upgraded to Windows 11.”

Source: Windows Users Surprised by Windows 11’s Short List of Supported CPUs – Slashdot

Why on earth should Microsoft require that it can look at you?!

Ubisoft Takes Down Fan’s Incredible Far Cry 5 ‘GoldenEye’ Maps

For the past few years, a YouTuber known as Krollywood has painstakingly recreated every level from GoldenEye 007 inside the level editor of Far Cry 5. This week, Ubisoft removed all of those levels from Far Cry 5 due to a copyright infringement claim.

Kotaku first reported on Krollywood’s efforts earlier this month. Over the course of three years, in an endeavor that tallied more than 1,400 hours, Krollywood recreated every stage from GoldenEye 007, the classic N64 shooter (well, save for the two bonus levels). It was an impressive effort: a modernized recreation of a beloved yet tough-to-find old game. And it looked great, too.

Read More: Here’s GoldenEye 007 Remade From The Ground Up In Far Cry 5

You could find and play these levels yourself by hopping into Far Cry 5’s arcade mode and punching in Krollywood’s username. As of this writing, you no longer can. Ubisoft removed them all from Far Cry 5, a move that Krollywood described as “really sad,” noting that he probably won’t be able to restore them since he’s “on their radar now.”

“I’m really sad—not because of myself or the work I put in the last three years, [but] because of the players who wanna play it or bought Far Cry just to play my levels,” Krollywood told Kotaku in an email today.

When reached for comment, a representative for Ubisoft kicked over this statement:

In following the guidelines within the ‘Terms of Use’, there were maps created within Far Cry 5 arcade that have been removed due to copyright infringement claims from a right [sic] holder received by Ubisoft and are currently unavailable. We respect the intellectual property rights of others and expect our users to do the same. This matter is currently with the map’s creator and the rights holder and we have nothing further to share at this time.

Ubisoft did not immediately respond to follow-up requests asking whether the rights holder mentioned is MGM, which controls the license to the original GoldenEye 007.

The rights around the GoldenEye 007 game have been stuck in a quagmire for decades. Famously, Rare, the developer of the original game, planned a remake for the Xbox 360. That was cancelled in 2008. (Years later, Xbox boss Phil Spencer chalked up the cancellation to the legal rights issues being “challenging.”) That canned remake resurfaced as a full 4K60 longplay via a leak this January, with a playable version making the rounds online shortly after. A Kotaku report concluded: It was fun.

It is further unclear how, exactly, Krollywood’s map remakes in Far Cry 5 harm MGM at all—or how they violate Ubisoft’s terms of service in the first place. Krollywood didn’t use any assets or code from the original game. He didn’t attempt to sell it or otherwise turn a profit. And MGM doesn’t own any of the code from Ubisoft’s open-world shooter.

A sampling of Krollywood’s efforts. (Image: Krollywood / Ubisoft)
Those corpses represent every attempt to play GoldenEye 007 in any other format than the original game. (Image: Krollywood / Ubisoft)
Some of the remade levels stoke major wanderlust. (Image: Krollywood / Ubisoft)

Players just want a taste of nostalgia, and MGM has a track record of shattering the plates before they’re even delivered to the table. (Recall GoldenEye 25, the fan remake of GoldenEye 007 remade entirely in Unreal 4 that was lawyered into oblivion last year.) MGM has further neglected to do anything with the license it’s sitting on—for a game that’s older than the Game Boy Color, by the way. At the end of the day, shooting this latest fan-made project out of the sky comes across as a punitive move, at best.

“In the beginning, I started this project just for me and my best friend, because we loved the original game so much,” Krollywood said. “But there are many GoldenEye fans out there … [The project] found many new fans and I’m so happy about it.”

Source: Ubisoft Takes Down Fan’s Incredible Far Cry 5 ‘GoldenEye’ Maps

Bah. Humbug.

New ‘Guardians Of The Galaxy’ Game Has Game Streamers Worried Over Integral Music In The Game, shows you how stupid copyright and the DMCA are nowadays

With streaming games and “let’s plays” becoming a dominant force of influence in the gaming world, one of the sillier trends we’ve seen is video games coming out with “stream safe” settings that strip out audio content for which there is no broadcast license. We’ve talked already about how this sort of thing is not a solution to the actual problem — the complicated licenses surrounding copyrighted works and the permission culture that birthed them — but is rather a ploy to simply ignore that problem entirely. That hasn’t stopped this from becoming a more regular thing in the gaming world, even as we’ve seen examples of “stream safe” settings fail to keep streams from getting DMCA notices.

Well, if there were a perfect example of a video game that highlights the absurdity of all of this, it may well be the forthcoming Guardians of the Galaxy title. If you’re not familiar with the GotG movies, you should know that retro music plays a major role in the films. The game promises that retro music will be just as important as in the films. And that’s what immediately set off concern for game streamers.

One group that is wary of this heavy emphasis on pop music is the livestreaming crowd, who are concerned that it could make the game near-impossible to broadcast. This is because Twitch and YouTube creators are regularly hit with what are known as Digital Millennium Copyright Act (DMCA) notices.

[…]

The game publisher of course secured the rights to the songs to be included in the game, but did not license the songs for rebroadcast. Because the world is an extremely stupid place, streaming a game equates to a rebroadcast of any music within it. And, also because the world is an extremely stupid place, Eidos-Montreal’s solution to this is once again to mute licensed music.

Newsweek contacted Eidos-Montréal to ask if they had made any considerations for Twitch streamers in respect to Guardians of the Galaxy’s music. Over email, a spokesperson confirmed that there will actually be an option to mute licensed tracks, if players want to be absolutely safe from potential DMCA takedowns.

And so a major thematic element for the franchise will be nixed in any live-streams of the game.

[…]

Source: New ‘Guardians Of The Galaxy’ Game Has Game Streamers Worried Over Integral Music In The Game | Techdirt

Amazon is blocking Google’s FLoC

Most of Amazon’s properties including Amazon.com, WholeFoods.com and Zappos.com are preventing Google’s tracking system FLoC — or Federated Learning of Cohorts — from gathering valuable data reflecting the products people research in Amazon’s vast e-commerce universe, according to website code analyzed by Digiday and three technology experts who helped Digiday review the code.

Amazon declined to comment on this story.

As Google’s system gathers data about people’s web travels to inform how it categorizes them, Amazon’s under-the-radar move could not only be a significant blow to Google’s mission to guide the future of digital ad tracking after cookies die — it could give Amazon a leg up in its own efforts to sell advertising across what’s left of the open web.

[…]

Digiday watched last week as Amazon added code to its digital properties to block FLoC from tracking visitors using Google’s Chrome browser. For example, while earlier in the week WholeFoods.com and Woot.com did not include code to block FLoC, by Thursday Digiday saw that those sites did feature code telling Google’s system not to include activities of their visitors to inform cohorts or assign IDs. But Amazon’s blocking appears scattered.
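
The “code” being described is almost certainly the opt-out Google itself documented: a Permissions-Policy HTTP response header with an empty allowlist for the interest-cohort feature. A minimal sketch of a site setting it on every response (Flask stands in here for whatever Amazon actually serves pages with):

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def opt_out_of_floc(response):
    # Google's documented FLoC opt-out: an empty interest-cohort allowlist
    # tells Chrome not to use visits to this site when computing cohorts.
    response.headers["Permissions-Policy"] = "interest-cohort=()"
    return response

@app.route("/")
def index():
    return "storefront goes here"
```

Because it is a per-response header, “scattered” blocking is unsurprising: any property or page served without the header stays in the cohort calculation.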

[…]

Source: Amazon is blocking Google’s FLoC — and that could seriously weaken the system

Open-source projects glibc and gnulib look to sever copyright ties with Free Software Foundation

The GNU C Library (glibc) and GNU Portability Library (gnulib) are laying the groundwork to divorce themselves from the troubled Free Software Foundation by removing the requirement for copyright assignment.

This move follows in the footsteps of the same shift by the GNU Compiler Collection (GCC) on 2 June.

Like many projects under the GNU umbrella, glibc and gnulib – the GNU Project’s C standard library and a collection of subroutines designed to ease cross-platform porting respectively – allow anyone to contribute code. Those doing so are asked to assign copyright to the Free Software Foundation – for now, at least.

[…]

“The changes to accept patches with or without FSF copyright assignment would be effective on August 2nd, and would apply to all open branches.”

[…]

Andrew Katz, managing partner and head of tech and IP at Moorcrofts Corporate Law, said of the move: “My view is that the GPL is sufficient in itself. For GPL, licence in = licence out seems to be the fairest approach from both the developers’ and the project’s perspective, and it means that, ultimately, the developers remain in control of their code.

“Recent questions about governance of the FSF (specifically, concerning RMS’s departure and reinstatement) may cause people to be concerned about the quality of that governance as regards licensing decisions. Assigning copyright to an organisation requires a significant amount of trust, and developers may understandably be concerned that trusting a third party (whether a business or a not-for-profit) presents a greater risk than retaining their own rights in the code.”

Source: Open-source projects glibc and gnulib look to sever copyright ties with Free Software Foundation • The Register

House introduces five antitrust bills targeting Apple, Google, Facebook and Amazon

Lawmakers in the House have introduced five new bills that would place significant limits on major tech companies, including Apple, Google, Facebook and Amazon. The proposed legislation is part of a broader effort to step up antitrust enforcement against tech giants. The bills would place new limits on the companies’ ability to acquire new businesses and change how they treat their own services compared with competitors.

“From Amazon and Facebook to Google and Apple, it is clear that these unregulated tech giants have become too big to care and too powerful to ever put people over profit,” Rep. Pramila Jayapal said in a statement. “By reasserting the power of Congress, our landmark bipartisan bills rein in anti-competitive behavior, prevent monopolistic practices, and restore fairness and competition while finally leveling the playing field and allowing innovation to thrive.”

The bills include:

[…]

Notably, the bills have bipartisan support; limiting the power of big tech platforms has been a rare source of cross-party agreement in Congress. Though the bills don’t name individual companies, the legislation could have a significant impact on Facebook, Google, Amazon and Apple, which have faced increasing scrutiny from Congress over their business practices and market dominance.

Source: House introduces five antitrust bills targeting Apple, Google, Facebook and Amazon | Engadget

Apple and Microsoft Say They Had No Idea Trump-Era DOJ Requested Data on Political Rivals

Apple didn’t know the Department of Justice was requesting metadata of Democratic lawmakers when it complied with a subpoena during a Trump-era leak investigation, CNBC reports. And it wasn’t the only tech giant tapped in these probes: Microsoft confirmed Friday it received a similar subpoena for a congressional staffer’s personal email account. Both companies were under DOJ gag orders preventing them from notifying the affected users for years.

These instances are part of a growing list of questionable shit the DOJ carried out under former President Donald Trump amid his crusade to crack down on government leakers. The agency also quietly went after phone and email records of journalists at the Washington Post, CNN, and the New York Times to uncover their sources, none of whom were notified until last month.

On Thursday, a New York Times report revealed that a Trump-led DOJ seized records from two Democrats on the House Intelligence Committee who were frequently targeted in the president’s tantrums: California Representatives Eric Swalwell and Adam Schiff (Schiff now chairs the committee). The subpoena extended to at least a dozen people connected to them, including aides, family members, and one minor, in an attempt to identify sources related to news reports on Trump’s contacts with Russia. All told, prosecutors found zero evidence in this seized data, but their efforts have prompted the Justice Department’s inspector general to launch an inquiry into the agency’s handling of leak investigations during the Trump administration.

[…]

Source: Apple and Microsoft Say They Had No Idea Trump-Era DOJ Requested Data on Political Rivals

European Commission Betrays Internet Users By Cravenly Introducing Huge Loophole For Copyright Companies In Upload Filter Guidance

As a recent Techdirt article noted, the European Commission was obliged to issue “guidance” on how to implement the infamous Article 17 upload filters required by the EU’s Copyright Directive. It delayed doing so, evidently hoping that the adviser to the EU’s top court, the Court of Justice of the European Union (CJEU), would release his opinion on Poland’s attempt to get Article 17 struck down before the European Commission revealed its one-sided advice. That little gambit failed when the Advocate General announced that he would publish his opinion after the deadline for the release of the guidance.

The European Commission has finally provided its advisory document on Article 17 and, as expected, it contains a real stinker of an idea. The best analysis of what the Commission has done, and why it is so disgraceful, comes from Julia Reda and Paul Keller on the Kluwer Copyright Blog.

Although Article 17 effectively made upload filters mandatory, it also included some (weak) protections for users, to allow people to upload copyright material for legal uses such as memes, parody and criticism without being blocked. The copyright industry naturally hates any protections for users, and it has persuaded the European Commission to eviscerate them:

According to the final guidance, rightholders can easily circumvent the principle that automatic blocking should be limited to manifestly infringing uses by “earmarking” content the “unauthorised online availability of which could cause significant economic harm to them” when requesting the blocking of those works. Uploads that include protected content thus “earmarked” do not benefit from the ex-ante protections for likely legitimate uses. The guidance does not establish any qualitative or quantitative requirements for rightholders to earmark their content. The mechanism is not limited to specific types of works, categories of rightholders, release windows, or any other objective criteria that could limit the application of this loophole.

The requirements that copyright companies must meet are so weak that it is probably inevitable that they will claim most uploads “could cause significant economic harm”, and should therefore be earmarked. Here’s what happens then: before it can be posted online, every earmarked upload requires a “rapid” human review of whether it is infringing or not. Leaving aside the fact that it is very hard for legal judgements to be both “rapid” and correct, there’s also the problem that copyright companies will earmark millions of uploads (just look at DMCA notices), making it infeasible to carry out proper review. But the European Commission also says that if online platforms fail to carry out a human review of everything that is earmarked, and allow some unchecked items to be posted, they will lose their liability protection:

this means that service providers face the risk of losing the liability protections afforded to them by art. 17(4) unless they apply ex-ante human review to all uploads earmarked by rightholders as merely having the potential to “cause significant economic harm”. This imposes a heavy burden on platform operators. Under these conditions rational service providers will have to revert to automatically blocking all uploads containing earmarked content at upload. The scenario described in the guidance is therefore identical to an implementation without safeguards: Platforms have no other choice but to block every upload that contains parts of a work that rightholders have told them is highly valuable.

Thus the already unsatisfactory user rights contained in Article 17 are rendered null and void because of the impossibility of following the European Commission’s new guidance. That’s evidently the result of recent lobbying from the copyright companies, since none of this was present in previous drafts of the guidance. Not content with making obligatory the upload filters that they swore would not be required, copyright maximalists now want to take away what few protections remain for users, thus ensuring that practically all legal uses of copyright material — including memes — are likely to be automatically blocked.
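To make the mechanics concrete, here is one reading of the flow the guidance prescribes, as Reda and Keller describe it. This is a sketch in Go with entirely hypothetical type and function names – nothing below comes from the guidance text itself:

package main

import "fmt"

// All types and names below are invented purely to illustrate the
// decision flow described above.
type Decision string

const (
	Publish     Decision = "publish"
	Block       Decision = "block"
	HumanReview Decision = "queue for rapid human review"
)

type Upload struct {
	MatchesClaimedWork   bool // fingerprint matched a rightsholder's claimed work
	ManifestlyInfringing bool // e.g. a full verbatim copy
	Earmarked            bool // flagged as risking "significant economic harm"
}

func decide(u Upload) Decision {
	if !u.MatchesClaimedWork {
		return Publish
	}
	if u.ManifestlyInfringing {
		return Block // ex-ante blocking was always permitted here
	}
	if u.Earmarked {
		// Earmarking strips the ex-ante protection for likely legitimate
		// uses: the platform must review before publication or risk its
		// liability shield. At the volumes rightsholders can earmark,
		// this branch collapses into Block in practice.
		return HumanReview
	}
	// Likely legitimate use (meme, parody, quotation): stays online.
	return Publish
}

func main() {
	meme := Upload{MatchesClaimedWork: true, Earmarked: true}
	fmt.Println(decide(meme)) // "queue for rapid human review", i.e. blocked at scale
}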

The Kluwer Copyright Blog post points out that this approach was not at all necessary. As Techdirt reported a couple of weeks ago, Germany has managed to come up with an implementation of Article 17 that preserves most user rights, even if it is by no means perfect. The European Commission, by contrast, has cravenly given the copyright industry what it demanded, effectively stripping out those rights. But this cowardly move may backfire. Reda and Keller explain:

the Commission does not provide any justification or rationale why users’ fundamental rights do not apply in situations where rightholders claim that there is the potential for them to suffer significant economic harm. It’s hard to imagine that the CJEU will consider that the version of the guidance published today provides meaningful protection for users’ rights when it has to determine the compliance of the directive with fundamental rights [in the case brought by Poland]. The Commission appears to be acutely aware of this as well and so it has wisely included the following disclaimer in the introductory section of the guidance (emphasis ours):

“The judgment of the Court of Justice of the European Union in the case C-401/19 will have implications for the implementation by the Member States of Article 17 and for the guidance. The guidance may need to be reviewed following that judgment.”

In the end this may turn out to be the most meaningful sentence in the entire guidance.

If this final overreach causes upload filters to be thrown out completely, it would be a fitting punishment for betraying the 450 million citizens the European Commission is supposed to serve – but rarely does.

Source: European Commission Betrays Internet Users By Cravenly Introducing Huge Loophole For Copyright Companies In Upload Filter Guidance | Techdirt

Google to adapt its ad technology after France hands it a $267 million fine

Google has agreed to pay a €220 million ($267 million) fine and change its ad practices after France’s competition authority found it had abused its dominant position in online advertising. Following a 2019 complaint by News Corp. and French newspaper Le Figaro, the authority ruled that Google was favoring its own advertising services to the detriment of rivals.

[…]

In a blog post, Google explained how it planned to change its ad rules, offering publishers “increased flexibility” by improving interoperability between its ad manager and third-party ad servers. “Also, we are reaffirming that we will not limit Ad Manager publishers from negotiating specific terms or pricing directly with other sell-side platforms.”

Google’s ad division has faced scrutiny from French regulators in the past. In 2019, the watchdog fined Google €150 million ($167 million) for opaque and unpredictable advertising rules after it suspended the Google Ads account of a French company without notice. Google has also clashed with regulators and publishers in the nation over the use of snippets of content in its news section.

Source: Google to adapt its ad technology after France hands it a $267 million fine | Engadget

Apple’s tightly controlled App Store is teeming with scams

Apple chief executive Tim Cook has long argued that Apple needs to control app distribution on iPhones; otherwise, the App Store would turn into “a flea market.”

But among the 1.8 million apps on the App Store, scams are hiding in plain sight. Customers of several VPN apps, which allegedly protect users’ data, complained in App Store reviews that the apps told users their devices had been infected by a virus to dupe them into downloading and paying for software they don’t need. A QR code reader app that remains on the store tricks customers into paying $4.99 a week for a service that is now included in the iPhone’s camera app. Some apps fraudulently present themselves as being from major brands such as Amazon and Samsung.

Of the 1,000 highest-grossing apps on the App Store, nearly two percent are scams, according to an analysis by The Washington Post. And those apps have bilked consumers out of an estimated $48 million during the time they’ve been on the App Store, according to market research firm Appfigures. The scale of the problem has never before been reported. What’s more, Apple profits from these apps because it takes a cut of up to 30 percent of all revenue generated through the App Store. Even more common, according to The Post’s analysis, are “fleeceware” apps that use inauthentic customer reviews to move up in the App Store rankings and gain a sense of legitimacy, convincing customers to pay higher prices for a service usually offered elsewhere by apps with more legitimate customer reviews.
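For scale, using nothing beyond the figures above: a full 30 percent commission on that estimated $48 million would put Apple’s own take at up to roughly $14.4 million.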

Two-thirds of the 18 apps The Post flagged to Apple were removed from the App Store.

[…]

Apple has long maintained that its exclusive control of the App Store is essential to protecting customers, and that it only lets the best apps onto its system. But Apple’s monopoly over how consumers access apps on iPhones can actually create an environment that gives customers a false sense of safety, according to experts. Because Apple doesn’t face any major competition and so many consumers are locked into using the App Store on iPhones, there’s little incentive for Apple to spend money on improving it, experts say.

[…]

Apple unwittingly may be aiding the most sophisticated scammers by eliminating so many of the less competent ones during its app review process, said Miles, who co-authored a paper called “The Economics of Scams.”

[…]

Apple has argued that it is the only company with the resources and know-how to police the App Store. In the trial that Epic Games, the maker of the popular video game “Fortnite,” brought against Apple last month for alleged abuse of its monopoly power, Apple’s central defense was that competition would loosen protections against unwanted apps that pose security risks to customers. The federal judge in the case said she may issue a verdict by August.

The prevalence of scams on Apple’s App Store played a key role at trial. Apple’s lawyers were so focused on the company’s role in making the App Store safe that Epic’s attorneys accused them of trying to scare the court into ruling in Apple’s favor. In internal emails unearthed during the trial, dating as far back as 2013, Apple’s Phil Schiller, who runs the App Store, expressed dismay when fraudulent apps made it past App Store review.

After a rip-off version of the Temple Run video game became the top-rated app, according to Schiller’s email exchange, he sent an irate message to two other Apple executives responsible for the store. “Remember our talking about finding bad apps with low ratings? Remember our talk about becoming the ‘Nordstroms’ of stores in quality of service? How does an obvious rip off of the super popular Temple Run, with no screenshots, garbage marketing text, and almost all 1-star ratings become the #1 free app on the store?” Schiller asked his team. “Is no one reviewing these apps? Is no one minding the store?”

Apple declined to make Schiller available to comment. At trial, Schiller defended the safety of the App Store on the stand, calling the app review process “the best way we could come up with … to make it safe and fair.”

Eric Friedman, head of Apple’s Fraud Engineering Algorithms and Risk unit, or FEAR, said that Apple’s screening process is “more like the pretty lady who greets you with a lei at the Hawaiian airport than the drug sniffing dog,” according to a 2016 internal email uncovered during the Epic Games trial. Apple employs a 500-person App Review team, which sifts through submissions from developers. “App Review is bringing a plastic butter knife to a gun fight,” Friedman wrote in another email.

[…]

Though the App Store ratings section is filled with customer complaints referring to apps as scams, there is no way for customers to report this to Apple other than reaching out to a regular customer service representative. Apple used to have a button, just under the ratings and reviews section in the App Store, that said “report a problem” and allowed users to report inappropriate apps. Based on discussions among Apple customers on Apple’s own website, the feature was removed sometime around 2016.

[…]


Source: Apple’s tightly controlled App Store is teeming with scams – Anchorage Daily News