Access To Big Data Turns Farm Machine Makers Into Tech Firms

The combine harvester, a staple of farmers’ fields since the late 1800s, does much more these days than just vacuum up corn, soybeans and other crops. It also beams back reams of data to its manufacturer.

GPS records the combine’s precise path through the field. Sensors tally the number of crops gathered per acre and the spacing between them. On a sister machine called a planter, algorithms adjust the distribution of seeds based on which parts of the soil have in past years performed best. Another machine, a sprayer, uses algorithms to scan for weeds and zap them with pesticides. All the while sensors record the wear and tear on the machines, so that when the farmer who operates them heads to the local distributor to look for a replacement part, it has already been ordered and is waiting for them.
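
The planter’s prescription logic described above is, at its core, a mapping from historical yield in each part of the field to a seeding density. Below is a minimal, purely illustrative sketch of that kind of variable-rate rule; the zones, yields, thresholds and rates are invented for the example and are not Deere’s or AGCO’s actual algorithms.

```python
# Illustrative sketch of a toy variable-rate seeding rule of the kind described
# above. Zones, yield history, thresholds and rates are hypothetical; real
# planters rely on proprietary prescription maps and agronomic models.

BASE_RATE = 34000  # seeds per acre, a common corn baseline


def seed_rate(avg_yield_bu_per_acre: float) -> int:
    """Plant denser on historically productive ground, thinner on poor ground."""
    if avg_yield_bu_per_acre >= 200:
        return int(BASE_RATE * 1.10)
    if avg_yield_bu_per_acre <= 150:
        return int(BASE_RATE * 0.90)
    return BASE_RATE


# Hypothetical field zones with their average yields from past seasons.
yield_history = {"zone_a": 210.0, "zone_b": 160.0, "zone_c": 140.0}

prescription = {zone: seed_rate(y) for zone, y in yield_history.items()}
print(prescription)  # {'zone_a': 37400, 'zone_b': 34000, 'zone_c': 30600}
```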

Farming may be an earthy industry, but much of it now takes place in the cloud. Leading farm machine makers like Moline, Illinois-based John Deere & Co. or Duluth, Georgia-based AGCO collect data from all around the world thanks to the ability of their bulky machines to extract a huge variety of metrics from farmers’ fields and store them online. The farmers who sit in the driver’s seats of these machines have access to the data they themselves accumulate, but legal murk obscures whether they actually own that data, and only the machine manufacturer can see all the data from all the machines it has leased or sold.

[…]

Still, farmers have yet to be fully won over. Many worry that data they allow to flow to manufacturers will inadvertently wind up in the hands of neighboring farmers with whom they compete for scarce land, who could then mine their closely guarded information about the number of acres they plow or the types of fertilizers and pesticides they use, gaining a competitive edge. Others fear that information about the type of seeds or fertilizer they use will wind up in the hands of the chemicals companies they buy from, allowing those companies to anticipate their product needs and charge them more, said Jonathan Coppess, a professor at the University of Illinois.

Sensitive to the suggestion that they are infringing on privacy, the largest equipment makers say they don’t share farmers’ data with third parties unless farmers give permission. (Farmers frequently agree to share data with, for example, their local distributors and dealers.)

It’s common to hear that farmers are, by nature, highly protective of their land and business, and that this predisposes them to worry about sharing data even when there are more potential benefits than drawbacks. Still, the concerns are at least partly the result of a lack of legal and regulatory standards around the collection of data from smart farming technologies, observers say. Contracts to buy or rent big machines run to many pages and the language is unclear, especially since some of the underlying legal concepts regarding the sharing and collecting of agricultural data are still evolving.

As one 2019 paper puts it, “the lack of transparency and clarity around issues such as data ownership, portability, privacy, trust and liability in the commercial relationships governing smart farming are contributing to farmers’ reluctance to engage in the widespread sharing of their farm data that smart farming facilitates. At the heart of the concerns is the lack of trust between the farmers as data contributors, and those third parties who collect, aggregate and share their data.”

[…]

Some farmers may still find themselves surprised to discover the amount of access Deere and others have to their data. Jacob Maurer is an agronomist with RDO Equipment Co., a Deere dealer, who helps farmers understand how to use their data to work their fields more efficiently. He explained that some farmers would be shocked to learn how much information about their fields he can access by simply tapping into Deere’s vast online stores of data and pulling up their details.

[…]

Equipment makers with sufficient sales of machines around the country, and mountains of data flowing into their databases, may in theory be able to predict, at least to some small but meaningful extent, the prices of various crops by analyzing the data their machines send in, such as crop yields per acre, the amount of fertilizer used, or the average number of seeds of a given crop planted in various regions, all of which would help to anticipate the supply of crops come harvest season.

Were the company then to sell that data to a commodities trader, say, it could likely reap a windfall. Normally, the markets must wait for highly-anticipated government surveys to run their course before having an indication of the future supply of crops. The agronomic data that machine makers collect could offer similar insights but far sooner.

Machine makers don’t deny the obvious value of the data they collect. As AGCO’s Crawford put it: “Anybody that trades grains would love to have their hands on this data.”

Experts occasionally wonder about what companies could do with the data. Mary Kay Thatcher, a former official with the American Farm Bureau, raised just such a concern in an interview with National Public Radio in 2014, when questions about data ownership were swirling after Monsanto began deploying a new “precision planting” tool that required it to have gobs of data.

“They could actually manipulate the market with it. You know, they only have to know the information about what’s actually happening with harvest minutes before somebody else knows it,” Thatcher said in the interview.

“Not saying they will. Just a concern.”

Source: Access To Big Data Turns Farm Machine Makers Into Tech Firms

Apple Told This Developer That His App ‘Promoted’ Drugs – after 6 years in the store

In Apple’s world, an app can be inappropriate one day, but acceptable the next. That’s what the developer of Amphetamine—an app designed to keep Macs from going to sleep, which is useful in situations such as when a file is downloading or when a specific app is running—learned recently when Apple got in touch with him and told him that his app violated the company’s App Store guidelines.

Amphetamine developer William Gustafson published an account of the incident and his experience with Apple’s App Store review team on GitHub on Friday. In the post, Gustafson explained that Apple contacted him on Dec. 29 and told him that Amphetamine, which has been on the Mac App Store for six years, had suddenly begun violating one of the company’s App Store guidelines. Specifically, Gustafson said that Apple claimed that Amphetamine appeared to promote the inappropriate use of controlled substances given its very name (amphetamine is a controlled substance, albeit one with legitimate medical uses such as treating ADHD) and because its icon includes a pill.

[…]

“As we discussed, we found that your app includes content that some users may find upsetting, offensive, or otherwise objectionable,” an Apple representative told Gustafson on Dec. 29 according to a screenshot shared with Gizmodo. “Specifically, your app name and icon include references to controlled substances, pills.”

The representative then brought up App Store Guideline 1.4.3, which pertains to safety and physical harm. The guideline reads as follows:

“Apps that encourage consumption of tobacco and vape products, illegal drugs, or excessive amounts of alcohol are not permitted on the App Store. Apps that encourage minors to consume any of these substances will be rejected. Facilitating the sale of marijuana, tobacco, or controlled substances (except for licensed pharmacies) isn’t allowed.”

To resolve the issue, the Apple representative said that Gustafson had to remove all content that encourages inappropriate consumption of drugs or alcohol. Gustafson explained in his GitHub post that Apple had threatened to remove Amphetamine from the Mac App Store on Jan. 12 if he did not comply with its request for changes.

If this is all sounding a bit wild to you, that’s because it is. Although Amphetamine uses its name and branding to lightheartedly convey the fact that the app will prevent your Mac from going to sleep, it does not do anything that violates Guideline 1.4.3.

Source: Apple Told This Developer That His App ‘Promoted’ Drugs

China’s Secret War for U.S. Data Blew American Spies’ Cover

Around 2013, U.S. intelligence began noticing an alarming pattern: Undercover CIA personnel, flying into countries in Africa and Europe for sensitive work, were being rapidly and successfully identified by Chinese intelligence, according to three former U.S. officials. The surveillance by Chinese operatives began in some cases as soon as the CIA officers had cleared passport control. Sometimes, the surveillance was so overt that U.S. intelligence officials speculated that the Chinese wanted the U.S. side to know they had identified the CIA operatives, disrupting their missions; other times, however, it was much more subtle and only detected through U.S. spy agencies’ own sophisticated technical countersurveillance capabilities.

[…]

CIA officials believed the answer was likely data-driven—and related to a Chinese cyberespionage campaign devoted to stealing vast troves of sensitive personal information, like travel and health data, as well as U.S. government personnel records. U.S. officials believed Chinese intelligence operatives had likely combed through and synthesized information from these massive, stolen caches to identify the undercover U.S. intelligence officials. It was very likely a “suave and professional utilization” of these datasets, said the same former intelligence official. This “was not random or generic,” this source said. “It’s a big-data problem.”

[…]

In 2010, a new decade was dawning, and Chinese officials were furious. The CIA, they had discovered, had systematically penetrated their government over the course of years, with U.S. assets embedded in the military, the CCP, the intelligence apparatus, and elsewhere. The anger radiated upward to “the highest levels of the Chinese government,” recalled a former senior counterintelligence executive.

Exploiting a flaw in the online system CIA operatives used to secretly communicate with their agents—a flaw first identified in Iran, which Tehran likely shared with Beijing—from 2010 to roughly 2012, Chinese intelligence officials ruthlessly uprooted the CIA’s human source network in China, imprisoning and killing dozens of people.

[…]

The anger in Beijing wasn’t just because of the penetration by the CIA but because of what it exposed about the degree of corruption in China. When the CIA recruits an asset, the further this asset rises within a country’s power structure, the better. During the Cold War it had been hard to guarantee the rise of the CIA’s Soviet agents; the very factors that made them vulnerable to recruitment—greed, ideology, blackmailable habits, and ego—often impeded their career prospects. And there was only so much that money could buy in the Soviet Union, especially with no sign of where it had come from.

But in the newly rich China of the 2000s, dirty money was flowing freely. The average income remained under 2,000 yuan a month (approximately $240 at contemporary exchange rates), but officials’ informal earnings vastly exceeded their formal salaries. An official who wasn’t participating in corruption was deemed a fool or a risk by his colleagues. Cash could buy anything, including careers, and the CIA had plenty of it.

[…]

Over the course of their investigation into the CIA’s China-based agent network, Chinese officials learned that the agency was secretly paying the “promotion fees” —in other words, the bribes—regularly required to rise up within the Chinese bureaucracy, according to four current and former officials. It was how the CIA got “disaffected people up in the ranks. But this was not done once, and wasn’t done just in the [Chinese military],” recalled a current Capitol Hill staffer. “Paying their bribes was an example of long-term thinking that was extraordinary for us,” said a former senior counterintelligence official. “Recruiting foreign military officers is nearly impossible. It was a way to exploit the corruption to our advantage.” At the time, “promotion fees” sometimes ran into the millions of dollars, according to a former senior CIA official: “It was quite amazing the level of corruption that was going on.” The compensation sometimes included paying tuition and board for children studying at expensive foreign universities, according to another CIA officer.

[…]

This was a global problem for the CCP. Corrupt officials, even if they hadn’t been recruited by the CIA while in office, also often sought refuge overseas—where they could then be tapped for information by enterprising spy services. In late 2012, party head Xi Jinping announced a new anti-corruption campaign that would lead to the prosecution of hundreds of thousands of Chinese officials. Thousands were subject to extreme coercive pressure, bordering on kidnapping, to return from living abroad. “The anti-corruption drive was about consolidating power—but also about how Americans could take advantage of [the corruption]. And that had to do with the bribe and promotion process,” said the former senior counterintelligence official.

The 2013 leaks from Edward Snowden, which revealed the NSA’s deep penetration of the telecommunications company Huawei’s China-based servers, also jarred Chinese officials, according to a former senior intelligence analyst.

[…]

By about 2010, two former CIA officials recalled, the Chinese security services had instituted a sophisticated travel intelligence program, developing databases that tracked flights and passenger lists for espionage purposes. “We looked at it very carefully,” said the former senior CIA official. China’s spies “were actively using that for counterintelligence and offensive intelligence. The capability was there and was being utilized.” China had also stepped up its hacking efforts targeting biometric and passenger data from transit hubs, former intelligence officials say—including a successful hack by Chinese intelligence of biometric data from Bangkok’s international airport.

To be sure, China had stolen plenty of data before discovering how deeply infiltrated it was by U.S. intelligence agencies. However, the shake-up between 2010 and 2012 gave Beijing an impetus not only to go after bigger, riskier targets, but also to put together the infrastructure needed to process the purloined information. It was around this time, said a former senior NSA official, that Chinese intelligence agencies transitioned from merely being able to steal large datasets en masse to actually rapidly sifting through information from within them for use. U.S. officials also began to observe that intelligence facilities within China were being physically co-located near language and data processing centers, said this person.

For U.S. intelligence personnel, these new capabilities made China’s successful hack of the U.S. Office of Personnel Management (OPM) that much more chilling. During the OPM breach, Chinese hackers stole detailed, often highly sensitive personnel data from 21.5 million current and former U.S. officials, their spouses, and job applicants, including health, residency, employment, fingerprint, and financial data. In some cases, details from background investigations tied to the granting of security clearances—investigations that can delve deeply into individuals’ mental health records, their sexual histories and proclivities, and whether a person’s relatives abroad may be subject to government blackmail—were stolen as well. Though the United States did not disclose the breach until 2015, U.S. intelligence officials became aware of the initial OPM hack in 2012, said the former counterintelligence executive. (It’s not clear precisely when the compromise actually happened.)

[…]

For some at the CIA, recalled Gail Helt, a former CIA China analyst, the reaction to the OPM breach was, “Oh my God, what is this going to mean for everybody who had ever traveled to China? But also what is it going to mean for people who we had formally recruited, people who might be suspected of talking to us, people who had family members there? And what will this mean for agency efforts to recruit people in the future? It was terrifying. Absolutely terrifying.” Many feared the aftershocks would be widespread. “The concern just wasn’t that [the OPM hack] would curtail info inside China,” said a former senior national security official. “The U.S. and China bump up against each other around the world. It opened up a global Pandora’s box of problems.”

[…]

During this same period, U.S. officials concluded that Russian intelligence officials, likely exploiting a difference in payroll payments between real State Department employees and undercover CIA officers, had identified some of the CIA personnel working at the U.S. Embassy in Moscow. Officials thought that this insight may have come from data derived from the OPM hack, provided by the Chinese to their Russian counterparts. U.S. officials also wondered whether the OPM hack could be related to an uptick in attempted recruitments by Chinese intelligence of Chinese American translators working for U.S. intelligence agencies when they visited family in China. “We also thought they were trying to get Mandarin speakers to apply for jobs as translators” within the U.S. intelligence community, recalled the former senior counterintelligence official. U.S. officials believed that Chinese intelligence was giving their agents “instructions on how to pass a polygraph.”

But after the OPM breach, anomalies began to multiply. In 2012, senior U.S. spy hunters began to puzzle over some “head-scratchers”: In a few cases, spouses of U.S. officials whose sensitive work should have been difficult to discern were being approached by Chinese and Russian intelligence operatives abroad, according to the former counterintelligence executive. In one case, Chinese operatives tried to harass and entrap a U.S. official’s wife while she accompanied her children on a school field trip to China. “The MO is that, usually at the end of the trip, the lightbulb goes on [and the foreign intelligence service identifies potential persons of interest]. But these were from day one, from the airport onward,” the former official said.

[…]

Source: China’s Secret War for U.S. Data Blew American Spies’ Cover

Firefox to ship ‘network partitioning’ as a new anti-tracking defense

Firefox 85, scheduled to be released next month, in January 2021, will ship with a feature named Network Partitioning as a new form of anti-tracking protection.

The feature is based on “Client-Side Storage Partitioning,” a new standard currently being developed by the World Wide Web Consortium’s Privacy Community Group.

“Network Partitioning is highly technical, but to simplify it somewhat: your browser has many ways it can save data from websites, not just via cookies,” privacy researcher Zach Edwards told ZDNet in an interview this week.

“These other storage mechanisms include the HTTP cache, image cache, favicon cache, font cache, CORS-preflight cache, and a variety of other caches and storage mechanisms that can be used to track people across websites.”

Edwards says all these data storage systems are shared among websites.

The difference is that Network Partitioning will allow Firefox to save resources like the cache, favicons, CSS files, images, and more, on a per-website basis, rather than together, in the same pool.

This makes it harder for websites and third-parties like ad and web analytics companies to track users since they can’t probe for the presence of other sites’ data in this shared pool.
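
In concept, partitioning simply means the cache is keyed by the top-level site in addition to the resource URL, so a third party embedded on two different sites gets two separate cache entries and can no longer use cache hits to link visits. A rough sketch of the idea (an illustration only, not Firefox’s actual implementation):

```python
# Rough conceptual sketch of a partitioned cache: entries are keyed by the
# top-level site as well as the resource URL, so an embedded third party cannot
# detect whether another site has already cached the same resource.
# Illustration only; this is not Firefox's actual code.

class PartitionedCache:
    def __init__(self):
        self._store = {}

    def put(self, top_level_site: str, resource_url: str, body: bytes) -> None:
        self._store[(top_level_site, resource_url)] = body

    def get(self, top_level_site: str, resource_url: str):
        return self._store.get((top_level_site, resource_url))


cache = PartitionedCache()
cache.put("news.example", "https://tracker.example/pixel.png", b"...")

# The same tracker resource requested from a different top-level site misses
# the cache, so cache timing can no longer link the two visits.
assert cache.get("shop.example", "https://tracker.example/pixel.png") is None
```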

According to Mozilla, the following network resources will be partitioned starting with Firefox 85:

  • HTTP cache
  • Image cache
  • Favicon cache
  • Connection pooling
  • StyleSheet cache
  • DNS
  • HTTP authentication
  • Alt-Svc
  • Speculative connections
  • Font cache
  • HSTS
  • OCSP
  • Intermediate CA cache
  • TLS client certificates
  • TLS session identifiers
  • Prefetch
  • Preconnect
  • CORS-preflight cache

But while Mozilla will be deploying the broadest user data “partitioning system” to date, the Firefox creator isn’t the first.

Edwards said the first browser maker to do so was Apple, in 2013, when it began partitioning the HTTP cache, and then followed through by partitioning even more user data storage systems years later, as part of its Intelligent Tracking Prevention feature.

Google also partitioned the HTTP cache last month, with the release of Chrome 86, and the effects were felt right away: Google Fonts lost some of its performance advantage because fonts could no longer be reused from a shared HTTP cache.

The Mozilla team expects similar performance issues for sites loaded in Firefox, but it’s willing to take the hit just to improve the privacy of its users.

“Most policy makers and digital strategists are focused on the death of the 3rd party cookie, but there are a wide variety of other fingerprinting techniques and user tracking strategies that need to be broken by browsers,” Edwards also told ZDNet, lauding Mozilla’s move.

PS: Mozilla also said that a side-effect of deploying Network Partitioning is that Firefox 85 will finally be able to block “supercookies” better, a class of tracking identifiers that abuse various shared storage mechanisms to persist in the browser and allow advertisers to track user movements across the web.

Source: Firefox to ship ‘network partitioning’ as a new anti-tracking defense | ZDNet

French Film Company Somehow Trademarks ‘Planet’, Goes After Environmental NGOs For Using The Word

We cover a great many ridiculous and infuriating trademark disputes here, but it’s always the disputes around overly broad terms that never should have been trademarked to begin with that are the most frustrating. And the most irritating of those is when we get into geographic terms that never should be locked up by any single company or entity. Examples in the past have included companies fighting over who gets to use the name of their home city of “Detroit”, or when grocer Iceland Foods got so aggressive in its own trademark enforcement that the — checks notes — nation of Iceland had to seek to revoke the company’s EU trademark registration.

While it should be self-evident how antithetical approving these kinds of marks is to the purpose of trademark law, I will say that I didn’t see it coming that a company would at some point attempt to play trademark bully over the “planet.”

Powerful French entertainment company Canal Plus trademarked the term in France, but environmental groups are pushing back, saying they should be allowed to use the word “planet” to promote their projects to save it. Multiple cases are under examination by France’s intellectual property regulator INPI, including one coming to a head this week.

Canal Plus argues that the groups’ use of the terms “planete” in French, or “planet” in English, for marketing purposes violates its trademarks, registered to protect its Planete TV channels that showcase nature documentaries.

That this dispute is even a thing raises questions. Why in the world (heh) would any trademark office approve a mark solely on the word “planet”? Such a registration violates all kinds of rules and norms, explicit and otherwise. Geographic terms are supposed to face a high bar for trademark approval. Single-word marks that aren’t inherently creative typically do as well. And, when trademarks for either are approved, they are typically approved in very narrow terms. That EUIPO somehow managed to approve a trademark that caused a film company to think it can sue or bully NGOs focused on environmental issues for using the word for the rock we all live on together should ring as absurd to anyone who finds out about it.

Certainly it did to those on the other end of Canal Plus’ bullying, as they seemed to think the whole thing was either a joke or an attempt at fraud.

The head of environmental group Planete Amazone, Gert-Peter Bruch, thought it was a hoax when he first received a letter from Canal Plus claiming ownership of the planet brand.

[…]

But, as we often note, trademark bullying tends to work. Bruch’s organization is hammering out a deal with Canal Plus in an effort to keep using the word “planet.” That shouldn’t have to happen, but it is. Other groups are waiting on a ruling from the French National Intellectual Property Institute in the hopes that someone somewhere will be sane about all of this.

Source: French Film Company Somehow Trademarks ‘Planet’, Goes After Environmental NGOs For Using The Word | Techdirt

Should We Use Search History for Credit Scores? IMF Says Yes

With more services than ever collecting your data, it’s easy to start asking why anyone should care about most of it. This is why. Because people start having ideas like this.

In a new blog post for the International Monetary Fund, four researchers presented their findings from a working paper that examines the current relationship between finance and tech as well as its potential future. Gazing into their crystal ball, the researchers see the possibility of using the data from your browsing, search, and purchase history to create a more accurate mechanism for determining the credit rating of an individual or business. They believe that this approach could result in greater lending to borrowers who would potentially be denied by traditional financial institutions.

At its heart, the paper is trying to wrestle with the dawning notion that the institutional banking system is facing a serious threat from tech companies like Google, Facebook, and Apple. The researchers identify two key areas in which this is true: Tech companies have greater access to soft information, and messaging platforms can take the place of the physical locations that banks rely on for meeting with customers.

[…]

But how would all this data be incorporated into credit ratings? Machine learning, of course. It’s black boxes all the way down.
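
Neither the blog post nor the working paper spells out a model, but the general shape is ordinary supervised learning: behavioral features in, an estimated default risk out. The sketch below is deliberately toy; the features, numbers, and labels are invented purely to illustrate the mechanics.

```python
# Deliberately toy illustration of "soft data" feeding a credit-risk model.
# Features and labels are invented; the IMF paper specifies no concrete model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per borrower:
# [share of browsing done late at night, price-comparison searches per week,
#  gambling-site visits per month]; label 1 = borrower later defaulted.
X = np.array([
    [0.10, 5, 0],
    [0.40, 1, 6],
    [0.05, 8, 0],
    [0.55, 0, 9],
])
y = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Estimated default probability for a new applicant's browsing profile.
print(model.predict_proba([[0.30, 2, 3]])[:, 1])
```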

The researchers acknowledge that there will be privacy and policy concerns related to incorporating this kind of soft-data into credit analysis. And they do little to explain how this might work in practice. The paper isn’t long, and it’s worth a read just to wrap your mind around some of the notions of fintech’s future and why everyone seems to want in on the payments game.

As it is, getting the really fine soft-data points would probably require companies like Facebook and Apple to loosen up their standards on linking unencrypted information with individual accounts. How they might share information with other institutions would be its own can of worms.

[…]

Yes, the idea of every move you make online feeding into your credit score is creepy. It may not even be possible in the near future. The IMF researchers stress that “governments should follow and carefully support the technological transition in finance. It is important to adjust policies accordingly and stay ahead of the curve.” When’s the last time a government did any of that?

Source: Should We Use Search History for Credit Scores? IMF Says Yes

Secret Agents Implicated In The Poisoning Of Opposition Leader Alexey Navalny Identified Thanks To Russia’s Black Market In Everybody’s Personal Data

Back in August, the Russian opposition leader Alexey Navalny was poisoned on a flight to Moscow. Despite initial doubts — and the usual denials by the Russian government that Vladimir Putin was involved — everyone assumed it had been carried out by the country’s FSB, successor to the KGB. Remarkable work by the open source intelligence site Bellingcat, which Techdirt first wrote about in 2014, has now established beyond reasonable doubt that FSB agents were involved:

A joint investigation between Bellingcat and The Insider, in cooperation with Der Spiegel and CNN, has discovered voluminous telecom and travel data that implicates Russia’s Federal Security Service (FSB) in the poisoning of the prominent Russian opposition politician Alexey Navalny. Moreover, the August 2020 poisoning in the Siberian city of Tomsk appears to have happened after years of surveillance, which began in 2017 shortly after Navalny first announced his intention to run for president of Russia.

That’s hardly a surprise. Perhaps more interesting for Techdirt readers is the story of how Bellingcat pieced together the evidence implicating Russian agents. The starting point was finding passengers who booked similar flights to those that Navalny took as he moved around Russia, usually earlier ones to ensure they arrived in time but without making their shadowing too obvious. Once Bellingcat had found some names that kept cropping up too often to be a coincidence, the researchers were able to draw on a unique feature of the Russian online world:

Due to porous data protection measures in Russia, it only takes some creative Googling (or Yandexing) and a few hundred euros worth of cryptocurrency to be fed through an automated payment platform, not much different than Amazon or Lexis Nexis, to acquire telephone records with geolocation data, passenger manifests, and residential data. For the records contained within multi-gigabyte database files that are not already floating around the internet via torrent networks, there is a thriving black market to buy and sell data. The humans who manually fetch this data are often low-level employees at banks, telephone companies, and police departments. Often, these data merchants providing data to resellers or direct to customers are caught and face criminal charges. For other batches of records, there are automated services either within websites or through bots on the Telegram messaging service that entirely circumvent the necessity of a human conduit to provide sensitive personal data.
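
The cross-referencing Bellingcat started from, spotting names that recur across Navalny’s flights too often to be coincidence, boils down to a simple co-occurrence count over passenger manifests. A minimal sketch of that idea, using invented names and flights (the real work involved far larger and messier datasets):

```python
# Minimal sketch of manifest cross-referencing: count how often each passenger
# name appears across the flights the target took (or parallel flights).
# All names and flights are invented for illustration.
from collections import Counter

manifests = {
    "flight_1": {"A. Petrov", "B. Sidorov", "C. Ivanova"},
    "flight_2": {"A. Petrov", "D. Smirnov"},
    "flight_3": {"A. Petrov", "B. Sidorov", "E. Kuznetsova"},
}

counts = Counter(name for passengers in manifests.values() for name in passengers)

# Names present on nearly every one of the target's flights are unlikely to be
# coincidence and become candidates for a closer look.
suspects = sorted(name for name, n in counts.items() if n >= len(manifests) - 1)
print(suspects)  # ['A. Petrov', 'B. Sidorov']
```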

The process of using these leaked resources to establish the other agents involved in the surveillance and poisoning of Navalny, and their real identities, since they naturally used false names when booking planes and cars, is discussed in fascinating detail on the Bellingcat site. But the larger point here is that strong privacy protections are good not just for citizens, but for governments too. As the Bellingcat researchers put it:

While there are obvious and terrifying privacy implications from this data market, it is clear how this environment of petty corruption and loose government enforcement can be turned against Russia’s security service officers.

As well as providing Navalny with confirmation that the Russian government at the highest levels was probably behind his near-fatal poisoning, this latest Bellingcat analysis also achieves something else that is hugely important. It has given privacy advocates a really powerful argument for why governments — even the most retrogressive and oppressive — should be passing laws to protect the personal data of every citizen effectively. Because if they don’t, clever people like Bellingcat will be able to draw on the black market resources that inevitably spring up, to reveal lots of things those in power really don’t want exposed.

Source: Secret Agents Implicated In The Poisoning Of Opposition Leader Alexey Navalny Identified Thanks To Russia’s Black Market In Everybody’s Personal Data | Techdirt

France fines Google $120M and Amazon $42M for dropping tracking cookies without consent

France’s data protection agency, the CNIL, has slapped Google and Amazon with fines for dropping tracking cookies without consent.

Google has been hit with a total of €100 million ($120 million) for dropping cookies on Google.fr and Amazon €35 million (~$42 million) for doing so on the Amazon .fr domain under the penalty notices issued today.

The regulator carried out investigations of the websites over the past year and found tracking cookies were automatically dropped when a user visited the domains, in breach of the country’s Data Protection Act.

In Google’s case the CNIL has found three consent violations related to dropping non-essential cookies.

“As this type of cookies cannot be deposited without the user having expressed his consent, the restricted committee considered that the companies had not complied with the requirement provided for by article 82 of the Data Protection Act and the prior collection of the consent before the deposit of non-essential cookies,” it writes in the penalty notice [which we’ve translated from French].

Amazon was found to have made two violations, per the CNIL penalty notice.

CNIL also found that the information about the cookies provided to site visitors was inadequate — noting that a banner displayed by Google did not provide specific information about the tracking cookies the Google.fr site had already dropped.

Under local French (and European) law, site users should have been clearly informed before the cookies were dropped and asked for their consent.

In Amazon’s case its French site displayed a banner informing arriving visitors that they agreed to its use of cookies. CNIL said this did not comply with transparency or consent requirements — since it was not clear to users that the tech giant was using cookies for ad tracking. Nor were users given the opportunity to consent.

The law on tracking cookie consent has been clear in Europe for years. But in October 2019 a CJEU ruling further clarified that consent must be obtained prior to storing or accessing non-essential cookies. As we reported at the time, sites that failed to ask for consent to track were risking a big fine under EU privacy laws.

Source: France fines Google $120M and Amazon $42M for dropping tracking cookies without consent | TechCrunch

‘Save Europe from Software Patents’, Urges Nonprofit FFII – DE is trying for 3rd time using underhanded sneaky tactics

Long-time Slashdot reader zoobab shares this update about the long-standing Foundation for a Free Information Infrastructure, a Munich-based non-profit opposing ratification of a “Unified Patent Court” by Germany: The FFII is crowdfunding a constitutional complaint in Germany against the third attempt to impose software patents in Europe, calling on all software companies, independent software developers and FLOSS authors to donate.

The Unitary Patent and its Court will promote patent trolls, without any appeal possible to the European Court of Justice, which won’t be able to rule on patent law, and software patents in particular. The FFII also says that the proposed court system will be more expensive for small companies than the current national court system.

The stakes are high, so the FFII writes that it anticipates some tricky counter-maneuvering: Stopping the UPC in Germany will be enough to kill the UPC for the whole of Europe… The German government believes it can ratify before the end of the year, as it considers the UK still a member of the EU until 31st December. The agenda of the next votes has been designed on purpose to ratify the UPC before the end of the year. The FFII expects a dirty agenda and political hacks to declare the treaty “into force” and dismiss “constitutional complaints” while the presence of the UK is still problematic.

Source: ‘Save Europe from Software Patents’, Urges Nonprofit FFII – Slashdot

These have been batted off the table before and for very good reason.

TSA Oversight Says Agency’s Suspicionless Surveillance Program Is Worthless And The TSA Can’t Prove It Isn’t

The TSA’s “Quiet Skies” program continues to suffer under scrutiny. When details first leaked out about the TSA’s suspicionless surveillance program, even the air marshals tasked with tailing non-terrorists all over the nation seemed concerned. Marshals questioned the “legality and validity” of the program that sent them after people no government agency had conclusively tied to terrorist organizations or activities. Simply changing flights in the wrong country was enough to initiate the process.

First, the TSA lost the support of the marshals. Then it lost itself. The TSA admitted during a Congressional hearing that it had trailed over 5,000 travelers (in less than four months!) but had yet to turn up even a single terrorist. Nonetheless, it stated it would continue to trail thousands of people a year, presumably in hopes of preventing another zero terrorist attacks.

Then it lost the Government Accountability Office. The GAO’s investigation of the program contained more investigative activity than the program itself. According to its report, the TSA felt surveillance was good but measuring the outcome was bad. When you’re trailing 5,000 people and stopping zero terrorists, the less you know, the better. Not being able to track effectiveness appeared to be a feature of “Quiet Skies,” rather than a bug.

Now it’s lost the TSA’s Inspector General. The title of the report [PDF] underplays the findings, stating the obvious while also understating the obvious: TSA Needs to Improve Management of the Quiet Skies Program. A good alternative title would be “TSA Needs to Scrap the Quiet Skies Program Until it Can Come Up with Something that Might Actually Stop Terrorists.”

I mean…

TSA did not properly plan, implement, and manage the Quiet Skies program to meet the program’s mission of mitigating the threat to commercial aviation posed by higher risk passengers.

In slightly more detail, the TSA did nothing to set up the program correctly or ensure it actually worked. The IG says the TSA never developed performance goals or other metrics to gauge the effectiveness of the suspicionless surveillance. It also ignored its internal guidance to more effectively deploy its ineffective program.

Here’s why:

This occurred because TSA lacked sufficient, centralized oversight to ensure the Quiet Skies program operated as intended.

[…]

Source: TSA Oversight Says Agency’s Suspicionless Surveillance Program Is Worthless And The TSA Can’t Prove It Isn’t | Techdirt

Facebook crushed rivals to maintain an illegal monopoly, the entire United States yells in Zuckerberg’s face

Facebook illegally crushed its competition and continues to do so to this day to maintain its monopoly, according to a lawsuit filed on Wednesday by the attorneys general of no fewer than 46 US states plus Guam and DC.

The lawsuit alleges that the social media giant “illegally acquired competitors in a predatory manner and cut services to smaller threats – depriving users from the benefits of competition and reducing privacy protections and services along the way – all in an effort to boost its bottom line through increased advertising revenue.”

America’s consumer watchdog the FTC is also suing the antisocial network in a parallel action, and making the same basic allegations: that Facebook has been “illegally maintaining its personal social networking monopoly through a years-long course of anticompetitive conduct.”

It’s been a long time coming but the, as alleged, privacy-invading, competition-crushing Zuckerberg spin machine that is Facebook has finally been taken on by the United States.

The action is being led by New York’s Attorney General Letitia James, and she wasn’t holding back in her declaration of legal war. “For nearly a decade, Facebook has used its dominance and monopoly power to crush smaller rivals and snuff out competition, all at the expense of everyday users,” she said. “Today, we are taking action to stand up for the millions of consumers and many small businesses that have been harmed by Facebook’s illegal behavior.”

She also highlighted the biggest complaint against Facebook by its users, a complaint that has been commonplace for nearly a decade, that it has made “billions by converting personal data into a cash cow.”

[…]

The 123-page lawsuit [PDF] dives into how what was once just a website among many others became an online monster devouring anything in its path. “Facebook illegally maintains that monopoly power by deploying a buy-or-bury strategy that thwarts competition and harms both users and advertisers. Facebook’s illegal course of conduct has been driven, in part, by fear that the company has fallen behind in important new segments and that emerging firms were ‘building networks that were competitive with’ Facebook’s and could be ‘very disruptive to’ the company’s dominance,” the lawsuit stated.

It quotes CEO Mark Zuckerberg directly and notes that the Silicon Valley goliath would ruthlessly buy up companies in order to “build a competitive moat” or “neutralize a competitor” in its bid for dominance. And notes that Facebook has “coupled its acquisition strategy with exclusionary tactics that snuffed out competitive threats and sent the message to technology firms that, in the words of one participant, if you stepped into Facebook’s turf or resisted pressure to sell, Zuckerberg would go into ‘destroy mode’ subjecting your business to the ‘wrath of Mark.’ As a result, Facebook has chilled innovation, deterred investment, and forestalled competition in the markets in which it operates, and it continues to do so.”

The lawsuit is a much tighter and angrier indictment of Facebook than a similar one lodged against Google in October by the Department of Justice. It still relies on traditional antitrust arguments, however, rather than trying to break new ground to deal with the modern internet era.

[…]

Source: Facebook crushed rivals to maintain an illegal monopoly, the entire United States yells in Zuckerberg’s face • The Register

I have been talking about this since the beginning of 2019 and it’s wonderful to see the tsunami of action happening now

Proposed U.S. Law Could Slap Twitch Streamers With Felonies For Broadcasting Copyrighted Material

According to Politico offshoot Protocol, the felony streaming proposal is the work of Republican senator Thom Tillis, who has backed similar proposals previously. It is more or less exactly what it sounds like: A proposal to turn unauthorized commercial streaming of copyrighted material—progressive policy publication The American Prospect specifically points to examples like “an album on YouTube, a video clip on Twitch, or a song in an Instagram story”—into a felony offense with a possible prison sentence. Currently, such violations, no matter how severe, are considered misdemeanors rather than felonies, because the law regards streaming as a public performance. With Twitch currently in the crosshairs of the music industry, such a change would turn up the heat on streamers and Twitch even higher—perhaps to an untenable degree. Other platforms, like YouTube, would almost certainly suffer as well.

“A felony streaming bill would likely be a chill on expression,” Katharine Trendacosta, associate director of policy and activism with the Electronic Frontier Foundation, told The American Prospect. “We already see that it’s hard enough in just civil copyright and the DMCA for people to feel comfortable asserting their rights. The chance of a felony would impact both expression and innovation.”

According to Protocol, House and Senate Judiciary Committees have agreed to package the streaming felony proposal with other controversial provisions that include the CASE act, which would establish a new court-like entity within the U.S. Copyright Office to resolve copyright disputes, and the Trademark Modernization Act, which would give the U.S. Patent and Trademark Office more flexibility to crack down on illegitimate claims from foreign countries.

Alongside the felony streaming proposal, these provisions have drawn ire from civil rights groups, digital rights nonprofits, and organizations including the aforementioned Electronic Frontier Foundation, the Internet Archive, the American Library Association, and the Center for Democracy & Technology. Collectively, these groups and others penned a letter to the U.S. Senate last week.

[…]

Source: Proposed U.S. Law Could Slap Twitch Streamers With Felonies For Broadcasting Copyrighted Material

It’s incredible that not only does copyright stifle competition, but it allows creators to create something once, get lucky, and then sit on their arses for the rest of their lives (and their children’s) doing sweet fuck all and raking in dosh. And that these laws get stronger and stronger for the people who do pretty much nothing.

As if Productivity Score wasn’t creepy enough, Microsoft has patented tech for ‘meeting quality monitoring devices’ – PS is being defanged though

The slightly creepy “Productivity Score” may not be all that’s in store for Microsoft 365 users, judging by a trawl of Redmond’s patents.

One that has popped up recently concerns a “Meeting Insight Computing System“, spotted first by GeekWire, created to give meetings a quality score with a view to improving upcoming get-togethers.

It all sounds innocent enough until you read about the requirement for “quality parameters” to be collected from “meeting quality monitoring devices”, which might give some pause for thought.

Productivity Score relies on metrics captured within Microsoft 365 to assess how productive a company and its workers are. Metrics include the take-up of messaging platforms versus email. And though Microsoft has been quick to insist the motives behind the tech are pure, others have cast more of a jaundiced eye over the technology.

[…]

Meeting Insights would take things further by plugging data from a variety of devices into an algorithm in order to score the meeting. Sampling of environmental data such as air quality and the like is all well and good, but proposed sensors such as “a microphone that may, for instance, detect speech patterns consistent with boredom, fatigue, etc”, along with other metrics such as how long a person spends speaking, could also provide data to be stirred into the mix.

And if that doesn’t worry attendees, how about some more metrics to measure how focused a person is? Are they taking care of emails, messaging or enjoying a surf of the internet when they should be paying attention to the speaker? Heck, if one is taking data from a user’s computer, one could even consider the physical location of the device.
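
The patent doesn’t publish a formula, but conceptually a meeting “quality score” is just a weighted combination of normalized signals like the ones above. The sketch below is invented; the metrics, weights and scale are hypothetical, not Microsoft’s.

```python
# Invented sketch of the kind of scoring the patent describes: a weighted
# combination of per-meeting signals collapsed into one "quality" number.
# Metrics, weights and scale are hypothetical, not Microsoft's.

WEIGHTS = {
    "attendance_ratio": 0.3,  # attendees present / attendees invited
    "speaking_balance": 0.3,  # 1.0 if talk time is evenly spread, 0.0 if one voice dominates
    "on_time_start": 0.2,     # 1.0 if the meeting started on schedule
    "low_distraction": 0.2,   # share of attendees not emailing or browsing during the meeting
}


def meeting_quality_score(metrics: dict) -> float:
    """Return a 0-100 score from metrics normalized to the 0.0-1.0 range."""
    return 100 * sum(WEIGHTS[k] * metrics.get(k, 0.0) for k in WEIGHTS)


example = {
    "attendance_ratio": 0.9,
    "speaking_balance": 0.4,
    "on_time_start": 1.0,
    "low_distraction": 0.7,
}
print(round(meeting_quality_score(example)))  # 73
```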

[…]

Talking to The Reg, one privacy campaigner who asked to remain anonymous said of tools such as Productivity Score and the Meeting Insight Computing System patent: “There is a simple dictum in privacy: you cannot lose data you don’t have. In other words, if you collect it you have to protect it, and that sort of data is risky to start with.

“Who do you trust? The correct answer is ‘no one’.”

Source: As if Productivity Score wasn’t creepy enough, Microsoft has patented tech for ‘meeting quality monitoring devices’ • The Register

Since then, Microsoft has said it will remove user names from the ‘Productivity Score’ feature after a privacy backlash (GeekWire).

Microsoft says it will make changes in its new Productivity Score feature, including removing the ability for companies to see data about individual users, to address concerns from privacy experts that the tech giant had effectively rolled out a new tool for snooping on workers.

“Going forward, the communications, meetings, content collaboration, teamwork, and mobility measures in Productivity Score will only aggregate data at the organization level—providing a clear measure of organization-level adoption of key features,” wrote Jared Spataro, Microsoft 365 corporate vice president, in a post this morning. “No one in the organization will be able to use Productivity Score to access data about how an individual user is using apps and services in Microsoft 365.”

The company rolled out its new “Productivity Score” feature as part of Microsoft 365 in late October. It gives companies data to understand how workers are using and adopting different forms of technology. It made headlines over the past week as reports surfaced that the tool lets managers see individual user data by default.

As originally rolled out, Productivity Score turned Microsoft 365 into a “full-fledged workplace surveillance tool,” wrote Wolfie Christl of the independent Cracked Labs digital research institute in Vienna, Austria. “Employers/managers can analyze employee activities at the individual level (!), for example, the number of days an employee has been sending emails, using the chat, using ‘mentions’ in emails etc.”

The initial version of the Productivity Score tool allowed companies to see individual user data. (Screenshot via YouTube)

Spataro wrote this morning, “We appreciate the feedback we’ve heard over the last few days and are moving quickly to respond by removing user names entirely from the product. This change will ensure that Productivity Score can’t be used to monitor individual employees.”

Poland’s Bid To Get Upload Filters Taken Out Of The EU Copyright Directive Suddenly Looks Much More Hopeful

One of the biggest defeats for users of the Internet — and for online freedom of expression — was the passage of the EU Copyright Directive last year. The law was passed using a fundamentally dishonest argument that it did not require upload filters, because they weren’t explicitly mentioned in the text. As a result, supporters of the legislation claimed, platforms would be free to use other technologies that did not threaten freedom of speech in the way that automated upload filters would do. However, as soon as the law was passed, countries like France said that the only way to implement Article 17 (originally Article 13) was through upload filters, and copyright companies started pushing for legal memes to be blocked because they now admitted that upload filters were “practically unworkable”.

This dishonesty may come back to bite supporters of the law. Techdirt reported last August that Poland submitted a formal request for upload filters to be removed from the final text. The EU’s top court, the Court of Justice of the European Union (CJEU), has just held a public hearing on this case, and as the detailed report by Paul Keller makes abundantly clear, there are lots of reasons to be hopeful that Article 17’s upload filters are in trouble from a legal point of view.

The hearing was structured around four questions. Principally, the CJEU wanted to know whether Article 17 meant that upload filters were mandatory. This is a crucial question because the court has found in the past that a general obligation to monitor all user uploads for illegal activities violates the fundamental rights of Internet users and platform operators. This is why proponents of the law insisted that upload filters were not mandatory, but simply one technology that could be applied.

[…]

Poland also correctly pointed out that the alternatives presented by the European institutions, such as fingerprinting, hashing, watermarking, Artificial Intelligence or keyword search, all constitute alternative methods of filtering, but not alternatives to filtering.

This is the point that every expert has been making for years: there are no viable alternatives to upload filters, which means that Article 17 necessarily imposes a general monitoring requirement, something that is not permitted under current EU law. The fact that the Advocate General Øe, who will release his own recommendations on the case early next year, made his comment about the lack of any practical alternative to upload filters is highly significant. During the hearing, representatives of the French and Spanish governments claimed that this doesn’t matter, for the following remarkable reason:

The right to intellectual property should be prioritized over freedom of expression in cases of uncertainty over the legality of user uploads, because the economic damage to copyright-holders from leaving infringements online even for a short period of time would outweigh the damage to freedom of expression of users whose legal uploads may get blocked.

The argument here seems to be that as soon as even a single illegal copy is placed online, it will be copied rapidly and spread around the Internet. But this line of reasoning undermines itself. If placing a single illegal copy online for even a short time really is enough for it to be shared widely, then it only requires a copy to be placed on a site outside the EU’s reach for copies to spread around the entire Internet anyway — because copying is so easy — which makes the speed of the takedown within the EU irrelevant.

[…]

In other words, what seemed at the time like a desperate last attempt by Poland to stop the awful upload filters, with little hope of succeeding, now looks to have a decent chance because of the important general issues it raises — something explored at greater length in a new study written by Reda and others (pdf). That’s not to say that Article 17’s upload filters are dead, but it seems like the underhand methods used to force this legislation through could turn out to be their downfall.

Source: Poland’s Bid To Get Upload Filters Taken Out Of The EU Copyright Directive Suddenly Looks Much More Hopeful | Techdirt

Privacy campaigner flags concerns about Microsoft’s creepy Productivity Score now in 365

Microsoft’s Productivity Score has put in a public appearance in Microsoft 365 and attracted the ire of privacy campaigners and activists.

The Register had already noted the vaguely creepy-sounding technology back in May. The goal of it is to use telemetry captured by the Windows behemoth to track the productivity of an organisation through metrics such as a corporate obsession with interminable meetings or just how collaborative employees are being.

The whole thing sounds vaguely disturbing in spite of Microsoft’s insistence that it was for users’ own good.

As more details have emerged, so have concerns over just how granular the level of data capture is.

Vienna-based researcher (and co-creator of Data Dealer) Wolfie Christl suggested that the new feature “turns Microsoft 365 into a full-fledged workplace surveillance tool.”

Christl went on to claim that the software allows employers to dig into employee activities, checking the usage of email versus Teams and looking into email threads with @mentions. “This is so problematic at many levels,” he noted, adding: “Managers evaluating individual-level employee data is a no go,” and that there was the danger that evaluating “productivity” data can shift power from employees to organisations.

Earlier this year we put it to Microsoft corporate vice president Brad Anderson that employees might find themselves under the gimlet gaze of HR thanks to this data.

He told us: “There is no PII [personally identifiable information] data in there… it’s a valid concern, and so we’ve been very careful that as we bring that telemetry back, you know, we bring back what we need, but we stay out of the PII world.”

Microsoft did concede that there could be granularity down to the individual level although exceptions could be configured. Melissa Grant, director of product marketing for Microsoft 365, told us that Microsoft had been asked if it was possible to use the tool to check, for example, that everyone was online and working by 8 but added: “We’re not in the business of monitoring employees.”

Christl’s concerns are not limited to the Productivity Score dashboard itself; he is also worried about what is going on behind the scenes in the form of the Microsoft Graph. The People API, for example, is a handy jumping-off point into all manner of employee data.
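
The People API he refers to is a documented Microsoft Graph endpoint that returns a relevance-ranked list of the people a signed-in user interacts with most. As an illustration of how little code that takes, here is a minimal sketch; it assumes an OAuth access token with the People.Read scope has already been obtained (token acquisition, for example via MSAL, is omitted).

```python
# Minimal illustration of querying the Microsoft Graph People API, which
# returns a relevance-ranked list of the people a user interacts with most.
# Assumes an OAuth 2.0 access token with the People.Read scope is already
# available; acquiring one (e.g. via MSAL) is outside the scope of this sketch.
import requests

ACCESS_TOKEN = "<access token with People.Read scope>"  # placeholder

resp = requests.get(
    "https://graph.microsoft.com/v1.0/me/people?$top=10",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()

# Each entry is a "person" resource; displayName is one of its standard fields.
for person in resp.json().get("value", []):
    print(person.get("displayName"))
```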

For its part, Microsoft has continued to insist that Productivity Score is not a stick with which to bash employees. In a recent blog on the matter, the company stated:

To be clear, Productivity Score is not designed as a tool for monitoring employee work output and activities. In fact, we safeguard against this type of use by not providing specific information on individualized actions, and instead only analyze user-level data aggregated over a 28-day period, so you can’t see what a specific employee is working on at a given time. Productivity Score was built to help you understand how people are using productivity tools and how well the underlying technology supports them in this.

In an email to The Register, Christl retorted: “The system *does* clearly monitor employee activities. And they call it ‘Productivity Score’, which is perhaps misleading, but will make managers use it in a way managers usually use tools that claim to measure ‘productivity’.”

He added that Microsoft’s own promotional video for the technology showed a list of clearly identifiable users, which corporate veep Jared Spataro said enabled companies to “find your top communicators across activities for the last four weeks.”

We put Christl’s concerns to Microsoft and asked the company if its good intentions extended to the APIs exposed by the Microsoft Graph.

While it has yet to respond to worries about the APIs, it reiterated that the tool was compliant with privacy laws and regulations, telling us: “Productivity Score is an opt-in experience that gives IT administrators insights about technology and infrastructure usage.”

It added: “Insights are intended to help organizations make the most of their technology investments by addressing common pain points like long boot times, inefficient document collaboration, or poor network connectivity. Insights are shown in aggregate over a 28-day period and are provided at the user level so that an IT admin can provide technical support and guidance.”

Source: Privacy campaigner flags concerns about Microsoft’s creepy Productivity Score • The Register

IRS Contracted to Search Warrantless Location Database Over 10,000 Times

The IRS was able to query a database of location data quietly harvested from ordinary smartphone apps over 10,000 times, according to a copy of the contract between the IRS and the data provider obtained by Motherboard.

The document provides more insight into what exactly the IRS wanted to do with a tool purchased from Venntel, a government contractor that sells clients access to a database of smartphone movements. The Inspector General is currently investigating the IRS for using the data without a warrant to try to track the location of Americans.

“This contract makes clear that the IRS intended to use Venntel’s spying tool to identify specific smartphone users using data collected by apps and sold onwards to shady data brokers. The IRS would have needed a warrant to obtain this kind of sensitive information from AT&T or Google,” Senator Ron Wyden told Motherboard in a statement after reviewing the contract.

[…]

Venntel sources its location data from gaming, weather, and other innocuous-looking apps. An aide in the office of Senator Ron Wyden, which has been investigating the location data industry, previously told Motherboard that officials from Customs and Border Protection (CBP), which has also purchased Venntel products, said they believe Venntel also obtains location information from the real-time bidding that occurs when advertisers push their adverts into users’ browsing sessions.

One of the new documents says Venntel sources the location information from its “advertising analytics network and other sources.” Venntel is a subsidiary of advertising firm Gravy Analytics.

The data is “global,” according to a document obtained from CBP.

[…]

Source: IRS Could Search Warrantless Location Database Over 10,000 Times

GM launches OnStar Insurance Services – uses your driving data to calculate insurance rates

Andrew Rose, president of OnStar Insurance Services commented: “OnStar Insurance will promote safety, security and peace of mind. We aim to be an industry leader, offering insurance in an innovative way.

“GM customers who have subscribed to OnStar and connected services will be eligible to receive discounts, while also receiving fully-integrated services from OnStar Insurance Services.”

The service has been developed to improve the experience for policyholders who have an OnStar Safety & Security plan: Automatic Crash Response is designed to notify an OnStar Emergency-certified Advisor, who can send for help.

OnStar Insurance Services says it is working with its insurance carrier partners to remove bias from insurance plans by focusing on factors within the customer’s control, such as individual vehicle usage, and by rewarding smart driving habits that benefit road safety.

OnStar Insurance Services plans to provide customers with personalised vehicle care and promote safer driving habits, along with a data-backed analysis of driving behaviour.

Source: General Motors launches OnStar Insurance Services – Reinsurance News

What it doesn’t say is whether the data could be used to raise premiums or deny coverage entirely, how transparent the reward system will be, or what else GM will be doing with your data.

Australia’s spy agencies caught collecting COVID-19 app data

Australia’s intelligence agencies have been caught “incidentally” collecting data from the country’s COVIDSafe contact-tracing app during the first six months of its launch, a government watchdog has found.

The report, published Monday by the Australian government’s inspector general for the intelligence community, which oversees the government’s spy and eavesdropping agencies, said the app data was scooped up “in the course of the lawful collection of other data.”

But the watchdog said that there was “no evidence” that any agency “decrypted, accessed or used any COVID app data.”

Incidental collection is a common term used by spies to describe data that was not deliberately targeted but was swept up as part of a wider collection effort. This kind of collection isn’t accidental, but rather a consequence of, for example, spy agencies tapping into fiber optic cables, which carry an enormous firehose of data. An Australian government spokesperson told one outlet, which first reported the news, that incidental collection can also happen as a result of the “execution of warrants.”

The report did not say when the incidental collection stopped, but noted that the agencies were “taking active steps to ensure compliance” with the law, and that the data would be “deleted as soon as practicable,” without setting a firm date.

For some, the fear that a government spy agency could access COVID-19 contact-tracing data was the worst possible outcome.

[…]

Source: Australia’s spy agencies caught collecting COVID-19 app data | TechCrunch

Amazon’s ad-hoc Ring, Echo mesh network can mooch off your neighbors’ Wi-Fi if needed – and it’s opt-out

Amazon is close to launching Sidewalk – its ad-hoc wireless network for smart-home devices that taps into people’s Wi-Fi – and it is pretty much an opt-out affair.

The gist of Sidewalk is this: nearby Amazon gadgets, regardless of who owns them, can automatically organize themselves into their own private wireless network mesh, communicating primarily using Bluetooth Low Energy over short distances, and 900MHz LoRa over longer ranges.

At least one device in a mesh will likely be connected to the internet via someone’s Wi-Fi, and so, every gadget in the mesh can reach the ‘net via that bridging device. This means all the gadgets within a mesh can be remotely controlled via an app or digital assistant, either through their owners’ internet-connected Wi-Fi or by going through a suitable bridge in the mesh. If your internet goes down, your Amazon home security gizmo should still be reachable, and send out alerts, via the mesh.

It also means if your neighbor loses broadband connectivity, their devices in the Sidewalk mesh can still work over the ‘net by routing through your Sidewalk bridging device and using your home ISP connection.

[…]

Amazon Echoes, Ring Floodlight Cams, and Ring Spotlight Cams will be the first Sidewalk bridging devices as well as Sidewalk endpoints. The internet giant hopes to encourage third-party manufacturers to produce equipment that is also Sidewalk compatible, extending meshes everywhere.

Crucially, it appears Sidewalk is opt-out for those who already have the hardware, and will be opt-in for those buying new gear.

[…]

If you already have, say, an Amazon Ring, it will soon get a software update that will automatically enable Sidewalk connectivity, and you’ll get an email explaining how to switch that off. When powering up a new gizmo, you’ll at least get the chance to opt in or out.

[…]

We’re told Sidewalk will only sip your internet connection rather than hog it, limiting itself to half a gigabyte a month. This policy appears to live in hope that people aren’t on stingy monthly data caps.

[…]

Just don’t forget that Ring and the police, in the US at least, have a rather cosy relationship. While Amazon stresses that Ring owners are in control of the footage recorded by their camera-fitted doorbells, homeowners are often pressured into turning their equipment into surveillance systems for the cops.

Source: Amazon’s ad-hoc Ring, Echo mesh network can mooch off your neighbors’ Wi-Fi if needed – and it’s opt-out • The Register

Disney (Disney!) Accused Of Trying To Lawyer Its Way Out Of Paying Royalties To Alan Dean Foster, Star Wars and Alien book writer

Disney, of course, has quite the reputation as a copyright maximalist. It has been accused of being the leading company in always pushing for more draconian copyright laws. And then, of course, there’s the infamous Mickey Mouse curve, first designated a decade ago by Tom Bell, highlighting how copyright term extensions seemed to always happen just as Mickey Mouse was set to go into the public domain (though hopefully that’s about to end).

Whether accurate or not, Disney is synonymous with maximizing copyright law, which the company and its lobbyists always justify with bullshit claims of how they do it “for the artist.”

Except that it appears that Disney is not paying artists. While the details are a bit fuzzy, yesterday the Science Fiction & Fantasy Writers of America (SFWA) and famed author Alan Dean Foster announced that Disney was no longer paying him royalties for the various Star Wars books he wrote (including the novelization of the very first film back in 1976), along with his novelizations of the Alien movies. He claims he’d always received royalties before, but they suddenly disappeared.

Foster wrote a letter (amusingly addressed to “Mickey”) in which he lays out his side of the argument, more or less saying that as Disney has gobbled up various other companies and rights, it just stopped paying royalties:

When you purchased Lucasfilm you acquired the rights to some books I wrote. STAR WARS, the novelization of the very first film. SPLINTER OF THE MIND’S EYE, the first sequel novel. You owe me royalties on these books. You stopped paying them.

When you purchased 20th Century Fox, you eventually acquired the rights to other books I had written. The novelizations of ALIEN, ALIENS, and ALIEN 3. You’ve never paid royalties on any of these, or even issued royalty statements for them.

All these books are all still very much in print. They still earn money. For you. When one company buys another, they acquire its liabilities as well as its assets. You’re certainly reaping the benefits of the assets. I’d very much like my miniscule (though it’s not small to me) share.

[…]

In a video press conference, Foster and SFWA […] said that Disney is claiming that it purchased “the rights but not the obligations” to these works.

Source: Disney (Disney!) Accused Of Trying To Lawyer Its Way Out Of Paying Royalties To Alan Dean Foster | Techdirt

Nintendo Continues Cracking Down On People Selling Switch Hacks: jailbreaking with RCM = piracy in their minds

Nintendo filed a lawsuit Wednesday against an Amazon Marketplace user who was allegedly selling devices called RCM loaders. Used to help people jailbreak their Switch, shutting these down is the latest in the company’s efforts to stop players from pirating its games.

As first reported by Polygon, the lawsuit against reseller Le Hoang Minh seeks “relief for unlawful trafficking in circumvention devices in violation of the Digital Millennium Copyright Act (DMCA).” In addition to having the Seattle District Court order Minh to stop selling the devices, Nintendo also wants $2,500 in damages for each one already sold.

“Piracy of video game software has become a serious, worsening international problem,” Nintendo’s lawyers write (without offering any further detail), arguing that the RCM loaders and other devices like them are a big contributor to that. While jailbreaking a Switch isn’t necessarily itself against the law, pirating games is, and devices whose primary purpose is to facilitate that are also prohibited. The loaders aren’t hard to find on Amazon and other resellers, but it’s essentially the code the loaders run to jailbreak the Switch that people buy them for, and it’s the spread of that code that Nintendo wants to stop.

According to the legal complaint Nintendo filed, the company originally sought to have Minh’s listings removed from Amazon by issuing DMCA-related takedowns, but Minh filed a counter-notification with Amazon to keep the listings up, forcing Nintendo to take the matter to court.

Source: Nintendo Continues Cracking Down On People Selling Switch Hacks

Just because a device can somehow be used for jailbreaking doesn’t mean it always is. A bit like a phone can be used to plot a bank heist, but that isn’t the sole purpose of a phone.

The ones who brought you Let’s Encrypt, bring you: Tools for gathering anonymized app usage metrics from netizens

The Internet Security Research Group (ISRG) has a plan to allow companies to collect information about how people are using their products while protecting the privacy of those generating the data.

Today, the California-based non-profit, which operates Let’s Encrypt, introduced Prio Services, a way to gather online product metrics without compromising the personal information of product users.

“Applications such as web browsers, mobile applications, and websites generate metrics,” said Josh Aas, founder and executive director of ISRG, and Tim Geoghegan, site reliability engineer, in an announcement. “Normally they would just send all of the metrics back to the application developer, but with Prio, applications split the metrics into two anonymized and encrypted shares and upload each share to different processors that do not share data with each other.”
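The splitting step described here is, at its core, additive secret sharing. Below is a minimal sketch of that idea only, with made-up metric values; the real Prio protocol additionally attaches SNIP validity proofs so a malicious client can’t submit garbage shares, and runs the aggregation across independent processors, none of which this sketch attempts to model.

```python
import secrets

# Prime modulus for the additive secret-sharing field (toy-sized choice;
# a real deployment would pick the field to suit its proof system).
PRIME = 2**61 - 1

def split_metric(value: int) -> tuple[int, int]:
    """Split one metric value into two shares that look individually random.

    Neither share reveals anything about `value` on its own; only their
    sum modulo PRIME does.
    """
    share_a = secrets.randbelow(PRIME)
    share_b = (value - share_a) % PRIME
    return share_a, share_b

# Each client splits its metric and sends one share to each processor.
client_metrics = [1, 0, 1, 1, 0]  # e.g. "did this user touch feature X?"
shares_for_a, shares_for_b = zip(*(split_metric(m) for m in client_metrics))

# Each processor only ever sees its own pile of random-looking shares,
# and publishes just the aggregate of those shares.
aggregate_a = sum(shares_for_a) % PRIME
aggregate_b = sum(shares_for_b) % PRIME

# Combining the two aggregates recovers the true total, without either
# processor having seen any individual client's value.
total = (aggregate_a + aggregate_b) % PRIME
print(total)  # 3
```

The privacy property rests entirely on the two processors not colluding: either one alone holds only uniformly random numbers, which is exactly why ISRG positions itself as the independent second processor.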

Prio is described in a 2017 research paper [PDF] as “a privacy-preserving system for the collection of aggregate statistics.” The system was developed by Henry Corrigan-Gibbs, then a Stanford doctoral student and currently an MIT assistant professor, and Dan Boneh, a professor of computer science and electrical engineering at Stanford.

Prio implements a cryptographic approach called secret-shared non-interactive proofs (SNIPs). According to its creators, it handles data only 5.7x slower than systems with no privacy protection. That’s considerably better than the competition: client-generated non-interactive zero-knowledge proofs of correctness (NIZKs) are 267x slower than unprotected data processing and privacy methods based on succinct non-interactive arguments of knowledge (SNARKs) clock in at three orders of magnitude slower.

“With Prio, you can get both: the aggregate statistics needed to improve an application or service and maintain the privacy of the people who are providing that data,” said Boneh in a statement. “This system offers a robust solution to two growing demands in our tech-driven economy.”

In 2018 Mozilla began testing Prio to gather Firefox telemetry data and found the cryptographic scheme compelling enough to make it the basis of its Firefox Origin Telemetry service.

[…]

Source: The ones who brought you Let’s Encrypt, bring you: Tools for gathering anonymized app usage metrics from netizens • The Register

Apple’s ‘Batterygate’ Saga Wraps Up With $113 Million Settlement

Younger readers might not know, but there was once an annual tradition in which Apple would release a new iPhone, old iPhones would suddenly start performing poorly, and users would speculate about a conspiracy to get them to buy the shiny new thing. It turned out that a conspiracy, of sorts, did exist, and Apple has been trying to make the whole embarrassing saga go away for years. On Wednesday, the finish line came into view after Arizona Attorney General Mark Brnovich announced that an investigation involving 34 states is concluding with a settlement and no admission of guilt from Apple.

In 2017, Apple admitted that updates to iOS were throttling older iPhone models but framed it as a misunderstanding. Apple said that the software tweaks were intended to mitigate unwanted shutdowns in devices with aging batteries. It apologized and offered discounted battery replacements as a consolation prize. Many users felt that Apple’s secretive approach was deceptive and intended to lead them to believe they needed a new phone when a fresh battery might keep the old one going for another cycle. The discounted battery offer wasn’t enough for some users, and this spring Apple agreed to settle a class-action suit for up to $500 million, doling out $25 per phone to owners who filed a claim. Apple did not admit any wrongdoing.

Today’s announcement tentatively concludes a separate investigation launched by state attorneys general into the controversy. In a statement, Brnovich’s office said that the proposed settlement includes a $113 million fine to be distributed amongst the states involved as well as a requirement that “Apple also must provide truthful information to consumers about iPhone battery health, performance, and power management. Apple must provide this important information in various forms on its website, in update installation notes, and in the iPhone user interface itself.”

Source: Apple’s ‘Batterygate’ Saga Wraps Up With $113 Million Settlement

Google Will Make It a bit Easier to Turn Off Smart Features which track you, Slightly Harder for Regulators to Break Up Google

Soon, Google will present you with a clear choice to disable smart features, like Google Assistant reminders to pay your bills and predictive text in Gmail. Whether you like the Gmail mindreader function that autofills “all the best” and “reaching out,” or have long dreaded the arrival of the machine staring back from the void: it’s your world, Google’s just living in it. According to Google.

We’ve always been able to disable these functions if we bothered hunting through account settings. But “in the coming weeks” Google will show a new blanket setting to “turn off smart features,” which will disable features like Smart Compose and Smart Reply in apps like Gmail; the second half of the same prompt will control whether additional Google products—like Maps or Assistant, for example—are allowed to be personalized based on data from Gmail, Meet, and Chat.

Google writes in its blog post about the new-ish settings that humans are not looking at your emails to enable smart features, and Google ads are “not based on your personal data in Gmail,” something CEO Sundar Pichai has likewise said time and again. Google claims to have stopped that practice in 2017, although the following year the Wall Street Journal reported that third-party app developers had freely perused inboxes with little oversight. (When asked whether this is still a problem, the spokesperson pointed us to Google’s 2018 effort to tighten security.)

A Google spokesperson emphasized that the company only uses email contents for security purposes like filtering spam and phishing attempts.

These personalization changes aren’t so much about tightening security as they are another informed-consent defense Google can use to repel the current regulatory siege being waged against it by lawmakers. Google has already expanded incognito mode to Maps and introduced auto-deletion of Location History, Web & App Activity, and YouTube history data (though only after a period of a few months).

Inquiries in the U.S. and EU have found that Google’s privacy settings have historically presented the appearance of privacy, rather than privacy itself. After a 2018 AP article exposed the extent of Google’s location data harvesting, an investigation found that turning location off in Android was no guarantee that Google wouldn’t collect location data (though Google has denied this). Plaintiffs in a $5 billion class-action lawsuit filed this summer alleged that “incognito mode” in Chrome didn’t prevent Google from capturing and sharing their browsing history. And last year, French regulators fined Google nearly $57 million for violating the General Data Protection Regulation (GDPR) by allegedly burying privacy controls beneath five or six layers of settings. (When asked, the spokesperson said Google has no additional comment on these cases.)

So this is nice, and also Google’s announcement reads as a letter to regulators. “This new setting is designed to reduce the work of understanding and managing [a choice over how data is processed], in view of what we’ve learned from user experience research and regulators’ emphasis on comprehensible, actionable user choices over data.”

Source: Google Will Make It Easier to Turn Off Smart Features

Apple hits back at European activist complaints about unauthorised tracking identifier – says it doesn’t use it… but 3rd parties do

The group, led by campaigner Max Schrems, filed complaints with data protection watchdogs in Germany and Spain alleging that the tracking tool illegally enabled the $2 trillion U.S. tech giant to store users’ data without their consent.

Apple directly rebutted the claims filed by Noyb, the digital rights group founded by Schrems, saying they were “factually inaccurate and we look forward to making that clear to privacy regulators should they examine the complaint”.

Schrems is a prominent figure in Europe’s digital rights movement that has resisted intrusive data-gathering by Silicon Valley’s tech platforms. He has fought two cases against Facebook, winning landmark judgments that forced the social network to change how it handles user data.

Noyb’s complaints were brought against Apple’s use of a tracking code, known as the Identifier for Advertisers (IDFA), that is automatically generated on every iPhone when it is set up.

The code, stored on the device, makes it possible to track a user’s online behaviour and consumption preferences – vital in allowing companies to send targeted adverts.

“Apple places codes that are comparable to a cookie in its phones without any consent by the user. This is a clear breach of European Union privacy laws,” Noyb lawyer Stefano Rossetti said.

Rossetti referred to the EU’s e-Privacy Directive, which requires a user’s consent before the installation and use of such information.

Apple said in response that it “does not access or use the IDFA on a user’s device for any purpose”.

It said its aim was to protect the privacy of its users and that the latest release of its iOS 14 operating system gave users greater control over whether apps could link with third parties for the purposes of targeted advertising.

Source: Apple hits back at European activist complaints against tracking tool | Reuters

The complaint against Apple is that the IDFA is set at all without the user’s consent. The point is not whether Apple itself accesses the identifier; the point is that unspecified 3rd parties (advertisers, hackers, governments, etc.) can.