Facebook: Remember how we promised we weren’t tracking your location? Psych! Can’t believe you fell for that

For years the antisocial media giant has claimed it doesn’t track your location, insisting to suspicious reporters and privacy advocates that its addicts “have full control over their data,” and that it does not gather or sell that data unless those users agree to it.

No one believed it. So, when it and Google were hit with lawsuits trying to get to the bottom of the issue, Facebook followed its well-worn path to avoiding scrutiny: it changed its settings and pushed out carefully worded explanations that sounded an awful lot like it wasn’t tracking you anymore. But it was. Because location data is valuable.

Then, late on Monday, Facebook emitted a blog post in which it kindly offered to help users “understand updates” to their “device’s location settings.”

It begins: “Facebook is better with location. It powers features like check-ins and makes planning events easier. It helps improve ads and keep you and the Facebook community safe. Features like Find Wi-Fi and Nearby Friends use precise location even when you’re not using the app to make sure that alerts and tools are accurate and personalized for you.”

You may have missed the critical part amid the glowing testimony so we’ll repeat it: “… use precise location even when you’re not using the app…”

Huh, fancy that. It sounds an awful lot like tracking. After all, why would you want Facebook to know your precise location at all times, even when you’re not using its app? And didn’t Facebook promise it wasn’t doing that?

Timing

Well, yes, it did, and it was being economical with the truth. But perhaps the bigger question is: why now? Why has Facebook decided to come clean all of a sudden? Is it because of the newly announced antitrust and privacy investigations into tech giants? Well, yes, in a roundabout way.

Surprisingly, in a moment of almost honesty which must have felt quite strange for Facebook’s execs, the web giant actually explains why it has stopped pretending it doesn’t track users: because soon it won’t be able to keep up the pretense.

“Android and iOS have released new versions of their operating systems, which include updates to how you can view and manage your location,” the blog post reveals.

That’s right, under pressure from lawmakers and users, both Google and Apple have added new privacy features to their upcoming mobile operating systems – Android and iOS – that will make it impossible for Facebook to hide its tracking activity.

Source: Facebook: Remember how we promised we weren’t tracking your location? Psych! Can’t believe you fell for that • The Register

The Windows 10 Privacy Settings You Should Check Right Now

If you’re at all concerned about the privacy of your data, you don’t want to leave the default settings in place on your devices—and that includes anything that runs Windows 10.

Microsoft’s operating system comes with a variety of controls and options you can modify to lock down the use of your data, from the information you share with Microsoft to the access that individual apps have to your location, camera, and microphone. Check these privacy-related settings as soon as you’ve got your Windows 10 computer set up—or now, if you’re a longtime user who hasn’t gotten around to it yet.

Source: The Windows 10 Privacy Settings You Should Check Right Now | WIRED

Cops did hand over photos for King’s Cross facial-recog CCTV to 3rd parties after all – a property developer, between 2016 and 2018

London cops have admitted they gave photos of people to a property developer to use in a facial-recognition system in the heart of the UK capital.

Back in July, Siân Berry, co-leader of the Green Party of England and Wales, asked London Mayor Sadiq Khan whether the Met Police had collaborated with any retailers or other private companies in the operation of facial-recognition systems. A month later, Khan replied that the police force had not worked with any organisations on face-scanning tech in the capital beyond its own experiments.

However, that turned out to be incorrect. On Wednesday this week, the mayor revealed the cops had in actual fact handed over snaps of people to the private landlord for most of the busy King’s Cross area – which, it emerged last month, had set up facial-recognition cameras to snoop on thousands of Brits going about their day.

“The MPS [Metropolitan Police Service] has just now brought it to my attention that the original information they provided … was incorrect and they have in fact shared images related to facial recognition with King’s Cross Central Limited Partnership,” Khan said in an update, adding that this handover of photos ended sometime in 2018.

Source: Oops, wait, yeah, we did hand over photos for King’s Cross facial-recog CCTV, cops admit • The Register

Google has secret webpages that feed your personal data to advertisers, report to EU says

New evidence submitted for an investigation into Google’s collection of personal data in the European Union reportedly accuses the search giant of stealthily sending your personal user data to advertisers. The company allegedly relays this information to advertisers using hidden webpages, allowing it to circumvent EU privacy regulations.

The evidence was submitted to Ireland’s Data Protection Commission, the main watchdog over the company in the European Union, by Johnny Ryan, chief policy officer for privacy-focused browser maker Brave, according to a Financial Times report Wednesday. Ryan said he discovered that Google used a tracker containing web browsing information, location and other data, and sent it to ad companies via webpages that “showed no content.” This could allow companies buying ads to match a user’s Google profile and web activity to profiles from other companies, which is against Google’s own ad-buying rules, the FT reported.

In response, Google said Wednesday it doesn’t serve “personalized ads or send bid requests to bidders without user consent.”

The process laid out by Ryan could potentially be “cookie matching” or “cookie syncing,” an ad industry practice of matching ads across multiple sites based on a user’s browsing history. A Google developer page on cookie matching explains the process and the privacy principles the search engine follows, such as not allowing the info to be harvested by multiple companies.

The Data Protection Commission began an investigation into Google’s practices in May after it received a complaint from Brave that Google was allegedly violating the EU’s General Data Protection Regulation.

Source: Google has secret webpages that feed your personal data to advertisers, report says – CNET

Online Depression Tests Are Collecting and Sharing Your Data

This week, Privacy International published a report—Your mental health for sale—which explored how mental health websites handle user data. The digital rights nonprofit looked at 136 mental health webpages across Google France, Google Germany and the UK version of Google, according to the report. They chose websites based on advertised links and featured page search results for depression-related terms in French, German, and English, and also included the most visited sites according to web analytics service SimilarWeb.

According to the report, the organization used the open-source software webxray to identify third-party HTTP requests and cookies. It then analyzed the websites on July 8th of this year. The analysis found that 97.78 percent of the webpages had a third-party element, which might include cookies, JavaScript, or an image hosted on an outside server. And Privacy International also pointed out that its research found that the main reason for these third-party elements was for advertising.

Webxray’s analysis found that 76.04 percent of the webpages had trackers for marketing purposes—80.49 percent of the pages in France, 61.36 percent in Germany, and 86.27 percent in the UK. The third-party trackers included advertising services from Google, Facebook, and Amazon, with Google’s trackers the most prevalent, followed by Facebook’s and Amazon’s.

A deeper dive into a subset of these websites—the first three Google search results for “depression test” in each of the three countries—also indicated some more specific and egregious ways in which these trackers are sharing some of our most intimate data. For instance, Privacy International found that some of the depression test websites stored users’ responses and shared them, along with the test results, with third parties. It also found that two depression test websites use Hotjar, an online feedback tool that can record what someone types and clicks on a webpage. It’s not difficult to imagine how such data—responses to a depression test—could be exploited.

Source: Online Depression Tests Are Collecting and Sharing Your Data

Mozilla says Firefox won’t defang ad blockers – unlike Google Chrome, which is steadily stripping away your defences against 3rd-party tracking

On Tuesday, Mozilla said it is not planning to change the ad-and-content blocking capabilities of Firefox to match what Google is doing in Chrome.

Google’s plan to revise its browser extension APIs, known as Manifest v3, follows from the web giant’s recognition that many of its products and services can be abused by unscrupulous developers. The search king refers to its product security and privacy audit as Project Strobe, “a root-and-branch review of third-party developer access to your Google account and Android device data.”

In a Chrome extension, the manifest file (manifest.json) tells the browser which files and capabilities (APIs) will be used. Manifest v3, proposed last year and still being hammered out, will alter and limit the capabilities available to extensions.

Developers who created extensions under Manifest v2 may have to revise their code to keep it working with future versions of Chrome. That may not be practical or possible in all cases, though. The developer of uBlock Origin, Raymond Hill, has said his web-ad-and-content-blocking extension will break under Manifest v3. It’s not yet clear whether uBlock Origin can or will be adapted to the revised API.

The most significant change under Manifest v3 is the deprecation of the blocking webRequest API (except for enterprise users), which lets extensions intercept incoming and outgoing browser data, so that the traffic can be modified, redirected or blocked.

Firefox not following

“In its place, Google has proposed an API called declarativeNetRequest,” explains Caitlin Neiman, community manager for Mozilla Add-ons (extensions), in a blog post.

“This API impacts the capabilities of content blocking extensions by limiting the number of rules, as well as available filters and actions. These limitations negatively impact content blockers because modern content blockers are very sophisticated and employ layers of algorithms to not only detect and block ads, but to hide from the ad networks themselves.”
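To see why those limits bite, it helps to know what the replacement looks like: where the blocking webRequest API lets an extension run arbitrary code against every request, declarativeNetRequest reduces blocking to a fixed list of JSON rules the browser evaluates itself. A minimal sketch of one such rule (the hostname is purely illustrative):

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image", "xmlhttprequest"]
  }
}
```

A static rule like this can block a known ad host, but it cannot implement the layered, adaptive logic that content blockers such as uBlock Origin use to keep up with ad networks – which is the crux of Mozilla’s objection.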

Mozilla offers Firefox developers the WebExtensions API, which is mostly compatible with the Chrome extensions platform and is supported by the Chromium-based browsers Brave, Opera and Vivaldi. Those three browser makers have said they intend to work around Google’s changes to the blocking webRequest API. Now Mozilla has said as much too.

“We have no immediate plans to remove blocking webRequest and are working with add-on developers to gain a better understanding of how they use the APIs in question to help determine how to best support them,” said Neiman.

[…]

Google maintains, “We are not preventing the development of ad blockers or stopping users from blocking ads,” even as it acknowledges “these changes will require developers to update the way in which their extensions operate.”

Yet Google’s related web technology proposal two weeks ago to build a “privacy sandbox,” through a series of new technical specifications that would hinder anti-tracking mechanisms, has been dismissed as disingenuous “privacy gaslighting.”

On Friday, EFF staff technologist Bennett Cyphers lambasted the ad biz for its self-serving specs. “Google not only doubled down on its commitment to targeted advertising, but also made the laughable claim that blocking third-party cookies – by far the most common tracking technology on the Web, and Google’s tracking method of choice – will hurt user privacy,” he wrote in a blog post.

Source: Mozilla says Firefox won’t defang ad blockers – unlike a certain ad-giant browser • The Register

REVEALED: Hundreds of words to avoid using online if you don’t want the government spying on you

The Department of Homeland Security has been forced to release a list of keywords and phrases it uses to monitor social networking sites and online media for signs of terrorist or other threats against the U.S.

The intriguing list includes obvious choices such as ‘attack’, ‘Al Qaeda’, ‘terrorism’ and ‘dirty bomb’ alongside dozens of seemingly innocent words like ‘pork’, ‘cloud’, ‘team’ and ‘Mexico’.

Released under a freedom of information request, the information sheds new light on how government analysts are instructed to patrol the internet searching for domestic and external threats.

The words are included in the department’s 2011 ‘Analyst’s Desktop Binder’ used by workers at its National Operations Center, which instructs workers to identify ‘media reports that reflect adversely on DHS and response activities’.

Department chiefs were forced to release the manual following a House hearing over documents obtained through a Freedom of Information Act lawsuit which revealed how analysts monitor social networks and media organisations for comments that ‘reflect adversely’ on the government.

However, they insisted the practice was aimed not at policing the internet for disparaging remarks about the government and signs of general dissent, but at providing awareness of any potential threats.

As well as terrorism, analysts are instructed to search for evidence of unfolding natural disasters, public health threats and serious crimes such as mall or school shootings, major drug busts and illegal immigrant arrests.

The list has been posted online by the Electronic Privacy Information Center – a privacy watchdog group that filed a request under the Freedom of Information Act before suing to obtain the release of the documents.

In a letter to the House Homeland Security Subcommittee on Counter-terrorism and Intelligence, the centre described the choice of words as ‘broad, vague and ambiguous’.


They point out that it includes ‘vast amounts of First Amendment protected speech that is entirely unrelated to the Department of Homeland Security mission to protect the public against terrorism and disasters.’

A senior Homeland Security official told the Huffington Post that the manual ‘is a starting point, not the endgame’ in maintaining situational awareness of natural and man-made threats and denied that the government was monitoring signs of dissent.

However the agency admitted that the language used was vague and in need of updating.

Spokesman Matthew Chandler told the website: ‘To ensure clarity, as part of … routine compliance review, DHS will review the language contained in all materials to clearly and accurately convey the parameters and intention of the program.’

MIND YOUR LANGUAGE: THE LIST OF KEYWORDS IN FULL

(The full keyword lists were published as three images in the original article.)

Source: REVEALED: Hundreds of words to avoid using online if you don’t want the government spying on you | Daily Mail Online

Basically you’re being censored through the use of unnecessary, ubiquitous surveillance – by a democracy.

Why do tech companies file so many weird patents?

There are lots of reasons to patent something. The most obvious one is that you’ve come up with a brilliant invention, and you want to protect your idea so that nobody can steal it from you. But that’s just the tip of the patent strategy iceberg. It turns out there is a whole host of strategies that lead to “zany” or “weird” patent filings, and understanding them offers a window not just into the labyrinthine world of the U.S. Patent and Trademark Office and its potential failings, but also into how companies think about the future. And while it might be fun to gawk at, say, Motorola patenting a lie-detecting throat tattoo, it’s also important to see through the eye-catching headlines and to the bigger issue here: Patents can be weapons and signals. They can spur innovation, as well as crush it.

Let’s start with the anatomy of a patent. Patents have many elements—the abstract, a summary, a background section, illustrations, and a section called “claims.” It’s crucial to know that the thing that matters most in a patent isn’t the abstract, or the title, or the illustrations. It’s the claims, where the patent filer has to list all the new, innovative things that her patent does and why she in fact deserves government protection for her idea. It’s the claims that matter over everything else.

[…]

For a long time, companies didn’t really worry about the PR that patents might generate. Mostly because nobody was looking. But now, journalists are using patents as a window into a company’s psyche, and not always in a way that makes these companies look good.

So why patent something that could get you raked across the internet coals? In many cases, when a company files for a patent, it has no idea whether it’s actually going to use the invention. Often, patents are filed as early as possible in an idea’s life span. Which means that at the moment of filing, nobody really knows where a field might go or what the market might be for something. So companies will patent as many ideas as they can at the early stages and then pick and choose which ones actually make sense for their business as time goes by.

[…]

In some situations, companies file for patents to blanket the field—like dogs peeing on every bush just in case. Many patents are defensive, a way to keep your competitors from developing something more than a way to make sure you can develop that thing. Will Amazon ever make a delivery blimp? Probably not, but now none of its competitors can. (Amazon seems to be a leader in these patent oddities. Its portfolio also includes a flying warehouse, self-destructing drones, an underwater warehouse, and a drone tunnel.)

[…]

David Stein, a patent attorney, says that he sees this at companies he works with. He tells me that once he was in a meeting with inventors about something they wanted to patent, and he asked one of his standard questions to help him prepare the patent: What products will this invention go into? “And they said, ‘Oh, it won’t.’ ” The team that had invented this thing had been disbanded, and the company had moved to a different solution. But they had gone far enough with the patent application that they might as well keep going, if only to use the patent in the future to keep their competitors from gaining an advantage. (It’s almost impossible to know how many patents wind up being “useful” to a company or turn up in actual products.)

As long as you have a budget for it (and patents aren’t cheap—filing for one can easily cost more than $10,000 all told), there’s an incentive for companies to amass as many as they can. Any reporter can tell you that companies love to boast about the number of patents they have, as if it’s some kind of quantitative measure of brilliance. (This makes about as much sense as boasting about how many lines of code you’ve written—it doesn’t really matter how much you’ve got, it matters if it actually works.) “The number of patents a company is filing has more to do with the patent budget than with the amount they’re actually investing in research,” says Lisa Larrimore Ouellette, a professor at Stanford Law School.

[…]

This patent arm wrestling doesn’t just provide low-hanging fruit to reporters. It also affects business dealings. Let’s say you have two companies that want to make some kind of business deal, Charles Duan, a patent expert at the R Street Institute, says. One of their key negotiation points might be patents. If two giant companies want to cut a deal that involves their patent portfolios, nobody is going to go through and analyze every one of those patents to make sure they’re actually useful or original, Duan says, since analyzing a single patent thoroughly can cost thousands of dollars in legal fees. So instead of actually figuring out who has the more valuable patents, “the [company] with more patents ends up getting more. I’m not sure there’s honestly much more to it.”

Several people I spoke with for this story described patent strategy as “an arms race” in which businesses all want to amass as many patents as they can to protect themselves and bolster their position in these negotiations. “There’s not that many companies that are willing to engage in unilateral disarmament.”

[…]

While disarmament might be unlikely, many companies have chosen not to engage in the patent warfare at all. In fact, companies often don’t patent technologies they’re most interested in. A patent necessarily lays out how your product works, information that not all companies want to divulge. “We have essentially no patents in SpaceX,” Elon Musk told Chris Anderson at Wired. “Our primary long-term competition is in China. If we published patents, it would be farcical, because the Chinese would just use them as a recipe book.”

[…]

In most cases, once the inventors and engineers hand over their ideas and answer some questions, it’s the lawyer’s job to build those things out into an actual patent. And here is where a lot of the weirdness actually enters the picture, because the lawyer essentially has to get creative. “You dress up science fiction with words like ‘means for processing’ or ‘data storage device,’ ” says Mullin.

Even the actual language of the patents themselves can be misleading. It turns out you actually can write fan fiction about your own invention in a patent. Patent applications can include what are called “prophetic examples,” which are descriptions of how the patent might work and how you might test it. Those prophetic examples can be as specific as you want, despite being completely fictional. Patents can legally describe a “46-year-old woman” who never existed and say that her “blood pressure is reduced within three hours” when that never actually happened. The only rule about prophetic examples is that they cannot be written in the past tense. Which means that when you’re reading a patent, the examples written in the present tense could be real or completely made up. There’s no way to know.

If this sounds confusing, it is, and not just to journalists trying to wade through these documents. Ouellette, who published a paper in Science about this problem recently, admitted that even she wouldn’t necessarily be able to tell whether experiments described in a patent had actually been conducted.

Some people might argue that these kinds of speculative patents are harmless fun, the result of a Kafkaesque kaleidoscope of capitalism, competition, and bureaucracy. But it’s worth thinking about how they can be misused, says Mullin. Companies that are issued vague patents can go after smaller entities and try to extract money from them. “It’s like beating your competitor over the head with a piece of science fiction you wrote,” he says.

Plus, everyday people can be misled about just how much to trust a company based on its patents. One study found that out of 100 patents cited in scientific articles or books that used only prophetic examples (in other words, had no actual data or evidence in them), 99 were inaccurately described as having been based on real data.

[…]

Stein says that recently he’s had companies bail on patents because they might be perceived as creepy. In fact, in one case, Stein says that the company even refiled a patent to avoid a PR headache. As distrust of technology corporations mounts, the way we read patents has changed. “Everybody involved in the patent process is a technologist. … We don’t tend to step back and think, this could be perceived as something else by people who don’t trust us.” But people are increasingly unwilling to give massive tech companies the benefit of the doubt. This is why Google’s patent for a “Gaze tracking system” got pushback—do you really want Google to know exactly what you look at and for how long?

[…]

there is still real value in reading the patents that companies apply for—not because doing so will necessarily tell you what they’re actually going to make, but because they tell you what problems the company is trying to solve. “They’re indicative of what’s on the engineer’s mind,” says Duan. “They’re not going to make the cage, but it does tell you that they’re worried about worker safety.” Spotify probably won’t make its automatic parking finder, so you don’t have to pause your music in a parking garage while you hunt for a spot. But it does want to figure out how to reduce interruptions in your music consumption. So go forth and read patents. Just remember that they’re often equal parts real invention and sci-fi.

Source: Why do tech companies file so many weird patents?

That science-fiction concepts can be patented is news to me. So you can whack companies around with patents for things you thought of but never implemented. Sounds like a really good idea. Not.

PowerShell 7 ups the telemetry but… hey… is that an off switch?

Microsoft emitted a fresh preview of command-line darling PowerShell 7 last night, highlighting some additional slurping – and how to shut it off.

PowerShell 7 Preview 3, which is built on .NET Core 3.0 Preview 8, is the latest step on the way to final release at the end of 2019 and a potential replacement for the venerable PowerShell 5.1.

The first preview dropped back in May and the gang has made solid progress since. This time around, the team has opted to switch on all experimental features of the command-line shell by default in order to get more feedback on whether those features are worth the extra effort to gain “stable” status.

[…]

there are a number of useful features, some targeted squarely at Windows (stripping away reasons to stay with PowerShell 7’s more Windows-focused ancestors) and others that simply make life easy for script fans. The ability to stick a -Parallel parameter to ForEach-Object in order to execute scriptblocks in parallel is a good example, as is a -ThrottleLimit parameter to keep the thread usage under control.
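For the curious, the new parameters look something like this – a quick sketch assuming PowerShell 7 Preview 3 or later (the parameters don’t exist in Windows PowerShell 5.1):

```powershell
# Process five pipeline items concurrently, capped at two threads at once.
# $_ is the current item inside the -Parallel scriptblock.
1..5 | ForEach-Object -Parallel {
    "Processing item $_ on thread $([System.Threading.Thread]::CurrentThread.ManagedThreadId)"
    Start-Sleep -Milliseconds 100
} -ThrottleLimit 2
```

With -ThrottleLimit 2, at most two scriptblocks run simultaneously; the rest queue up, which keeps a long pipeline from spawning an unbounded pile of threads.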

Preview 3 and Telemetry

However, it’s not all good news. Lee, with impressive openness, highlighted the extra telemetry PowerShell would be capturing with this release. Microsoft’s Sydney Smith provided further details and, perhaps more importantly for some users, explained how to turn the slurping off.

New data points being collected include counts of application types such as Cmdlets and Functions, hosted sessions and PowerShell starts by type (API vs Console).

[…]

for the benefit of those who get twitchy about the slurping of data, Smith highlighted the POWERSHELL_TELEMETRY_OPTOUT environment variable, which can be set to true, yes or 1 to stop PowerShell squirting anything back at Redmond’s servers.
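Setting the variable before launching pwsh is all it takes – for example, from a POSIX shell (the same variable works when set system-wide on Windows):

```shell
# Opt out of PowerShell 7 telemetry for anything started from this shell.
# Accepted values are "true", "yes" or "1".
export POWERSHELL_TELEMETRY_OPTOUT=1
echo "POWERSHELL_TELEMETRY_OPTOUT=$POWERSHELL_TELEMETRY_OPTOUT"
```

Note the variable must be set before PowerShell starts; setting it inside a running session is too late for that session.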

Source: Latest sneak peek at PowerShell 7 ups the telemetry but… hey… is that an off switch? • The Register

Microsoft Contractors Listened to Xbox Owners (mainly kids) in Their Homes – since 2013

Contractors working for Microsoft have listened to audio of Xbox users speaking in their homes in order to improve the console’s voice command features, Motherboard has learned. The audio was supposed to be captured following a voice command like “Xbox” or “Hey Cortana,” but contractors said that recordings were sometimes triggered and recorded by mistake.

The news is the latest in a string of revelations that show contractors working on behalf of Microsoft listen to audio captured by several of its products. Motherboard previously reported that human contractors were listening to some Skype calls as well as audio recorded by Cortana, Microsoft’s Siri-like virtual assistant.

“Xbox commands came up first as a bit of an outlier and then became about half of what we did before becoming most of what we did,” one former contractor who worked on behalf of Microsoft told Motherboard. Motherboard granted multiple sources in this story anonymity as they had signed non-disclosure agreements.

The former contractor said they worked on Xbox audio data from 2014 to 2015, before Cortana was added to the console in 2016. When it launched in November 2013, the Xbox One had the capability to be controlled via voice commands with the Kinect system.

[…]

The former contractor said most of the voices they heard were of children.

“The Xbox stuff was actually a bit of a welcome respite, honestly. It was frequently the same games. Same DLCs. Same types of commands,” the former contractor said. “‘Xbox give me all the games for free’ or ‘Xbox download [newest Minecraft skins pack]’ or whatever,” they added. The former contractor was paid $10 an hour for their work, according to an employment document shared with Motherboard.

“Occasionally I heard ‘Xbox, tell Solas to heal,’ or something similar, which would be a command for Dragon Age: Inquisition,” the former contractor said, referring to hearing audio of in-game commands.

And that listening continued as the Xbox moved from using Kinect for voice commands over to Cortana. A current contractor provided a document that describes how workers should work with different types of Cortana audio, including commands given to control an Xbox.

Source: Microsoft Contractors Listened to Xbox Owners in Their Homes – VICE

All these guys are using this kind of voice data to improve their AI, so there’s nothing particularly sinister in that (although they could probably turn on targeted microphones and listen to YOU if they wanted). But the fact that they lied about it, withheld the information from us, didn’t even mention it in their privacy statements and don’t allow you to opt out – THAT’s a problem.

BTW SONOS is also involved in this…

Google, Apple, Mozilla end Kazakhstan internet spying by blocking root CA

On Wednesday, Google, Apple, and Mozilla said their web browsers will block the Kazakhstan root Certificate Authority (CA) certificate – following reports that ISPs in the country have required customers to install a government-issued certificate that enables online spying.

According to the University of Michigan’s Censored Planet project, the country’s snoops “recently began using a fake root CA to perform a man-in-the-middle (MitM) attack against HTTPS connections to websites including Facebook, Twitter, and Google.”

A root CA certificate can, to put it simply, be abused to intercept and access otherwise protected communication between internet users and websites.

The Censored Planet report indicates that researchers first detected data interception on July 17, a practice that has continued intermittently since then (though discussions of Kazakhstan’s possible abuse of root CA certificates date back several years).

The interception does not appear to be widespread – it’s said to affect only 459 (7 per cent) of the country’s 6,736 HTTPS servers. But it affects 37 domains, largely social media and communications services belonging to Google, Facebook, and Twitter, among others.

Kazakhstan has a population of 18m and 76 per cent internet penetration, according to advocacy group Freedom House, which rates it 62 on a scale of 100 for lack of internet freedom – 100 means no internet access.

Two weeks ago, the government of Kazakhstan said it had discontinued its internet surveillance scheme, initially justified as a way to improve cybersecurity, after lawyers in the country criticized the move.

In notifications to Kazakhstani telecom customers, mobile operators maintained that the government-mandated security certificate represented a lawful demand. Yet, in a statement on August 6, the National Security Committee of the Republic of Kazakhstan said the certificate requirement was just a test, and a successful one at that. And the committee provided instructions for removing the certificate from Android, iOS and Windows devices.

In 2015, Kazakhstan tried to get its root CA certificate into Mozilla’s trusted root store program but was rebuffed, and then tried to get its citizens to install the cert themselves until it was thwarted by legal action.

“As far as we know, the installation of the certificate is not legally required in Kazakhstan at this time,” a Mozilla spokesperson said in an email to The Register.

Source: Finally. Thanks so much, nerds. Google, Apple, Mozilla end government* internet spying for good • The Register

Google Play Publisher account gets terminated – but Google won’t tell you why

Developer Patrick Godeau has claimed his business is under threat after his Google Play Publisher account was terminated without a specific reason given.

Godeau, from France, provides apps for iOS and Android via his company Tokata.

It is a small business but Godeau said in his complaint that he has achieved “millions of downloads”, most via the Play Store.

On 31 July, Godeau received an email stating that “your Google Play Publisher account has been terminated”. He appealed and was told that “we’re unable to reinstate your developer account”. The reason given was not specific, just that it was “due to multiple violations of the Developer Program Policies”.

[…]

In July 2018, Google removed another of his applications, specifying “device and network abuse”. He never discovered what the issue was. Maybe he was using the YouTube API wrongly? “Having read through the API terms of service, I couldn’t deduce how my app infringed them,” he said. However, he was able to publish a new version.

The new issue is not so easily resolved. First, one of his apps was suspended for what the Play team said was “malicious behaviour”. Shortly after, his entire account was terminated, complete with the advice “please do not attempt to register a new developer account”.

Patrick Godeau informs customers that his apps have been removed from the Play Store

The apps remain available on the Apple and Amazon app stores.

Godeau said he has no objection to Google’s efforts to remove malicious apps from the Play Store. His frustration is that he has not been told any specifics about what is wrong with his apps, and that there is no meaningful dialogue with the Play team or appeal against a decision that directly impacts his ability to make a living from software development.

“It seems that I’m not the only one in this situation,” he wrote. “Many Android developers have seen their apps removed and their accounts abruptly terminated by the Google Play bots, often for minor and unintentional reasons, or even for no known reason at all, and almost always without any opportunity to prove their good faith, receiving no other response than automatic messages.”

This kind of incident is apparently not uncommon. Another company, Guidebook, which develops apps for events, has also had its apps removed, leaving users taking to Twitter to ask where they are. Guidebook’s Twitter support says “we’re actively working with Google to rectify this.”

Bemused customers take to Twitter in search of Guidebook apps removed from the Play Store

Another common complaint is that Google does too little to remove pirated or copycat applications from the Play Store, causing potential reputational problems for developers whose customers may get an ad-laden copy instead of the real thing, or simply loss of business to the pirates.

Source: So your Google Play Publisher account has been terminated – of course you would want to know why exactly • The Register

And this is one of the problems when you’re working with an unregulated massive monopoly that can basically dictate whatever arbitrary terms it likes, while people’s incomes suffer as a result.

They need to be broken up!

<iframe width="560" height="315" src="https://www.youtube.com/embed/RFA92mXjXLI" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

If for some reason you want an Apple Card here’s How to Easily Opt Out of Binding Arbitration

You’ll spot binding arbitration clauses in a lot of financial agreements because they help keep banks and their business partners out of court. If you agree to binding arbitration, you can’t go to trial against a company or join a class-action lawsuit; you can only have your dispute settled by a third-party arbitrator. If you don’t like what the arbitrator decides, you still have to live with it.

Not all credit cards allow you to opt out of binding arbitration, but Apple Card does. And it makes it easy for you to opt out by allowing you to do so by text message. In fact, if you have any question about using Apple Card, you can get help via text message (instead of having to use your phone like an actual phone and wait on hold).

Nick Guy shared a screenshot on Twitter to illustrate just how easy it was to opt out of arbitration for his new Apple Card:

Take a minute now to send your opt-out request, then rest easy knowing that if you end up with major beef with Apple Card, you have access to all your options for dealing with it.

Source: How to Easily Opt Out of Apple Card Binding Arbitration

Man sued for using bogus YouTube takedowns to get address for swatting – so copyright is not only inane, it’s also physically dangerous

YouTube is suing a Nebraska man the company says has blatantly abused its copyright takedown process. The Digital Millennium Copyright Act offers online platforms like YouTube legal protections if they promptly take down content flagged by copyright holders. However, this process can be abused—and boy did defendant Christopher L. Brady abuse it, according to YouTube’s legal complaint (pdf).

Brady allegedly made fraudulent takedown notices against YouTube videos from at least three well-known Minecraft streamers. In one case, Brady made two false claims against a YouTuber and then sent the user an anonymous message demanding a payment of $150 by PayPal—or $75 in bitcoin.

“If you decide not to pay us, we will file a 3rd strike,” the message said. When a YouTube user receives a third copyright strike, the YouTuber’s account gets terminated.

A second target was ordered to pay $300 by PayPal or $200 in Bitcoin to avoid a third fraudulent copyright strike.

A third incident was arguably even more egregious. According to YouTube, Brady filed several fraudulent copyright notices against another YouTuber with whom he was “engaged in some sort of online dispute.” The YouTuber responded with a formal counter-notice stating that the content wasn’t infringing—a move that allows the content to be reinstated. However, the law requires the person filing the counter-notice to provide his or her real-world name and address—information that’s passed along to the person who filed the takedown request.

This contact information is supposed to enable a legitimate copyright holder to file an infringement lawsuit in court. But YouTube says Brady had another idea. A few days after filing a counter-notice, the targeted YouTuber “announced via Twitter that he had been the victim of a swatting scheme.” Swatting, YouTube notes, “is the act of making a bogus call to emergency services in an attempt to bring about the dispatch of a large number of armed police officers to a particular address.”

YouTube doesn’t provide hard proof that Brady was responsible for the swatting call, stating only that it “appears” he was responsible based on the sequence of events. But YouTube says it does have compelling evidence that Brady was responsible for the fraudulent takedown notices. And fraudulent takedown notices are themselves against the law.

Section 512(f) of the DMCA says that anyone who “knowingly materially misrepresents” that content is infringing in a takedown notice is liable for costs they impose on both accused infringers and platform owners. While this law has been on the books for more than 20 years, it has rarely been used because most misrepresentations have not been blatant enough to trigger legal liability.

For example, Ars covered the decade-long fight over a “dancing baby” video that happened to have a few seconds of Prince music playing in the background. The Electronic Frontier Foundation argued that the music was clearly allowed under copyright’s fair use doctrine—and that Universal Music should be held liable for submitting a takedown request anyway. A 2016 appeals court ruling made it clear that music labels had some obligation to consider fair use before issuing takedown requests, but the court set the bar so low that the targets of bogus takedowns have little hope of actually collecting damages.

Source: Man sued for using bogus YouTube takedowns to get address for swatting | Ars Technica

Facial recognition ‘epidemic’ across UK private sites in conjunction with the police

Facial recognition is being extensively deployed on privately owned sites across the UK, according to an investigation by civil liberties group Big Brother Watch.

It found an “epidemic” of the controversial technology across major property developers, shopping centres, museums, conference centres and casinos in the UK.

The investigation uncovered live facial recognition in Sheffield’s major shopping centre Meadowhall.

Site owner British Land said: “We do not operate facial recognition at any of our assets. However, over a year ago we conducted a short trial at Meadowhall, in conjunction with the police, and all data was deleted immediately after the trial.”

The investigation also revealed that Liverpool’s World Museum scanned visitors with facial recognition surveillance during its exhibition, “China’s First Emperor and the Terracotta Warriors” in 2018.

The museum’s operator, National Museums Liverpool, said this had been done because there had been a “heightened security risk” at the time. It said it had sought “advice from Merseyside Police and local counter-terrorism advisors” and that use of the technology “was clearly communicated in signage around the venue”.

A spokesperson added: “World Museum did not receive any complaints and it is no longer in use. Any use of similar technology in the future would be in accordance with National Museums Liverpool’s standard operating procedures and with good practice guidance issued by the Information Commissioner’s Office.”

Big Brother Watch said it also found the Millennium Point conference centre in Birmingham was using facial-recognition surveillance “at the request of law enforcement”. In the privacy policy on Millennium Point’s website, it confirms it does “sometimes use facial recognition software at the request of law enforcement authorities”. It has not responded to a request for further comment.

Earlier this week it emerged the privately owned Kings Cross estate in London was using facial recognition, and Canary Wharf is considering following suit.

Information Commissioner Elizabeth Denham has since launched an investigation, saying she remains “deeply concerned about the growing use of facial recognition technology in public spaces, not only by law enforcement agencies but also increasingly by the private sector”.

The Metropolitan Police’s use of the tech was recently slammed as highly inaccurate and “unlawful”, according to an independent report by researchers from the University of Essex.

Silkie Carlo, director of Big Brother Watch, said: “There is an epidemic of facial recognition in the UK.

“The collusion between police and private companies in building these surveillance nets around popular spaces is deeply disturbing. Facial recognition is the perfect tool of oppression and the widespread use we’ve found indicates we’re facing a privacy emergency.

“We now know that many millions of innocent people will have had their faces scanned with this surveillance without knowing about it, whether by police or by private companies.

“The idea of a British museum secretly scanning the faces of children visiting an exhibition on the first emperor of China is chilling. There is a dark irony that this authoritarian surveillance tool is rarely seen outside of China.”

Carlo urged Parliament to follow in the footsteps of legislators in the US and “ban this authoritarian surveillance from public spaces”.

Source: And you thought the cops were bad… Civil rights group warns of facial recog ‘epidemic’ across UK private sites • The Register

YouTube shuts down music companies’ use of manual copyright claims to steal creator revenue, troll block videos

Going forward, copyright owners will no longer be able to monetize creator videos with very short or unintentional uses of music via YouTube’s “Manual Claiming” tool. Instead, they can choose to prevent the other party from monetizing the video or they can block the content. However, YouTube expects that by removing the option to monetize these sorts of videos themselves, some copyright holders will instead just leave them alone.

“One concerning trend we’ve seen is aggressive manual claiming of very short music clips used in monetized videos. These claims can feel particularly unfair, as they transfer all revenue from the creator to the claimant, regardless of the amount of music claimed,” explained YouTube in a blog post.

To be clear, the changes only involve YouTube’s Manual Claiming tool, which is not how the majority of copyright violations are handled today. Instead, the majority of claims are created through YouTube’s Content ID match system. This system scans videos uploaded to YouTube against a database of files submitted to the site by copyright owners. Then, when a match is found, the copyright holder can choose to block the video or monetize it themselves, and track the video’s viewership stats.

The Manual Claiming tool, on the other hand, is only offered to partners who understand how Content ID works. It allows them to search through publicly available YouTube videos to look for those containing their content and apply a claim when a match is found.
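As a rough illustration of the matching step (hypothetical names throughout – the real Content ID system uses robust perceptual fingerprints that survive re-encoding, not the exact chunk hashes used here), matching an upload against a reference database might look like:

```python
# Toy sketch of reference matching in the spirit of Content ID.
import hashlib

def fingerprint(samples, chunk=4):
    """Hash fixed-size chunks of a track into a set of fingerprints."""
    return {
        hashlib.sha256(bytes(samples[i:i + chunk])).hexdigest()
        for i in range(0, len(samples) - chunk + 1, chunk)
    }

# Rights holders submit reference files; the platform stores fingerprints.
reference_db = {"label_song_123": fingerprint([10, 20, 30, 40, 50, 60, 70, 80])}

def scan_upload(samples, db, threshold=0.5):
    """Flag an upload when enough of its chunks match a reference track."""
    fp = fingerprint(samples)
    matches = {}
    for track_id, ref_fp in db.items():
        overlap = len(fp & ref_fp) / max(len(ref_fp), 1)
        if overlap >= threshold:
            matches[track_id] = overlap
    return matches

# An upload containing the full song is matched...
assert scan_upload([10, 20, 30, 40, 50, 60, 70, 80], reference_db)
# ...while unrelated audio is not.
assert not scan_upload([1, 2, 3, 4, 5, 6, 7, 8], reference_db)
```

Dropping `threshold` toward zero models the abuse this article describes: a single matching chunk – a second of music caught in passing – becomes enough to claim the entire video.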

The problem with the Manual Claiming policy is that it was impacting creator content even when the use of the claimed music in videos was very short — even a second long — or unintentional. For example, a creator who was vlogging may have walked past a store that was playing the copyrighted song, but then could lose the revenue from their video as a result.

In April, YouTube said it was looking to address this problem. And just ahead of this year’s VidCon, YouTube announced several well-received changes to the Manual Claiming Policy. It began to require that copyright owners specify the timestamp in the video where the claim occurs — a change that YouTube hoped would create additional friction and cut down on abuse.

Creators were also given tools of their own that let them easily remove the clip or replace the infringing content with free-to-use tracks.

These newly announced changes go even further as they remove the ability for the copyright owner to monetize the infringing video at all. Copyright holders can now only prevent the creators themselves from monetizing the video, or they can block the content. However, given the new creator tools for handling infringing content, it’s likely that creators in those situations would just address the problem content in order to keep their video online.

Source: YouTube shuts down music companies’ use of manual copyright claims to steal creator revenue | TechCrunch

This piece shows you how insane the copyright system is (if you walk past a shop playing some music, that can be considered an infringement!) and how the large music mafia can muscle out small players – just calling something an infringement leads to a Kafkaesque system where you can’t easily appeal. It’s a good thing that this muscling is no longer so easy to do, nor so automated.

Google “open sources” LiveTranscribe – except not really: only gives away Android coding examples to connect to Google’s cloud speech products

Live Transcribe is an Android application that provides real-time captioning for people who are deaf or hard of hearing. This repository contains the Android client libraries for communicating with Google’s Cloud Speech API that are used in Live Transcribe.

[…]
The libraries provided are nearly identical to those running in the production application Live Transcribe. They have been extensively field tested and unit tested. However, the tests themselves are not open sourced at this time.

Github: live-transcribe-speech-engine

This is part of the problem with big companies playing Open Source – it’s not giving away anything useful or of any value, it’s just showing you how to connect to a product you will have to pay for. But Google is playing this one up and pretending that it’s releasing something worthwhile. It’s a scam.

Also Facebook Admits Yes, It Was Listening To Your Private Conversations via Messenger

“Much like Apple and Google, we paused human review of audio more than a week ago,” Facebook told Bloomberg on Tuesday.

The social media giant said that users could choose the option to have their voice chats on Facebook’s Messenger app transcribed. The contractors were testing artificial intelligence technology to make sure the messages were properly transcribed from voice to text.

Facebook has previously said that they are reading your messages on its Messenger App. Last year, Facebook CEO Mark Zuckerberg said that when “sensational messages” are found, “We stop those messages from going through.”

Zuckerberg also told Bloomberg last year that while conversations in the Messenger app are considered private, Facebook “scans them and uses the same tools to prevent abuse there that it does on the social network more generally.”

Source: Facebook Admits It Was Also Listening To Your Private Conversations | Digital Trends

 

Amazon, Google, Apple, Facebook – the five riders of the apocalypse are almost complete!

Ring Promised Swag to Users Who Narc on Their Neighbors

On top of turning their doorbell video feeds into a police surveillance network, Amazon’s home security subsidiary, Ring, also once tried to entice people with swag bags to snitch on their neighbors, Motherboard reported Friday.

The instructions are purportedly all laid out in a 2017 company presentation the publication obtained. Entitled “Digital Neighborhood Watch,” the slideshow apparently promised promo codes for Ring merch and other unspecified “swag” for those who formed watch groups, reported suspicious activity to the police, and raved about the device on social media. What qualifies as suspicious activity, you ask? According to the presentation, “strange vans and cars,” “people posing as utility workers,” and other dastardly deeds such as strolling down the street or peeping in car windows.

The slideshow goes on to outline monthly milestones for the group such as “Convert 10 new users” or “Solve a crime.” Meeting these goals would net the informant tiered Ring perks, as if directing police scrutiny was a rewards program and not an act that can threaten people’s lives, particularly people of color.

These teams would have a “Neighborhood Manager,” a.k.a. a Ring employee, to help talk them through how to share their Ring footage with local officers. The presentation stated that if one of these groups of amateur sleuths succeeded in helping police solve a crime, each member would receive $50 off their next Ring purchase.

When asked about the presentation, a Ring spokesperson told Motherboard the program debuted before Amazon bought the company for a cool $1 billion last year. According to Motherboard, they also said it didn’t run for long:

“This particular idea was not rolled out widely and was discontinued in 2017. We will continue to invent, iterate, and innovate on behalf of our neighbors while aligning with our three pillars of customer privacy, security, and user control. Some of these ideas become official programs, and many others never make it past the testing phase.”

While Ring did eventually launch a neighborhood watch app, it doesn’t offer the same incentives this 2017 program promised, so choosing to narc on your neighbor won’t win you any $50 off coupons.

Ring has been the subject of mounting privacy concerns after reports from earlier this year revealed the company may have accidentally let its employees snoop on customers among other customer complaints. Earlier this week, the company also stated that it has partnerships with “over 225 law enforcement agencies,” in part to help cops figure out how to get their hands on users’ surveillance footage.

Source: Ring Promised Swag to Users Who Narc on Their Neighbors

This is just evil

Talk about unintended consequences: GDPR is an identity thief’s dream ticket to Europeans’ data

In a presentation at the Black Hat security conference in Las Vegas James Pavur, a PhD student at Oxford University who usually specialises in satellite hacking, explained how he was able to game the GDPR system to get all kinds of useful information on his fiancée, including credit card and social security numbers, passwords, and even her mother’s maiden name.

[…]

For social engineering purposes, GDPR has a number of real benefits, Pavur said. Firstly, companies only have a month to reply to requests and face fines of up to 4 per cent of revenues if they don’t comply, so fear of failure and time are strong motivating factors.

In addition, the type of people who handle GDPR requests are usually admin or legal staff, not security people used to social engineering tactics. This makes information gathering much easier.

Over the space of two months Pavur sent out 150 GDPR requests in his fiancée’s name, asking for all and any data on her. In all, 72 per cent of companies replied back, and 83 companies said that they had information on her.

Interestingly, five per cent of responses, mainly from large US companies, said that they weren’t liable to GDPR rules. They may be in for a rude shock if they have a meaningful presence in the EU and come before the courts.

Of the responses, 24 per cent simply accepted an email address and phone number as proof of identity and sent over any files they had on his fiancée. A further 16 per cent requested easily forged ID information and 3 per cent took the rather extreme step of simply deleting her accounts.

A lot of companies asked for her account login details as proof of identity, which is actually a pretty good idea, Pavur opined. But when one gaming company tried it, he simply said he’d forgotten the login and they sent it anyway.

The range of information the companies sent in is disturbing. An educational software company sent Pavur his fiancée’s social security number, date of birth and her mother’s maiden name. Another firm sent over 10 digits of her credit card number, the expiration date, card type and her postcode.

A threat intelligence company – not Have I been Pwned – sent over a list of her email addresses and passwords which had already been compromised in attacks. Several of these still worked on some accounts – Pavur said he has now set her up with a password manager to avoid repetition of this.

“An organisation she had never heard of, and never interacted with, had some of the most sensitive data about her,” he said. “GDPR provided a pretext for anyone in the world to collect that information.”

Fixing this issue is going to take action from both legislators and companies, Pavur said.

First off, lawmakers need to set a standard for what is a legitimate form of ID for GDPR requests. One rail company was happy to send out personal information, accepting a used envelope addressed to the fiancée as proof of identity.

Source: Talk about unintended consequences: GDPR is an identity thief’s dream ticket to Europeans’ data • The Register

Deep links to opt-out of data sharing by 60+ companies – Simple Opt Out

Simple Opt Out is drawing attention to opt-out data sharing and marketing practices that many people aren’t aware of (and most people don’t want), then making it easier to opt out. For example:

  • Target “may share your personal information with other companies which are not part of Target.”
  • Chase may share your “account balances and transaction history … For nonaffiliates to market to you.”
  • Crate & Barrel may share “your customer information [name, postal address and email address, and transactions you conduct on our Website or offline] with other select companies.”

This site makes it easier to opt out of data sharing by 50+ companies (or add a company, or see opt-out tips). Enjoy!

Source: Deep links to opt-out of data sharing by 60+ companies – Simple Opt Out

Skype, Cortana also have humans listening to you. The fine print says it listens to your audio recordings to improve its AI, but it means humans are listening.

If you use Skype’s AI-powered real-time translator, brief recordings of your calls may be passed to human contractors, who are expected to listen in and correct the software’s translations to improve it.

That means 10-second or so snippets of your sweet nothings, mundane details of life, personal information, family arguments, and other stuff discussed on Skype sessions via the translation feature may be eavesdropped on by strangers, who check the translations for accuracy and feed back any changes into the machine-learning system to retrain it.

An acknowledgement that this happens is buried in an FAQ for the translation service, which states:

To help the translation and speech recognition technology learn and grow, sentences and automatic transcripts are analyzed and any corrections are entered into our system, to build more performant services.

Microsoft reckons it is being transparent in the way it processes recordings of people’s Skype conversations. Yet one thing is missing from that above passage: humans. The calls are analyzed by humans. The more technological among you will have assumed living, breathing people are involved at some point in fine-tuning the code and may therefore have to listen to some call samples. However, not everyone will realize strangers are, so to speak, sticking a cup against the wall of rooms to get an idea of what’s said inside, and so it bears reiterating.

Especially seeing as sample recordings of people’s private Skype calls were leaked to Vice, demonstrating that the Windows giant’s security isn’t all that. “The fact that I can even share some of this with you shows how lax things are in terms of protecting user data,” one of the translation service’s contractors told the digital media monolith.

[…]

The translation contractors use a secure and confidential website provided by Microsoft to access samples awaiting playback and analysis, which are, apparently, scrubbed of any information that could identify those recorded and the devices used. For each recording, the human translators are asked to pick from a list of AI-suggested translations that potentially apply to what was overheard, or they can override the list and type in their own.

Also, the same goes for Cortana, Microsoft’s voice-controlled assistant: the human contractors are expected to listen to people’s commands to appraise the code’s ability to understand what was said. The Cortana privacy policy states:

When you use your voice to say something to Cortana or invoke skills, Microsoft uses your voice data to improve Cortana’s understanding of how you speak.

Buried deeper in Microsoft’s all-encompassing fine print is this nugget (with our emphasis):

We also share data with Microsoft-controlled affiliates and subsidiaries; with vendors working on our behalf; when required by law or to respond to legal process; to protect our customers; to protect lives; to maintain the security of our products; and to protect the rights and property of Microsoft and its customers.

[…]

Separately, spokespeople for the US tech titan claimed in an email to El Reg that users’ audio data is only collected and used after they opt in, however, as we’ve said, it’s not clear folks realize they are opting into letting strangers snoop on multi-second stretches of their private calls and Cortana commands. You can also control what voice data Microsoft obtains, and how to delete it, via a privacy dashboard, we were reminded.

In short, Redmond could just say flat out it lets humans pore over your private and sensitive calls and chats, as well as machine-learning software, but it won’t because it knows folks, regulators, and politicians would freak out if they knew the full truth.

This comes as Apple stopped using human contractors to evaluate people’s conversations with Siri, and Google came under fire in Europe for letting workers snoop on its smart speakers and assistant. Basically, as we’ve said, if you’re talking to or via an AI, you’re probably also talking to a person – and perhaps even the police.

Source: Reminder: When a tech giant says it listens to your audio recordings to improve its AI, it means humans are listening. Right, Skype? Cortana? • The Register

Genealogists running into the AVG

The index cards used to connect families in the Benelux provinces, as well as the family trees published online, have been heavily anonymised, which makes it nearly impossible to connect the dots, as you don’t know when someone was born. Pictures and documents are being removed willy-nilly from archives, in contravention of the archive (or openness) laws, which guarantee publication of data after a certain amount of time. Uncertainty about how far the AVG (the Dutch implementation of the GDPR) goes is leading people to take a very heavy-handed view of it.

Source: Stamboomonderzoekers lopen tegen AVG aan – Emerce

Amazon’s Ring Is Teaching Cops How to Persuade Customers to Hand Over Surveillance Footage

According to a new report, Ring is also instructing cops on how to persuade customers to hand over surveillance footage even when they aren’t responsive to police requests.

According to a police memo obtained by Gizmodo and reported last week, Ring has partnerships with “over 225 law enforcement agencies,” and Ring is actively involved in scripting and approving how police communicate those partnerships. As part of these relationships, Ring helps police obtain surveillance footage both by alerting customers in a given area that footage is needed and by asking them to “share videos” with police. In a disclaimer included with the alerts, Ring claims that sharing the footage “is absolutely your choice.”

But according to documents and emails obtained by Motherboard, Ring also instructed police from two departments in New Jersey on how best to coax the footage out of Ring customers through its “neighborhood watch” app Neighbors in situations where police requests for video were not being met, including by providing police with templates for requests and by encouraging them to post often on the Neighbors app as well as on social media.

In one such email obtained by Motherboard, a Bloomfield Police Department detective requested advice from a Ring associate on how best to obtain videos after his requests were not being answered and further asked whether there was “anything that we can blast out to encourage Ring owners to share the videos when requested.”

In this email correspondence, the Ring associate informed the detective that a significant part of customer “opt in for video requests is based on the interaction law enforcement has with the community,” adding that the detective had done a “great job interacting with [community members] and this will be critical in regard to increased opt in rate.”

“The more users you have the more useful information you can collect,” the associate wrote.

Ring did not immediately return our request for comment about the practice of instructing police how to better obtain surveillance footage from its own customers. However, a spokesperson told Motherboard in a statement that the company “offers Neighbors app trainings and best practices for posting and engaging with app users for all law enforcement agencies utilizing the portal tool,” including by providing “templates and educational materials for police departments to utilize at their discretion.”

In addition to Gizmodo’s recent report that Ring is carefully controlling the messaging and implementation of its products with its police departments, a report from GovTech on Friday claimed that Amazon is also helping police work around denied requests by customers to supply their Ring footage. In such instances, according to the report, police can approach Ring’s parent company Amazon, which can provide the footage that police deem vital to an investigation.

“If we ask within 60 days of the recording and as long as it’s been uploaded to the cloud, then Ring can take it out of the cloud and send it to us legally so that we can use it as part of our investigation,” Tony Botti, public information officer for the Fresno County Sheriff’s Office, told GovTech. When contacted by Gizmodo, however, a Ring spokesperson denied this.

Source: Amazon’s Ring Is Teaching Cops How to Persuade Customers to Hand Over Surveillance Footage

Must. Surveil. The. People.