Ring Promised Swag to Users Who Narc on Their Neighbors

On top of turning their doorbell video feeds into a police surveillance network, Amazon’s home security subsidiary, Ring, also once tried to entice people with swag bags to snitch on their neighbors, Motherboard reported Friday.

The instructions are purportedly all laid out in a 2017 company presentation the publication obtained. Entitled “Digital Neighborhood Watch,” the slideshow apparently promised promo codes for Ring merch and other unspecified “swag” for those who formed watch groups, reported suspicious activity to the police, and raved about the device on social media. What qualifies as suspicious activity, you ask? According to the presentation, “strange vans and cars,” “people posing as utility workers,” and other dastardly deeds such as strolling down the street or peeping in car windows.

The slideshow goes on to outline monthly milestones for the group such as “Convert 10 new users” or “Solve a crime.” Meeting these goals would net the informant tiered Ring perks, as if directing police scrutiny were a rewards program and not an act that can threaten people’s lives, particularly the lives of people of color.

These teams would have a “Neighborhood Manager,” a.k.a. a Ring employee, to help talk them through how to share their Ring footage with local officers. The presentation stated that if one of these groups of amateur sleuths succeeded in helping police solve a crime, each member would receive $50 off their next Ring purchase.

When asked about the presentation, a Ring spokesperson told Motherboard the program debuted before Amazon bought the company for a cool $1 billion last year. According to Motherboard, they also said it didn’t run for long:

“This particular idea was not rolled out widely and was discontinued in 2017. We will continue to invent, iterate, and innovate on behalf of our neighbors while aligning with our three pillars of customer privacy, security, and user control. Some of these ideas become official programs, and many others never make it past the testing phase.”

While Ring did eventually launch a neighborhood watch app, it doesn’t offer the same incentives this 2017 program promised, so choosing to narc on your neighbor won’t win you any $50 off coupons.

Ring has been the subject of mounting privacy concerns after reports from earlier this year revealed the company may have accidentally let its employees snoop on customers, among other complaints. Earlier this week, the company also stated that it has partnerships with “over 225 law enforcement agencies,” in part to help cops figure out how to get their hands on users’ surveillance footage.

Source: Ring Promised Swag to Users Who Narc on Their Neighbors

This is just evil

Talk about unintended consequences: GDPR is an identity thief’s dream ticket to Europeans’ data

In a presentation at the Black Hat security conference in Las Vegas, James Pavur, a PhD student at Oxford University who usually specialises in satellite hacking, explained how he was able to game the GDPR system to get all kinds of useful information on his fiancée, including credit card and social security numbers, passwords, and even her mother’s maiden name.

[…]

For social engineering purposes, GDPR has a number of real benefits, Pavur said. Firstly, companies only have a month to reply to requests and face fines of up to 4 per cent of revenues if they don’t comply, so fear of failure and time are strong motivating factors.

In addition, the type of people who handle GDPR requests are usually admin or legal staff, not security people used to social engineering tactics. This makes information gathering much easier.

Over the space of two months, Pavur sent out 150 GDPR requests in his fiancée’s name, asking for any and all data on her. In all, 72 per cent of companies replied, and 83 companies said that they had information on her.

Interestingly, five per cent of responses, mainly from large US companies, said that they weren’t liable to GDPR rules. They may be in for a rude shock if they have a meaningful presence in the EU and come before the courts.

Of the responses, 24 per cent simply accepted an email address and phone number as proof of identity and sent over any files they had on his fiancée. A further 16 per cent requested easily forged ID information and 3 per cent took the rather extreme step of simply deleting her accounts.

A lot of companies asked for her account login details as proof of identity, which is actually a pretty good idea, Pavur opined. But when one gaming company tried it, he simply said he’d forgotten the login and they sent it anyway.
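Pavur’s observation points at a fix companies can apply today: verify a GDPR request against something only the account holder controls, rather than an email address or a forgeable ID scan. Below is a minimal sketch of one such flow, a single-use token sent to the address on file. All names, data stores, and the flow itself are illustrative assumptions, not any company’s actual DSAR process:

```python
import hmac
import secrets
import time

# In-memory stores standing in for a real database (illustrative only).
ACCOUNTS = {"subject@example.com": {"name": "Jane Doe"}}
PENDING = {}  # token -> (email, expiry timestamp)

def start_dsar(email: str):
    """Issue a single-use token for a data subject access request.
    The token would be sent to the *registered* address, so only someone
    who controls that mailbox can complete the request."""
    if email not in ACCOUNTS:
        return None  # don't reveal whether the account exists
    token = secrets.token_urlsafe(32)
    PENDING[token] = (email, time.time() + 3600)  # valid for one hour
    return token  # in practice emailed, never shown to the requester

def complete_dsar(email: str, token: str) -> bool:
    """Release data only if the presented token matches an unexpired one.
    pop() makes every token single-use."""
    entry = PENDING.pop(token, None)
    if entry is None:
        return False
    stored_email, expiry = entry
    # Constant-time comparison avoids leaking information via timing.
    return time.time() < expiry and hmac.compare_digest(stored_email, email)
```

The design point: the proof of identity is control of the account channel, not knowledge of an email address or possession of a scannable document, which is roughly what Pavur says the login-based approach gets right.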

The range of information the companies sent in is disturbing. An educational software company sent Pavur his fiancée’s social security number, date of birth and her mother’s maiden name. Another firm sent over 10 digits of her credit card number, the expiration date, card type and her postcode.

A threat intelligence company – not Have I Been Pwned – sent over a list of her email addresses and passwords which had already been compromised in attacks. Several of these still worked on some accounts – Pavur said he has now set her up with a password manager to avoid a repeat of this.

“An organisation she had never heard of, and never interacted with, had some of the most sensitive data about her,” he said. “GDPR provided a pretext for anyone in the world to collect that information.”

Fixing this issue is going to take action from both legislators and companies, Pavur said.

First off, lawmakers need to set a standard for what is a legitimate form of ID for GDPR requests. One rail company was happy to send out personal information, accepting a used envelope addressed to the fiancée as proof of identity.

Source: Talk about unintended consequences: GDPR is an identity thief’s dream ticket to Europeans’ data • The Register

Deep links to opt-out of data sharing by 60+ companies – Simple Opt Out

Simple Opt Out is drawing attention to opt-out data sharing and marketing practices that many people aren’t aware of (and most people don’t want), then making it easier to opt out. For example:

  • Target “may share your personal information with other companies which are not part of Target.”
  • Chase may share your “account balances and transaction history … For nonaffiliates to market to you.”
  • Crate & Barrel may share “your customer information [name, postal address and email address, and transactions you conduct on our Website or offline] with other select companies.”

This site makes it easier to opt out of data sharing by 50+ companies (or add a company, or see opt-out tips). Enjoy!

Source: Deep links to opt-out of data sharing by 60+ companies – Simple Opt Out

Skype, Cortana also have humans listening to you. The fine print says Microsoft listens to your audio recordings to improve its AI, but that means humans are listening.

If you use Skype’s AI-powered real-time translator, brief recordings of your calls may be passed to human contractors, who are expected to listen in and correct the software’s translations to improve it.

That means 10-second or so snippets of your sweet nothings, mundane details of life, personal information, family arguments, and other stuff discussed on Skype sessions via the translation feature may be eavesdropped on by strangers, who check the translations for accuracy and feed back any changes into the machine-learning system to retrain it.

An acknowledgement that this happens is buried in an FAQ for the translation service, which states:

To help the translation and speech recognition technology learn and grow, sentences and automatic transcripts are analyzed and any corrections are entered into our system, to build more performant services.

Microsoft reckons it is being transparent in the way it processes recordings of people’s Skype conversations. Yet one thing is missing from that above passage: humans. The calls are analyzed by humans. The more technological among you will have assumed living, breathing people are involved at some point in fine-tuning the code and may therefore have to listen to some call samples. However, not everyone will realize strangers are, so to speak, sticking a cup against the wall of rooms to get an idea of what’s said inside, and so it bears reiterating.

Especially seeing as sample recordings of people’s private Skype calls were leaked to Vice, demonstrating that the Windows giant’s security isn’t all that. “The fact that I can even share some of this with you shows how lax things are in terms of protecting user data,” one of the translation service’s contractors told the digital media monolith.

[…]

The translation contractors use a secure and confidential website provided by Microsoft to access samples awaiting playback and analysis, which are, apparently, scrubbed of any information that could identify those recorded and the devices used. For each recording, the human translators are asked to pick from a list of AI-suggested translations that potentially apply to what was overheard, or they can override the list and type in their own.

Also, the same goes for Cortana, Microsoft’s voice-controlled assistant: the human contractors are expected to listen to people’s commands to appraise the code’s ability to understand what was said. The Cortana privacy policy states:

When you use your voice to say something to Cortana or invoke skills, Microsoft uses your voice data to improve Cortana’s understanding of how you speak.

Buried deeper in Microsoft’s all-encompassing fine print is this nugget (with our emphasis):

We also share data with Microsoft-controlled affiliates and subsidiaries; with vendors working on our behalf; when required by law or to respond to legal process; to protect our customers; to protect lives; to maintain the security of our products; and to protect the rights and property of Microsoft and its customers.

[…]

Separately, spokespeople for the US tech titan claimed in an email to El Reg that users’ audio data is only collected and used after they opt in. However, as we’ve said, it’s not clear folks realize they are opting into letting strangers snoop on multi-second stretches of their private calls and Cortana commands. You can also control what voice data Microsoft obtains, and how to delete it, via a privacy dashboard, we were reminded.

In short, Redmond could just say flat out it lets humans pore over your private and sensitive calls and chats, as well as machine-learning software, but it won’t because it knows folks, regulators, and politicians would freak out if they knew the full truth.

This comes as Apple stopped using human contractors to evaluate people’s conversations with Siri, and Google came under fire in Europe for letting workers snoop on its smart speakers and assistant. Basically, as we’ve said, if you’re talking to or via an AI, you’re probably also talking to a person – and perhaps even the police.

Source: Reminder: When a tech giant says it listens to your audio recordings to improve its AI, it means humans are listening. Right, Skype? Cortana? • The Register

Genealogists running into AVG

The index cards used to connect families in Benelux provinces, as well as the family trees published online, have been heavily anonymised, making it nearly impossible to connect the dots when you don’t know when someone was born. Pictures and documents are being removed willy-nilly from archives, in contravention of the archive (openness) laws, which guarantee publication of data after a certain amount of time. Uncertainty about how far the AVG (the Dutch implementation of the GDPR) goes is leading people to take a very heavy-handed view of it.

Source: Stamboomonderzoekers lopen tegen AVG aan – Emerce

Amazon’s Ring Is Teaching Cops How to Persuade Customers to Hand Over Surveillance Footage

According to a new report, Ring is also instructing cops on how to persuade customers to hand over surveillance footage even when they aren’t responsive to police requests.

According to a police memo obtained by Gizmodo and reported last week, Ring has partnerships with “over 225 law enforcement agencies,” and Ring is actively involved in scripting and approving how police communicate those partnerships. As part of these relationships, Ring helps police obtain surveillance footage both by alerting customers in a given area that footage is needed and by asking them to “share videos” with police. In a disclaimer included with the alerts, Ring claims that sharing the footage “is absolutely your choice.”

But according to documents and emails obtained by Motherboard, Ring also instructed police from two departments in New Jersey on how best to coax the footage out of Ring customers through its “neighborhood watch” app Neighbors in situations where police requests for video were not being met, including by providing police with templates for requests and by encouraging them to post often on the Neighbors app as well as on social media.

In one such email obtained by Motherboard, a Bloomfield Police Department detective requested advice from a Ring associate on how best to obtain videos after his requests were not being answered and further asked whether there was “anything that we can blast out to encourage Ring owners to share the videos when requested.”

In this email correspondence, the Ring associate informed the detective that a significant part of customer “opt in for video requests is based on the interaction law enforcement has with the community,” adding that the detective had done a “great job interacting with [community members] and this will be critical in regard to increased opt in rate.”

“The more users you have the more useful information you can collect,” the associate wrote.

Ring did not immediately return our request for comment about the practice of instructing police how to better obtain surveillance footage from its own customers. However, a spokesperson told Motherboard in a statement that the company “offers Neighbors app trainings and best practices for posting and engaging with app users for all law enforcement agencies utilizing the portal tool,” including by providing “templates and educational materials for police departments to utilize at their discretion.”

In addition to Gizmodo’s recent report that Ring is carefully controlling the messaging and implementation of its products with its police departments, a report from GovTech on Friday claimed that Amazon is also helping police work around denied requests by customers to supply their Ring footage. In such instances, according to the report, police can approach Ring’s parent company Amazon, which can provide the footage that police deem vital to an investigation.

“If we ask within 60 days of the recording and as long as it’s been uploaded to the cloud, then Ring can take it out of the cloud and send it to us legally so that we can use it as part of our investigation,” Tony Botti, public information officer for the Fresno County Sheriff’s Office, told GovTech. When contacted by Gizmodo, however, a Ring spokesperson denied this.

Source: Amazon’s Ring Is Teaching Cops How to Persuade Customers to Hand Over Surveillance Footage

Must. Surveill. The. People.

CASE Act Tackles Online Copyright Abuse by allowing copyright “owners” (trolls) to fine anyone they like for $15–30k and force immediate content takedowns with no oversight

In July, members of the federal Senate Judiciary Committee chose to move forward with a bill targeting copyright abuse with a more streamlined way to collect damages, but critics say that it could still allow big online players to push smaller ones around—and even into bankruptcy.

Known as the Copyright Alternative in Small-Claims Enforcement (or CASE) Act, the bill was reintroduced in the House and Senate this spring by a roster of bipartisan lawmakers, with endorsements from such groups as the Copyright Alliance and the Graphic Artists’ Guild.

Under the bill, the U.S. Copyright Office would establish a new ‘small claims-style’ system for seeking damages, overseen by a three-person Copyright Claims Board. Owners of digital content who see that content used without permission would be able to file a claim for damages up to $15,000 for each work infringed, and $30,000 in total, if they registered their content with the Copyright Office, or half those amounts if they did not.

Groups such as the Electronic Frontier Foundation (EFF), Public Knowledge, and the Authors Alliance have opposed the bill, which such critics argue could also end up burdening individuals and small outfits, while potentially giving big companies and patent trolls a leg up.

[…]

In fact, in its present form, the bill establishes that content used without a second thought does fall under the purview of the Copyright Claims Board, though reports of potential $15,000 fines for sharing memes are an obvious exaggeration.

According to the bill, “The Copyright Claims Board may not make any finding that, or consider whether, the infringement was committed willfully in making an award of statutory damages.” The Board would, however, be allowed to consider “whether the infringer has agreed to cease or mitigate the infringing activity” when it comes to awarding statutory damages.

Ernesto Falcon argued in another EFF post last month that the bill would also present censorship risks, given that the current legal system for content “takedown” notices, as defined by the Digital Millennium Copyright Act (DMCA), is already abused.

Under the new, additional framework, Falcon wrote, “[An] Internet platform doesn’t have to honor the counter-notice by putting the posted material back online within 14 days. Already, some of the worst abuses of the DMCA occur with time-sensitive material, as even a false infringement notice can effectively censor that material for up to two weeks during a newsworthy event, for example.”

He continued, “The CASE Act would allow unscrupulous filers to extend that period by months, for a small filing fee.”

Source: CASE Act Tackles Online Copyright Abuse, But Critics Call The Cost Too High

Cops Are Giving Amazon’s Ring Your Real-Time 911 Caller Data, with location info

Amazon-owned home security company Ring is pursuing contracts with police departments that would grant it direct access to real-time emergency dispatch data, Gizmodo has learned.

The California-based company is seeking police departments’ permission to tap into the computer-aided dispatch (CAD) feeds used to automate and improve decisions made by emergency dispatch personnel and cut down on police response times. Ring has requested access to the data streams so it can curate “crime news” posts for its “neighborhood watch” app, Neighbors.

[…]

An internal police email dated April 2019, obtained by Gizmodo last week via a records request, stated that more than 225 police departments have entered into partnerships with Ring. (The company has declined to confirm that, or provide the actual number.) Doing so grants the departments access to a Neighbors “law enforcement portal” through which police can request access to videos captured by Ring doorbell cameras.

Ring says it does not provide the personal information of its customers to the authorities without consent. To wit, the company has positioned itself as an intermediary through which police request access to citizen-captured surveillance footage. When police make a request, they don’t know who receives it, Ring says, until a user chooses to share their video. Users are also prompted with the option to review their footage before turning it over.

[…]

Through its police partnerships, Ring has requested access to CAD, which includes information provided voluntarily by 911 callers, among other types of data automatically collected. CAD data typically comprises details such as names, phone numbers, addresses, medical conditions and potentially other types of personally identifiable information, including, in some instances, GPS coordinates.

In an email Thursday, Ring confirmed it does receive location information, including precise addresses from CAD data, which it does not publish to its app. It denied receiving other forms of personal information.

Ring CAD materials provided to police.

According to some internal documents, police CAD data is received by Ring’s “Neighbors News team” and is then reformatted before being posted on Neighbors in the form of an “alert” to users in the vicinity of the alleged incident.

[…]

Earlier this year, when the Seattle Police Department sought access to CAD software, it triggered a requirement for a privacy impact report under a city ordinance concerning the acquisition of any “surveillance technologies.”

According to the definition adopted by the city, a technology has surveillance capability if it can be used “to collect, capture, transmit, or record data that could be used to surveil, regardless of whether the data is obscured, de-identified, or anonymized before or after collection and regardless of whether technology might be used to obscure or prevent the capturing of certain views or types of information.”

Some CAD systems, such as those marketed by Central Square Technologies (formerly known as TriTech), are used to locate cellular callers by sending text messages that force the return of a phone-location service tracking report. CAD systems also pull in data automatically from phone companies, including ALI information—Automatic Location Identification—which is displayed to dispatch personnel whenever a 911 call is placed. CAD uses these details, along with manually entered information provided by callers, to make fast, initial decisions about which police units and first responders should respond to which calls.

According to Ring’s materials, the direct address, or latitude and longitude, of 911 callers is among the information the Neighbors app requires police to provide, along with the time of the incident, and the category and description of the alleged crime.

Ring said that while it uses CAD data to generate its “News Alerts,” sensitive details, such as the direct address of an incident or the number of police units responding, are never included.
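The sanitisation Ring describes, turning a rich CAD record into a coarse public alert, can be sketched as a whitelist transform: keep only the fields known to be safe to publish and replace the precise location with a named area. All field names below are illustrative assumptions; real CAD schemas vary by vendor and agency:

```python
# Fields a CAD record might carry (illustrative; real schemas vary by vendor).
SENSITIVE = {"caller_name", "phone", "address", "lat", "lon", "units_dispatched"}
PUBLISHABLE = {"incident_time", "category", "description"}

def to_alert(cad_record: dict, area: str) -> dict:
    """Reduce a CAD record to a coarse neighbourhood alert: keep only
    whitelisted fields and swap the precise location for a named area."""
    alert = {k: v for k, v in cad_record.items() if k in PUBLISHABLE}
    alert["area"] = area  # e.g. a neighbourhood name, never an exact address
    return alert

record = {
    "incident_time": "2019-04-02T21:14:00",
    "category": "burglary",
    "description": "Reported break-in",
    "caller_name": "J. Smith",
    "phone": "555-0100",
    "address": "12 Elm St",
    "lat": 36.74, "lon": -119.78,
}
alert = to_alert(record, area="Tower District")
assert not SENSITIVE & alert.keys()  # no sensitive field leaks through
```

A whitelist (keep known-safe fields) is the safer shape here: any new field a CAD vendor adds later is excluded by default, whereas a blacklist would silently publish it.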

Source: Cops Are Giving Amazon’s Ring Your Real-Time 911 Caller Data

Oddly enough, no mention is made of voice recordings. Considering Amazon is building a huge database of voices and people through Alexa, cross-referencing the two should be trivial and would allow Amazon to surveil the population even more closely.

AI system ‘should be recognised as inventor’

An artificial intelligence system should be recognised as the inventor of two ideas in patents filed on its behalf, a team of academics says.

The AI has designed interlocking food containers that are easy for robots to grasp and a warning light that flashes in a rhythm that is hard to ignore.

Patent offices insist innovations are attributed to humans – to avoid legal complications that would arise if corporate inventorship were recognised.

The academics say this is “outdated”.

And it could see patent offices refusing to assign any intellectual property rights for AI-generated creations.

As a result, two professors from the University of Surrey have teamed up with the Missouri-based inventor of Dabus AI to file patents in the system’s name with the relevant authorities in the UK, Europe and US.

‘Inventive act’

Dabus was previously best known for creating surreal art thanks to the way “noise” is mixed into its neural networks to help generate unusual ideas.

Unlike some machine-learning systems, Dabus has not been trained to solve particular problems.

Instead, it seeks to devise and develop new ideas – “what is traditionally considered the mental part of the inventive act”, according to creator Stephen Thaler.

The first patent describes a food container that uses fractal designs to create pits and bulges in its sides. One benefit is that several containers can be fitted together more tightly to help them be transported safely. Another is that it should be easier for robotic arms to pick them up and grip them.

[Image, copyright Ryan Abbott: a diagram showing how a container’s shape could be based on fractals]

The second describes a lamp designed to flicker in a rhythm mimicking patterns of neural activity that accompany the formation of ideas, making it more difficult to ignore.

Law professor Ryan Abbott told BBC News: “These days, you commonly have AIs writing books and taking pictures – but if you don’t have a traditional author, you cannot get copyright protection in the US.

“So with patents, a patent office might say, ‘If you don’t have someone who traditionally meets human-inventorship criteria, there is nothing you can get a patent on.’

“In which case, if AI is going to be how we’re inventing things in the future, the whole intellectual property system will fail to work.”

Instead, he suggested, an AI should be recognised as being the inventor and whoever the AI belonged to should be the patent’s owner, unless they sold it on.

However, Prof Abbott acknowledged lawmakers might need to get involved to settle the matter and that it could take until the mid-2020s to resolve the issue.

A spokeswoman for the European Patent Office indicated that it would be a complex matter.

“It is a global consensus that an inventor can only be a person who makes a contribution to the invention’s conception in the form of devising an idea or a plan in the mind,” she explained.

“The current state of technological development suggests that, for the foreseeable future, AI is… a tool used by a human inventor.

“Any change… [would] have implications reaching far beyond patent law, ie to authors’ rights under copyright laws, civil liability and data protection.

“The EPO is, of course, aware of discussions in interested circles and the wider public about whether AI could qualify as inventor.”

The UK’s Patents Act 1977 currently requires an inventor to be a person, but the Intellectual Property Office is aware of the issue.

“The government believes that AI technology could increase the UK’s GDP by 10% in the next decade, and the IPO is focused on responding to the challenges that come with this growth,” said a spokeswoman.

Source: AI system ‘should be recognised as inventor’ – BBC News

UK made illegal copies and mismanaged Schengen travelers database, gave it away to unauthorised 3rd parties, both business and countries

Authorities in the United Kingdom have made unauthorized copies of data stored inside an EU database for tracking undocumented migrants, missing people, stolen cars, and suspected criminals.

Named the Schengen Information System (SIS), this is an EU-run database that stores information such as names, personal details, photographs, fingerprints, and arrest warrants for 500,000 non-EU citizens denied entry into Europe, over 100,000 missing people, and over 36,000 criminal suspects.

The database was created for the sole purpose of helping EU countries manage access to the passport-free Schengen travel zone.

The UK was granted access to this database in 2015, even though it is not an official member of the Schengen zone.

2018 report revealed violations on the UK’s side

In May 2018, reporters from EU Observer obtained a secret EU report that highlighted years of violations in managing the SIS database by UK authorities.

According to the report, UK officials made copies of this database and stored them at airports and ports in unsafe conditions. Furthermore, by making copies, the UK was always working with outdated versions of the database.

This meant UK officials wouldn’t know in time if a person was removed from SIS, resulting in unnecessary detainments, or if a person was added to the database, allowing criminals to move through the UK and into the Schengen travel zone.

Furthermore, UK authorities also mismanaged and misused this data by providing unsanctioned access to this highly sensitive and secret information to third-party contractors, including US companies (IBM, ATOS, CGI, and others).

The report expressed concerns that, by doing so, the UK indirectly allowed contractors to copy this data as well, or allowed US officials to request the database from a contractor under the US Patriot Act.

Source: UK made illegal copies and mismanaged Schengen travelers database | ZDNet

It’s official: Deploying Facebook’s ‘Like’ button on your website makes you a joint data slurper, puts you in GDPR danger

Organisations that deploy Facebook’s ubiquitous “Like” button on their websites risk falling foul of the General Data Protection Regulation following a landmark ruling by the European Court of Justice.

The EU’s highest court has decided that website owners can be held liable for data collection when using the so-called “social sharing” widgets.

The ruling (PDF) states that employing such widgets would make the organisation a joint data controller, along with Facebook – and judging by its recent record, you don’t want to be anywhere near Zuckerberg’s antisocial network when privacy regulators come a-calling.

‘Purposes of data processing’

According to the court, website owners “must provide, at the time of their collection, certain information to those visitors such as, for example, its identity and the purposes of the [data] processing”.

By extension, the ECJ’s decision also applies to services like Twitter and LinkedIn.

Facebook’s “Like” is far from an innocent expression of affection for a brand or a message: its primary purpose is to track individuals across websites, and permit data collection even when they are not explicitly using any of Facebook’s products.

[…]

On Monday, the ECJ ruled that Fashion ID could be considered a joint data controller “in respect of the collection and transmission to Facebook of the personal data of visitors to its website”.

The court added that it was not, in principle, “a controller in respect of the subsequent processing of those data carried out by Facebook alone”.

‘Consent’

“Thus, with regard to the case in which the data subject has given his or her consent, the Court holds that the operator of a website such as Fashion ID must obtain that prior consent (solely) in respect of operations for which it is the (joint) controller, namely the collection and transmission of the data,” the ECJ said.

The concept of “data controller” – the organisation responsible for deciding how the information collected online will be used – is a central tenet of both the old Data Protection Directive and the GDPR. The controller has more responsibilities than the data processor, which cannot change the purpose or use of the particular dataset. It is the controller, not the processor, who would be held accountable for any GDPR sins.

Source: It’s official: Deploying Facebook’s ‘Like’ button on your website makes you a joint data slurper • The Register

Dutch Ministry of Justice recommends the Dutch government stop using Office 365 and Windows 10

Basically, they don’t like data being shared with third parties that do predictive profiling with it, they don’t like all the telemetry being sent everywhere, and they don’t like MS being able to view and run through content such as text, pictures and videos.

Source: Ministerie van justitie: Stop met gebruik Office 365 – Webwereld (Dutch)

Facebook’s answer to the encryption debate: install spyware with content filters! (updated: maybe not)

The encryption debate is typically framed around the concept of an impenetrable link connecting two services whose communications the government wishes to monitor. The reality, of course, is that the security of that encryption link is entirely separate from the security of the devices it connects. The ability of encryption to shield a user’s communications rests upon the assumption that the sender and recipient’s devices are themselves secure, with the encrypted channel the only weak point.

After all, if either user’s device is compromised, unbreakable encryption is of little relevance.

This is why surveillance operations typically focus on compromising end devices, bypassing the encryption debate entirely. If a user’s cleartext keystrokes and screen captures can be streamed off their device in real-time, it matters little that they are eventually encrypted for transmission elsewhere.

[…]

Facebook announced earlier this year preliminary results from its efforts to move a global mass surveillance infrastructure directly onto users’ devices where it can bypass the protections of end-to-end encryption.

In Facebook’s vision, the end-to-end encryption client itself, such as WhatsApp, will include embedded content moderation and blacklist filtering algorithms. These algorithms will be continually updated from a central cloud service but will run locally on the user’s device, scanning each cleartext message before it is sent and each encrypted message after it is decrypted.
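As a rough sketch of what such on-device blacklist filtering could look like (the names and the hash-matching approach here are illustrative assumptions, not Facebook's actual design):

```python
import hashlib

# Hypothetical blacklist of content fingerprints, pushed down from a
# central cloud service and continually updated on the device.
BLACKLIST = {
    hashlib.sha256(b"some banned content").hexdigest(),
}

def scan_before_encrypt(plaintext: bytes) -> bool:
    """Scan a cleartext message on-device, before it is encrypted.

    Returns True on a blacklist match -- the point at which a client
    built this way could quietly report a copy of the message to the
    server, defeating the end-to-end encryption around it.
    """
    fingerprint = hashlib.sha256(plaintext).hexdigest()
    return fingerprint in BLACKLIST
```

The key point of the sketch is that the scan sees the plaintext: whatever encryption happens afterwards is irrelevant to what the filter (and whoever controls its blacklist) can observe.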

The company even noted that when it detects violations it will need to quietly stream a copy of the formerly encrypted content back to its central servers for further analysis, even if the user objects, acting as a true wiretapping service.

Facebook’s model entirely bypasses the encryption debate by globalizing the current practice of compromising devices: building encryption bypasses directly into the communications clients themselves and deploying what amounts to machine-based wiretaps to billions of users at once.

Asked about the current status of this work and when it might be deployed in the production version of WhatsApp, a company spokesperson declined to comment.

Of course, Facebook’s efforts apply only to its own clients, leaving criminals and terrorists to turn to alternatives like Signal or to bespoke clients whose source code they control.

The problem is that if Facebook’s model succeeds, it will only be a matter of time before device manufacturers and mobile operating system developers embed similar tools directly into devices themselves, making them impossible to escape. Embedding content scanning tools directly into phones would make it possible to scan all apps, including ones like Signal, effectively ending the era of encrypted communications.

Governments would soon use lawful court orders to require companies to build in custom filters of content they are concerned about and automatically notify them of violations, including sending a copy of the offending content.

Rather than grappling with how to defeat encryption, governments will simply be able to harness social media companies to perform their mass surveillance for them, sending them real-time alerts and copies of the decrypted content.

Source: The Encryption Debate Is Over – Dead At The Hands Of Facebook

Update 4/8/19: Bruce Schneier is convinced that this story was concocted from a single source and that Facebook is not in fact currently planning to do this. I’m inclined to agree.

Source: More on Backdooring (or Not) WhatsApp

Apple Contractors Reportedly Overhear Sensitive Information and Sexy Times Thanks to Siri

First Amazon, then Google, and now Apple have all confirmed that their devices are not only listening to you, but that complete strangers may be reviewing the recordings. Thanks to Siri, Apple contractors routinely catch intimate snippets of users’ private lives like drug deals, doctor’s visits, and sexual escapades as part of their quality control duties, the Guardian reported Friday.

As part of its effort to improve the voice assistant, “[a] small portion of Siri requests are analysed to improve Siri and dictation,” Apple told the Guardian. That involves sending these recordings sans Apple IDs to its international team of contractors to rate these interactions based on Siri’s response, amid other factors. The company further explained that these graded recordings make up less than 1 percent of daily Siri activations and that most only last a few seconds.

That isn’t the case, according to an anonymous Apple contractor the Guardian spoke with. The contractor explained that because these quality control procedures don’t weed out cases where a user has unintentionally triggered Siri, contractors end up overhearing conversations users may not ever have wanted to be recorded in the first place. Not only that, details that could potentially identify a user purportedly accompany the recording so contractors can check whether a request was handled successfully.

“There have been countless instances of recordings featuring private discussions between doctors and patients, business deals, seemingly criminal dealings, sexual encounters and so on. These recordings are accompanied by user data showing location, contact details, and app data,” the whistleblower told the Guardian.

And it’s frighteningly easy to activate Siri by accident. Most anything that sounds remotely like “Hey Siri” is likely to do the trick, as the UK’s then Defence Secretary Gavin Williamson found out last year when the assistant piped up as he spoke to Parliament about Syria. The sound of a zipper may even be enough to activate it, according to the contractor. They said that of Apple’s devices, the Apple Watch and HomePod smart speaker most frequently pick up accidental Siri triggers, and recordings can last as long as 30 seconds.

While Apple told the Guardian the information collected from Siri isn’t connected to other data Apple may have on a user, the contractor told a different story:

“There’s not much vetting of who works there, and the amount of data that we’re free to look through seems quite broad. It wouldn’t be difficult to identify the person that you’re listening to, especially with accidental triggers—addresses, names and so on.”

Staff were told to report these accidental activations as technical problems, the worker told the paper, but there wasn’t guidance on what to do if these recordings captured confidential information.

All this makes Siri’s cutesy responses to users’ questions seem far less innocent, particularly its answer when you ask if it’s always listening: “I only listen when you’re talking to me.”

Fellow tech giants Amazon and Google have faced similar privacy scandals recently over recordings from their devices. But while these companies also have employees who monitor their respective voice assistants, they let users revoke permissions for some uses of these recordings. Apple provides no such option in its products.

[The Guardian]

Source: Apple Contractors Reportedly Overhear Sensitive Information and Sexy Times Thanks to Siri

UK cops want years of data from victims’ phones for no real reason, and it is being misused

A report (PDF), released today by Big Brother Watch and eight other civil rights groups, has argued that complainants are being subjected to “suspicion-less, far-reaching digital interrogations when they report crimes to police”.

It added: “Our research shows that these digital interrogations have been used almost exclusively for complainants of rape and serious sexual offences so far. But since police chiefs formalised this new approach to victims’ data through a national policy in April 2019, they claim they can also be used for victims and witnesses of potentially any crime.”

The policy referred to relates to the Digital Processing Notices instituted by forces earlier this year, which victims of crime are asked to sign, allowing police to download large amounts of data, potentially spanning years, from their phones. You can see what one of the forms looks like here (PDF).

[…]

The form is 9 pages long and states ‘if you refused permission… it may not be possible for the investigation or prosecution to continue’. Someone in a vulnerable position is unlikely to feel that they have any real choice. This does not constitute informed consent either.

Rape cases dropped over cops’ demands for search

The report described how “Kent Police gave the entire contents of a victim’s phone to the alleged perpetrator’s solicitor, which was then handed to the defendant”. It also outlined a situation where a 12-year-old rape survivor’s phone was trawled, despite a confession from the perpetrator. The child’s case was delayed for months while the Crown Prosecution Service “insisted on an extensive digital review of his personal mobile phone data”.

Another case mentioned related to a complainant who reported being attacked by a group of strangers. “Despite being willing to hand over relevant information, police asked for seven years’ worth of phone data, and her case was then dropped after she refused.”

Yet another individual said police had demanded her mobile phone after she was raped by a stranger eight years ago, even after they had identified the attacker using DNA evidence.

Source: UK cops blasted over ‘disproportionate’ slurp of years of data from crime victims’ phones • The Register

Researchers Reveal That Anonymized Data Is Easy To Reverse Engineer

Researchers at Imperial College London published a paper in Nature Communications on Tuesday that explored how inadequate current techniques to anonymize datasets are. Before a company shares a dataset, they will remove identifying information such as names and email addresses, but the researchers were able to game this system.

Using a machine learning model and datasets that included up to 15 identifiable characteristics—such as age, gender, and marital status—the researchers were able to accurately reidentify 99.98 percent of Americans in an anonymized dataset, according to the study. For their analyses, the researchers used 210 different data sets that were gathered from five sources including the U.S. government that featured information on more than 11 million individuals. Specifically, the researchers define their findings as a successful effort to propose and validate “a statistical model to quantify the likelihood for a re-identification attempt to be successful, even if the disclosed dataset is heavily incomplete.”
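A toy version of the linkage attack the paper quantifies makes the point concrete (the dataset and attributes below are invented for illustration): even with names and email addresses stripped, a handful of quasi-identifiers can single out one record.

```python
# An "anonymized" dataset: names removed, but quasi-identifiers
# (age, gender, marital status, ZIP code) remain. Data is invented.
anonymized = [
    {"age": 34, "gender": "F", "marital": "married", "zip": "20850", "diagnosis": "asthma"},
    {"age": 34, "gender": "M", "marital": "single",  "zip": "20850", "diagnosis": "diabetes"},
    {"age": 51, "gender": "F", "marital": "married", "zip": "10001", "diagnosis": "flu"},
]

def reidentify(known_attributes: dict) -> list:
    """Return every record consistent with what an attacker already
    knows about the target from public sources (voter rolls, social
    media, and so on)."""
    return [
        row for row in anonymized
        if all(row[key] == value for key, value in known_attributes.items())
    ]

# The attacker knows only a few public facts about the target...
matches = reidentify({"age": 34, "gender": "F", "zip": "20850"})
# ...yet only one record is consistent with them, exposing the
# supposedly anonymous diagnosis.
```

With 15 such attributes instead of three, the chance that more than one person in a dataset shares all of them becomes vanishingly small, which is essentially why the researchers' model reaches a 99.98 percent re-identification rate.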

[…] Even the hypothetical illustrated by the researchers in the study isn’t distant fiction. In June of this year, a patient at the University of Chicago Medical Center filed a class-action lawsuit against both the private research university and Google after the former shared his data with the latter without his consent. The medical center allegedly de-identified the dataset but still gave Google records with the patient’s height, weight, vital signs, information on diseases they have, medical procedures they’ve undergone, medications they are on, and date stamps. The complaint pointed out that, beyond the breach of privacy in sharing intimate data without a patient’s consent, even if the data were somehow anonymized, the tools available to a powerful tech corporation make it fairly easy to reverse engineer that information and identify a patient.

“Companies and governments have downplayed the risk of re-identification by arguing that the datasets they sell are always incomplete,” de Montjoye said in a statement. “Our findings contradict this and demonstrate that an attacker could easily and accurately estimate the likelihood that the record they found belongs to the person they are looking for.”

Source: Researchers Reveal That Anonymized Data Is Easy To Reverse Engineer

Google and Facebook might be tracking your porn history, researchers warn

Being able to access porn on the internet might be convenient, but according to researchers it’s not without its security risks. And they’re not just talking about viruses.

Researchers at Microsoft, Carnegie Mellon University and the University of Pennsylvania analyzed 22,484 porn sites and found that 93% leak user data to a third party. Normally, for extra protection when surfing the web, a user might turn to incognito mode. But, the researchers said, incognito mode only ensures that your browsing history is not stored on your computer.

According to a study released Monday, Google was the No. 1 third-party company. The research found that Google, or one of its subsidiaries like the advertising platform DoubleClick, had trackers on 74% of the pornography sites examined. Facebook had trackers on 10% of the sites.

“In the US, many advertising and video hosting platforms forbid ‘adult’ content. For example, Google’s YouTube is the largest video host in the world, but does not allow pornography,” the researchers wrote. “However, Google has no policies forbidding websites from using their code hosting (Google APIs) or audience measurement tools (Google Analytics). Thus, Google refuses to host porn, but has no limits on observing the porn consumption of users, often without their knowledge.”
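The kind of third-party leakage the researchers measured can be spotted with a simple static check: any script, image, or iframe loaded from a host other than the page's own is a third-party request that hands that host the visitor's IP address and, typically, the page URL via the referrer. A minimal sketch (the embedded HTML is a made-up example, not taken from any real site):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class TrackerFinder(HTMLParser):
    """Collect the hosts of externally loaded scripts, images, and iframes."""

    def __init__(self, page_host: str):
        super().__init__()
        self.page_host = page_host
        self.third_party_hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe"):
            src = dict(attrs).get("src", "")
            host = urlparse(src).hostname
            if host and host != self.page_host:
                self.third_party_hosts.add(host)

# Made-up page markup embedding two common third-party trackers
# alongside a first-party script.
html = """
<html><body>
  <script src="https://www.google-analytics.com/analytics.js"></script>
  <img src="https://www.facebook.com/tr?id=123">
  <script src="/local.js"></script>
</body></html>
"""

finder = TrackerFinder("example.com")
finder.feed(html)
# finder.third_party_hosts now names the external hosts that learn
# about the visit -- regardless of incognito mode, which only stops
# the browser from storing history locally.
```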

Google didn’t immediately respond to requests for comment.

“We don’t want adult websites using our business tools since that type of content is a violation of our Community Standards. When we learn that these types of sites or apps use our tools, we enforce against them,” Facebook spokesperson Joe Osborne said in an email Thursday.

Elena Maris, a Microsoft researcher who worked on the study, told The New York Times the “fact that the mechanism for adult site tracking” is so similar to online retail should be “a huge red flag.”

“This isn’t picking out a sweater and seeing it follow you across the web,” Maris said. “This is so much more specific and deeply personal.”

Source: Google and Facebook might be tracking your porn history, researchers warn – CNET

Permission-greedy apps delayed Android 6 upgrade so they could harvest more user data

Android app developers intentionally delayed updating their applications to work on top of Android 6.0, so they could continue to have access to an older permission-requesting mechanism that granted them easy access to large quantities of user data, research published by the University of Maryland last month has revealed.

The central focus of this research was the release of Android (Marshmallow) 6.0 in October 2015. The main innovation added in Android 6.0 was the ability for users to approve app permissions on a per-permission basis, selecting which permissions they wanted to allow an app to have.

[…]

Google gave app makers three years to update

As the Android ecosystem grew, app developers made a habit of releasing apps that requested a large number of permissions, many of which their apps never used, and which many developers were using to collect user data and later re-selling it to analytics and data tracking firms.

This changed with the release of Android 6.0; however, fearing a major disruption in its app ecosystem, Google gave developers three years to update their apps to work on the newer OS version.

This meant that despite users running a modern Android OS version — like Android 6, 7, or 8 — apps could declare themselves as legacy apps (by declaring an older Android Software Development Kit [SDK]) and work with the older permission-requesting mechanism that was still allowing them to request blanket permissions.
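The legacy mechanism hinged on the SDK version an app declares: anything targeting an SDK below 23 (Android 6.0) kept the old all-or-nothing, install-time permission grants. A sketch of how one might flag such apps from their manifests (the manifest content below is invented; real apps more commonly declare the target SDK in their build files):

```python
import xml.etree.ElementTree as ET

ANDROID_NS = "http://schemas.android.com/apk/res/android"
MARSHMALLOW_SDK = 23  # Android 6.0: per-permission runtime grants

def uses_legacy_permissions(manifest_xml: str) -> bool:
    """True if the app targets an SDK older than Android 6.0 and thus
    still receives blanket permission grants at install time."""
    root = ET.fromstring(manifest_xml)
    uses_sdk = root.find("uses-sdk")
    if uses_sdk is None:
        return True  # no declaration: treated as a legacy app
    target = uses_sdk.get(f"{{{ANDROID_NS}}}targetSdkVersion")
    return target is None or int(target) < MARSHMALLOW_SDK

# An invented manifest declaring a pre-Marshmallow target SDK.
legacy_manifest = f"""
<manifest xmlns:android="{ANDROID_NS}">
  <uses-sdk android:minSdkVersion="15" android:targetSdkVersion="22"/>
</manifest>
"""
```

This is essentially the property the researchers tracked month by month: whether an app's declared target SDK had crossed the Android 6.0 threshold or was deliberately held below it.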

Two-year-long experiment

In research published in June, two University of Maryland academics say they conducted tests between April 2016 and March 2018 to see how many apps initially coded to work on older Android SDKs were updated to work on the newer Android 6.0 SDK.

The research duo says they installed 13,599 of the most popular Android apps on test devices. Each month, the research team would update the apps and scan the apps’ code to see if they were updated for the newer Android 6.0 release.

“We find that an app’s likelihood of delaying upgrade to the latest platform version increases with an increase in the ratio of dangerous permissions sought by the apps, indicating that apps prefer to retain control over access to the users’ private information,” said Raveesh K. Mayya and Siva Viswanathan, the two academics behind the research.

[…]

Additional details about this research can be found in a white paper named “Delaying Informed Consent: An Empirical Investigation of Mobile Apps’ Upgrade Decisions” that was presented in June at the 2019 Workshop on the Economics of Information Security in Boston.

Source: Permission-greedy apps delayed Android 6 upgrade so they could harvest more user data | ZDNet

Microsoft Office 365: Banned in German schools over privacy fears

Schools in the central German state of Hesse have been told it’s now illegal to use Microsoft Office 365.

The state’s data-protection commissioner has ruled that using the popular cloud platform’s standard configuration exposes personal information about students and teachers “to possible access by US officials”.

That might sound like just another instance of European concerns about data privacy or worries about the current US administration’s foreign policy.

But in fact the ruling by the Hesse Office for Data Protection and Information Freedom is the result of several years of domestic debate about whether German schools and other state institutions should be using Microsoft software at all.

Besides the details that German users provide when they’re working with the platform, Microsoft Office 365 also transmits telemetry data back to the US.

Last year, investigators in the Netherlands discovered that that data could include anything from standard software diagnostics to user content from inside applications, such as sentences from documents and email subject lines. All of which contravenes the EU’s General Data Protection Regulation, or GDPR, the Dutch said.

Germany’s own Federal Office for Information Security also recently expressed concerns about telemetry data that the Windows operating system sends.

To allay privacy fears in Germany, Microsoft invested millions in a German cloud service, and in 2017 Hesse authorities said local schools could use Office 365. If German data remained in the country, that was fine, Hesse’s data privacy commissioner, Michael Ronellenfitsch, said.

But in August 2018 Microsoft decided to shut down the German service. So once again, data from local Office 365 users would be data transmitted over the Atlantic. Several US laws, including 2018’s CLOUD Act and 2015’s USA Freedom Act, give the US government more rights to ask for data from tech companies.

It’s actually simple, Austrian digital-rights advocate Max Schrems, who took a case on data transfers between the EU and US to the highest European court this week, tells ZDNet.

School pupils are usually not able to give consent, he points out. “And if data is sent to Microsoft in the US, it is subject to US mass-surveillance laws. This is illegal under EU law.”

Source: Microsoft Office 365: Banned in German schools over privacy fears | ZDNet

FTC Fines Facebook $5 Billion for Cambridge Analytica – not very much considering earnings – and does nothing to curtail future breaches

The Federal Trade Commission, which has been investigating Facebook in the wake of its massive Cambridge Analytica scandal, has voted to approve levying a massive $5 billion fine against the social media giant, according to reporting in both the Wall Street Journal and the Washington Post. It’s the single largest fine against a tech company by the FTC to date, but its inadequacy to curtail future breaches of this sort already has progressive lawmakers furious.

Facebook was aware of a fine of this magnitude potentially coming down the pike for some time, and braced for a hit between $3 billion and $5 billion. The approval vote—which reportedly split down party lines, with three Republicans voting in favor and two Democrats against—was on the higher end of the expected spectrum.

This is expected to cap the agency’s investigation into the data-mining scandal that compromised up to 87 million Facebook users’ personal data. The data was originally harvested using a seemingly benign quiz app on the platform but was later potentially used by Cambridge Analytica, a political consultancy, for the unrelated purpose of political ad targeting.

[…]

While massive by the standards of tech companies, which too frequently get off with a slap on the wrist for lax data privacy practices that endanger users, the FTC’s fine still represents less than a third of the company’s $15.08 billion in revenue from just the first quarter of this year.

Source: FTC Fines Facebook $5 Billion, Democrats Call It a Failure

Palantir’s Top-Secret User Manual for Cops shows how easily they can find scary amounts of information on you and your friends

Through a public records request, Motherboard has obtained a user manual that gives unprecedented insight into Palantir Gotham (Palantir’s other service, Palantir Foundry, is an enterprise data platform), which is used by law enforcement agencies like the Northern California Regional Intelligence Center. The NCRIC serves around 300 communities in northern California and is what is known as a “fusion center,” a Department of Homeland Security intelligence center that aggregates and investigates information from state, local, and federal agencies, as well as some private entities, into large databases that can be searched using software like Palantir.

Fusion centers have become a target of civil liberties groups in part because they collect and aggregate data from so many different public and private entities. The US Department of Justice’s Fusion Center Guidelines list the following as collection targets:

[Chart: fusion center collection targets. Data via US Department of Justice; chart via Electronic Privacy Information Center.]
[Flow chart: how cops can begin to search for records relating to a single person.]

The guide doesn’t just show how Gotham works. It also shows how police are instructed to use the software. The guide seems to have been made by Palantir specifically for California law enforcement, because it includes California-specific examples. We don’t know exactly what information is excluded, or what changes have been made since the document was first created. The first eight pages we received in response to our request are undated, but the remaining twenty-one pages were copyrighted in 2016. (Palantir did not respond to multiple requests for comment.)

The Palantir user guide shows that police can start with almost no information about a person of interest and instantly know extremely intimate details about their lives. The capabilities are staggering, according to the guide:

  • If police have a name that’s associated with a license plate, they can use automatic license plate reader data to find out where that person has been, and when. This can give a complete account of where someone has driven over any time period.
  • With a name, police can also find a person’s email address, phone numbers, current and previous addresses, bank accounts, social security number(s), business relationships, family relationships, and license information like height, weight, and eye color, as long as it’s in the agency’s database.
  • The software can map out a suspect’s family members and business associates and, theoretically, find the above information about them, too.

All of this information is aggregated and synthesized in a way that gives law enforcement nearly omniscient knowledge of any suspect they decide to surveil.

[…]

In order for Palantir to work, it has to be fed data. This can mean public records like business registries, birth certificates, and marriage records, or police records like warrants and parole sheets. Palantir would need other data sources to give police access to information like emails and bank account numbers.

“Palantir Law Enforcement supports existing case management systems, evidence management systems, arrest records, warrant data, subpoenaed data, RMS or other crime-reporting data, Computer Aided Dispatch (CAD) data, federal repositories, gang intelligence, suspicious activity reports, Automated License Plate Reader (ALPR) data, and unstructured data such as document repositories and emails,” Palantir’s website says.

Some data sources—like marriage, divorce, birth, and business records—also implicate other people that are associated with a person personally or through family. So when police are investigating a person, they’re not just collecting a dragnet of emails, phone numbers, business relationships, travel histories, etc. about one suspect. They’re also collecting information for people who are associated with this suspect.

Source: Revealed: This Is Palantir’s Top-Secret User Manual for Cops – VICE

Microsoft stirs suspicions by adding telemetry spyware to security-only update

Under Microsoft’s rules, what it calls “Security-only updates” are supposed to include, well, only security updates, not quality fixes or diagnostic tools. Nearly three years ago, Microsoft split its monthly update packages for Windows 7 and Windows 8.1 into two distinct offerings: a monthly rollup of updates and fixes and, for those who want only those patches that are absolutely essential, a Security-only update package.

What was surprising about this month’s Security-only update, formally titled the “July 9, 2019—KB4507456 (Security-only update),” is that it bundled the Compatibility Appraiser, KB2952664, which is designed to identify issues that could prevent a Windows 7 PC from updating to Windows 10.

Among the fierce corps of Windows Update skeptics, the Compatibility Appraiser tool is to be shunned aggressively. The concern is that these components are being used to prepare for another round of forced updates or to spy on individual PCs. The word telemetry appears in at least one file, and for some observers it’s a short step from seemingly innocuous data collection to outright spyware.

My longtime colleague and erstwhile co-author, Woody Leonhard, noted earlier today that Microsoft appeared to be “surreptitiously adding telemetry functionality” to the latest update:

With the July 2019-07 Security Only Quality Update KB4507456, Microsoft has slipped this functionality into a security-only patch without any warning, thus adding the “Compatibility Appraiser” and its scheduled tasks (telemetry) to the update. The package details for KB4507456 say it replaces KB2952664 (among other updates).

Come on, Microsoft. This is not a security-only update. How do you justify this sneaky behavior? Where is the transparency now?

I had the same question, so I spent the afternoon poking through update files and security bulletins and trying to get an on-the-record response from Microsoft. I got a terse “no comment” from Redmond.

Source: Microsoft stirs suspicions by adding telemetry files to security-only update | ZDNet

Once installed, a new scheduled task is added to the system under Microsoft > Windows > Application Experience.

Google admits leaked private voice conversations, decides to clamp down on whistleblowers, not improve privacy

Google admitted on Thursday that more than 1,000 sound recordings of customer conversations with the Google Assistant were leaked by some of its partners to a Belgian news site.

[…]

“We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data,” Google product manager of search David Monsees said in a blog post. “Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.”

Monsees said its partners only listen to “around 0.2 percent of all audio snippets” and said they are “not associated with user accounts,” even though VRT was able to figure out who was speaking in some of the clips.

Source: Google admits leaked private voice conversations

NB: the CNBC article states that you can delete old conversations, but we know that’s not the case for transcribed Alexa conversations, and we know that if you delete your shopping emails from Gmail, Google keeps your shopping history.

How American Corporations Are Policing Online Speech Worldwide

In the winter of 2010, a 19-year-old Moroccan man named Kacem Ghazzali logged into his email to find a message from Facebook informing him that a group he had created just a few days prior had been removed from the platform without explanation. The group, entitled “Jeunes pour la séparation entre Religion et Enseignement” (or “Youth for the separation of religion and education”), was an attempt by Ghazzali to organize with other secularist youth in the pious North African kingdom, but it was quickly thwarted. When Ghazzali wrote to Facebook to complain about the censorship, he found his personal profile taken down as well.

Back then, there was no appeals system, but after I wrote about the story, Ghazzali was able to get his accounts back. Others haven’t been so lucky. In the years since, I’ve heard from hundreds of activists, artists, and average folks who found their social media posts or accounts deleted—sometimes for violating some arcane proprietary rule, sometimes at the order of a government or court, other times for no discernible reason at all.

The architects of Silicon Valley’s big social media platforms never imagined they’d someday be the global speech police. And yet, as their market share and global user bases have increased over the years, that’s exactly what they’ve become. Today, the number of people who tweet is nearly the population of the United States. About a quarter of the internet’s total users watch YouTube videos, and nearly one-third of the entire world uses Facebook. Regardless of the intent of their founders, none of these platforms were ever merely a means of connecting people; from their early days, they fulfilled greater needs. They are the newspaper, the marketplace, the television. They are the billboard, the community newsletter, and the town square.

And yet, they are corporations, with their own speech rights and ability to set the rules as they like—rules that more often than not reflect the beliefs, however misguided, of their founders.

Source: How American Corporations Are Policing Online Speech Worldwide

T-Mobile Says Customers Can’t Sue Because It Violates Its ToS

T-Mobile screwed over millions of customers when it collected their geolocation data and sold it to third parties without their consent. Now, two of these customers are trying to pursue a class-action lawsuit against the company for the shady practice, but the telecom giant is using another shady practice to force them to settle their dispute behind closed doors.

On Monday, T-Mobile filed a motion to compel the plaintiffs into arbitration, which would keep the complaint out of a public courtroom. See, when you sign a contract or agree to a company’s terms of service with a forced arbitration clause, you are waiving your right to a trial by jury and oftentimes to pursue a class-action lawsuit at all. Settling a dispute in arbitration means having it heard by a third party behind closed doors. And an arbitration clause is buried in T-Mobile’s fine print.

T-Mobile’s terms of service do give customers the option to opt out of arbitration, but the clause is buried within the agreement and states that they “must either complete the opt out form on this website or call toll-free 1-866-323-4405 and provide the information requested.” Customers also have only 30 days to do so after activating their service. After that brief window, users are no longer eligible to opt out.

The plaintiffs, Shawnay Ray and Kantice Joyner of Maryland, filed the class-action complaint against T-Mobile in May. Verizon, Sprint, and AT&T were all also hit with lawsuits that same month for selling customer location data. “The telecommunications carriers are the beginning of a dizzying chain of data selling, where data goes from company to company, and ultimately ends up in the hands of literally anybody who is looking,” the complaint against T-Mobile states. The comment is largely referring to a Vice investigation that found that the phone carriers sold real-time location data to middlemen and that this data sometimes eventually ended up with bounty hunters.

Source: T-Mobile Says Customers Can’t Sue Because It Violates Its ToS