Popular Soccer App Spied on Fans Through Phone Microphone to Catch Bars Pirating Game Streams

Spain’s data protection agency has fined La Liga, the nation’s top professional soccer league, 250,000 euros ($283,000 USD) for using the league’s phone app to spy on its fans. With millions of downloads, the app was reportedly being used to surveil bars in an effort to catch establishments playing matches on television without a license.

The La Liga app provides users with schedules, player rankings, statistics, and league news. It also knows when they’re watching games and where.

According to Spanish newspaper El País, the league told authorities that when its apps detected users were in bars, the apps would record audio through the phones’ microphones. The apps would then use the recording to determine whether the user was watching a soccer game, using technology similar to the Shazam app. If a game was playing in the vicinity, officials could then check whether that bar had a license to show it.
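
As described, the reported pipeline boils down to three checks: is the phone in a bar, does the ambient audio match a live broadcast, and does that venue hold a license. The toy sketch below (invented data, with a trivial stand-in for the Shazam-style acoustic fingerprint) is only meant to make that flow concrete; it is not La Liga’s code.

```python
# Purely illustrative reconstruction of the reported flow, not La Liga's implementation.
LICENSED_VENUES = {"Bar Centro"}  # toy license register


def fingerprint(audio: str) -> str:
    """Stand-in for a Shazam-style acoustic hash of an audio clip."""
    return audio.lower().strip()


def flag_unlicensed(venue: str, in_bar: bool, ambient_audio: str, broadcast_audio: str):
    if not in_bar:
        return None  # geofence: only act when the phone is in a bar
    if fingerprint(ambient_audio) != fingerprint(broadcast_audio):
        return None  # no match: the game is not playing here
    return venue if venue not in LICENSED_VENUES else None  # report venues without a license


print(flag_unlicensed("Bar Sol", True, "GOAL commentary", "goal commentary"))  # -> Bar Sol
```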

So not only was the app spying on fans, but it was also turning those fans into unwitting narcs. El Diario reports that the app has been downloaded 10 million times.

Source: Popular Soccer App Spied on Fans Through Phone Microphone to Catch Bars Pirating Game Streams

The fine is insanely low, especially considering it’s the Spanish billionaires club that has to pay it.

The Russian Government Now Requires Tinder to Hand Over People’s Sexts

Tinder users in Russia may now have to decide whether the perks of dating apps outweigh a disconcerting invasion of privacy. Russian authorities are now requiring that the dating app hand over a wealth of intimate user data, including private messages, if and when they ask for it.

Tinder is the fourth dating app in the nation to be forced to comply with the Russian government’s request for user data, Moscow Times reports, and it’s among 175 services that have already consented to share information with the nation’s Federal Security Service, according to a registry online.

Tinder was added to the list of services that have to comply with the Russian data requests last Friday, May 31. The data Tinder must collect and provide to Russia upon request includes user data and all communications, including audio and video. According to Tinder’s privacy policy, it does collect all your basic profile details, such as your date of birth and gender, as well as the content you publish and your chats with other users, among other information. Which means the Russian government could get its hands on your sexts, your selfies, and even details on where you’ve been or where you might be going if it wants to.

It’s unclear if the possible data requests will apply to just Tinder users within Russia or any users of the dating app, regardless of where they are. If it’s the latter, it points to an unsettling reality in which one nation is able to extend its reach into the intimate data of people all over the world by simply making the request to any complying service that happens to also operate in Russia.

We have reached out to Tinder about which users this applies to, whether it will comply with this request, and what type of data it will share with the Russian authorities. We will update when we hear back. According to the Associated Press, Russia’s communications regulator confirmed on Monday that the company had shared information with it.

The Russian government is not only targeting Tinder. As the lengthy registry online indicates, a large and diverse range of services are already on the list and have been for years. This includes Snap, WeChat, Vimeo, and Badoo, another popular dating app in Russia.

Telegram famously objected to the Russian authorities’ request for its encryption keys last year, which resulted in the government banning the encrypted messaging app. It was an embarrassing mess for Russian internet service providers, which in their attempt to block workarounds for the messaging app, disrupted a litany of services online.

Source: The Russian Government Now Requires Tinder to Hand Over People’s Sexts

EU countries, car manufacturers and navigation systems will share information with each other

Advanced Driver Assistance Systems (ADAS) in cars, such as automatic braking, road-condition detection, blind-spot monitoring and navigation systems, will be sharing their data with European countries, car manufacturers and presumably insurers, under the cloak of making driving safer. I’m sure it will, but I still don’t feel comfortable having the government know where I am at all times and what my driving style is like.

The link below is in Dutch.

Source: EU-landen en autofabrikanten delen informatie voor meer verkeersveiligheid – Emerce

Apple’s privacy schtick is just an act, say folks suing the iGiant: iTunes ‘purchase histories sold’ to highest bidders

Apple has been hit with a class-action complaint in the US accusing the iGiant of playing fast and loose with the privacy of its customers.

The lawsuit [PDF], filed this month in a northern California federal district court, claims the Cupertino music giant gathers data from iTunes – including people’s music purchase history and personal information – then hands that info over to marketers in order to turn a quick buck.

“To supplement its revenues and enhance the formidability of its brand in the eyes of mobile application developers, Apple sells, rents, transmits, and/or otherwise discloses, to various third parties, information reflecting the music that its customers purchase from the iTunes Store application that comes pre-installed on their iPhones,” the filing alleged.

“The data Apple discloses includes the full names and home addresses of its customers, together with the genres and, in some cases, the specific titles of the digitally-recorded music that its customers have purchased via the iTunes Store and then stored in their devices’ Apple Music libraries.”

What’s more, the lawsuit goes on to claim that the data Apple sells is then combined by the marketers with information purchased from other sources to create detailed profiles on individuals that allow for even more targeted advertising.

Additionally, the lawsuit alleges the Music APIs Apple includes in its developer kit can allow third-party devs to harvest similarly detailed logs of user activity for their own use, further violating the privacy of iTunes customers.

The end result, the complaint states, is that Cook and Co are complicit in the illegal harvesting and reselling of personal data, all while pitching iOS and iTunes as bastions of personal privacy and data security.

“Apple’s disclosures of the personal listening information of plaintiffs and the other unnamed Class members were not only unlawful, they were also dangerous because such disclosures allow for the targeting of particularly vulnerable members of society,” the complaint reads.

“For example, any person or entity could rent a list with the names and addresses of all unmarried, college-educated women over the age of 70 with a household income of over $80,000 who purchased country music from Apple via its iTunes Store mobile application. Such a list is available for sale for approximately $136 per thousand customers listed.”

Source: Apple’s privacy schtick is just an act, say folks suing the iGiant: iTunes ‘purchase histories sold’ to highest bidders • The Register

Newly Released Amazon Patent Shows Just How Much Creepier Alexa Can Get

A newly revealed patent application filed by Amazon is raising privacy concerns over an envisaged upgrade to the company’s smart speaker systems. This change would mean that, by default, the devices end up listening to and recording everything you say in their presence.

Alexa, Amazon’s virtual assistant system that runs on the company’s Echo series of smart speakers, works by listening out for a ‘wakeword’ that tells the device to turn on its extended speech recognition systems in order to respond to spoken commands.

[…]

In theory, Alexa-enabled devices will only record what you say directly after the wakeword, which is then uploaded to Amazon, where remote servers use speech recognition to deduce your meaning, then relay commands back to your local speaker.

But one issue in this flow of events, as Amazon’s recently revealed patent application argues, is that anything you say before the wakeword isn’t actually heard.

“A user may not always structure a spoken command in the form of a wakeword followed by a command (eg. ‘Alexa, play some music’),” the Amazon authors explain in their patent application, which was filed back in January, but only became public last week.

“Instead, a user may include the command before the wakeword (eg. ‘Play some music, Alexa’) or even insert the wakeword in the middle of a command (eg. ‘Play some music, Alexa, the Beatles please’). While such phrasings may be natural for a user, current speech processing systems are not configured to handle commands that are not preceded by a wakeword.”

To overcome this barrier, Amazon is proposing an effective workaround: simply record everything the user says all the time, and figure it out later.

Rather than only record what is said after the wakeword is spoken, the system described in the patent application would effectively continuously record all speech, then look for instances of commands issued by a person.
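
To make the proposed change concrete, the sketch below shows one way a “record first, find the wakeword later” loop could work in principle: a rolling buffer retains the last several seconds of audio so that a command spoken before the wakeword is still available when the wakeword arrives. This is purely an illustration under assumed parameters, not Amazon’s implementation.

```python
# Illustrative sketch only: a rolling buffer that keeps the last few seconds of
# audio so speech occurring *before* the wakeword ("Play some music, Alexa")
# can still be sent for command recognition.
from collections import deque

FRAME_SECONDS = 0.5        # assumed frame length
LOOKBACK_SECONDS = 10      # assumed amount of pre-wakeword audio retained
buffer = deque(maxlen=int(LOOKBACK_SECONDS / FRAME_SECONDS))


def upload_for_recognition(frames):
    print(f"uploading {len(frames)} frames ({len(frames) * FRAME_SECONDS:.0f}s of audio)")


def on_audio_frame(frame, contains_wakeword):
    """Called for every captured frame; everything is buffered continuously."""
    buffer.append(frame)
    if contains_wakeword:
        # Ship the whole lookback window, not just what follows the wakeword.
        upload_for_recognition(list(buffer))


# Toy usage: frames are just labels here, with the wakeword heard in frame 25.
for i in range(30):
    on_audio_frame(f"frame-{i}", contains_wakeword=(i == 25))
```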

Source: Newly Released Amazon Patent Shows Just How Much Creepier Alexa Can Get

wow – a continuous spy in your home

Bose headphones spy on listeners, sell that information on without consent or knowledge: lawsuit

Bose Corp spies on its wireless headphone customers by using an app that tracks the music, podcasts and other audio they listen to, and violates their privacy rights by selling the information without permission, a lawsuit charged.

The complaint filed on Tuesday by Kyle Zak in federal court in Chicago seeks an injunction to stop Bose’s “wholesale disregard” for the privacy of customers who download its free Bose Connect app from Apple Inc or Google Play stores to their smartphones.

[…]

After paying $350 for his QuietComfort 35 headphones, Zak said he took Bose’s suggestion to “get the most out of your headphones” by downloading its app, and providing his name, email address and headphone serial number in the process.

But the Illinois resident said he was surprised to learn that Bose sent “all available media information” from his smartphone to third parties such as Segment.io, whose website promises to collect customer data and “send it anywhere.”

Audio choices offer “an incredible amount of insight” into customers’ personalities, behavior, politics and religious views, the complaint said, citing as an example that a person who listens to Muslim prayers might “very likely” be a Muslim.

“Defendants’ conduct demonstrates a wholesale disregard for consumer privacy rights,” the complaint said.

Zak is seeking millions of dollars of damages for buyers of headphones and speakers, including QuietComfort 35, QuietControl 30, SoundLink Around-Ear Wireless Headphones II, SoundLink Color II, SoundSport Wireless and SoundSport Pulse Wireless.

He also wants a halt to the data collection, which he said violates the federal Wiretap Act and Illinois laws against eavesdropping and consumer fraud.

Dore, a partner at Edelson PC, said customers do not see the Bose app’s user service and privacy agreements when signing up, and the privacy agreement says nothing about data collection.

Edelson specializes in suing technology companies over alleged privacy violations.

The case is Zak v Bose Corp, U.S. District Court, Northern District of Illinois, No. 17-02928.

Source: Bose headphones spy on listeners: lawsuit | Article [AMP] | Reuters

Phone makers and carriers receive your location data, friends and more that Facebook pulls from your phone

A confidential Facebook document reviewed by The Intercept shows that the social network courts carriers, along with phone makers — some 100 different companies in 50 countries — by offering the use of even more surveillance data, pulled straight from your smartphone by Facebook itself.

Offered to select Facebook partners, the data includes not just technical information about Facebook members’ devices and use of Wi-Fi and cellular networks, but also their past locations, interests, and even their social groups. This data is sourced not just from the company’s main iOS and Android apps, but from Instagram and Messenger as well. The data has been used by Facebook partners to assess their standing against competitors, including customers lost to and won from them, but also for more controversial uses like racially targeted ads.

[…]

Facebook’s cellphone partnerships are particularly worrisome because of the extensive surveillance powers already enjoyed by carriers like AT&T and T-Mobile: Just as your internet service provider is capable of watching the data that bounces between your home and the wider world, telecommunications companies have a privileged vantage point from which they can glean a great deal of information about how, when, and where you’re using your phone. AT&T, for example, states plainly in its privacy policy that it collects and stores information “about the websites you visit and the mobile applications you use on our networks.” Paired with carriers’ calling and texting oversight, that accounts for just about everything you’d do on your smartphone.

[…]

the Facebook mobile app harvests and packages eight different categories of information […] These categories include use of video, demographics, location, use of Wi-Fi and cellular networks, personal interests, device information, and friend homophily, an academic term of art. A 2017 article on social media friendship from the Journal of the Society of Multivariate Experimental Psychology defined “homophily” in this context as “the tendency of nodes to form relations with those who are similar to themselves.” In other words, Facebook is using your phone to not only provide behavioral data about you to cellphone carriers, but about your friends as well.
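
For readers unfamiliar with the term, homophily is straightforward to compute on a toy friend graph: it is simply the share of friendship ties that connect people who share an attribute. The sketch below uses invented data purely to make the definition concrete.

```python
# Toy illustration of "friend homophily": the fraction of friendship ties that
# connect people with the same attribute (here, a coarse interest label).
interests = {"ana": "sports", "bob": "sports", "cem": "music", "dee": "music", "eli": "sports"}
friendships = [("ana", "bob"), ("ana", "cem"), ("bob", "eli"), ("cem", "dee"), ("dee", "eli")]

same = sum(1 for a, b in friendships if interests[a] == interests[b])
homophily = same / len(friendships)
print(f"homophily = {homophily:.2f}")  # 3 of 5 ties link same-interest users -> 0.60
```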

Source: Facebook’s Work With Phone Carriers Alarms Legal Experts

Bits of Freedom calls for a halt to the shocking amounts of personal data sent out to everyone during Real Time Bidding advertising

During RTB, personal data such as what you read online, what you watch, your location, your sexual orientation, etc. is sent to a whole slew of advertisers so they can select you as an object to show their adverts to. This, together with other profiling information sent, can be used to build up a long-term profile of you and to identify you. There is no control over what happens to this data once it has been sent. This is clearly contrary to the spirit of the AVG / GDPR. The two standard RTB frameworks – Google’s Authorized Buyers and IAB’s OpenRTB – both refuse to accept any responsibility for personal information, while both encourage and facilitate the trade in it.
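
To make concrete what “sent to a whole slew of advertisers” means in practice, here is a simplified sketch of the kind of payload an RTB bid request can carry, written as a Python dict. The field names follow the public OpenRTB 2.x specification, the values are invented, and real requests contain many more fields; every bidder that receives the request sees this data whether or not it wins the auction.

```python
# Simplified, illustrative OpenRTB-style bid request (values invented).
bid_request = {
    "id": "auction-8f3a",
    "site": {
        "page": "https://example-health-site.nl/depressie/hulp",  # what you are reading right now
        "cat": ["IAB7"],                                          # IAB content category: health
    },
    "device": {
        "ua": "Mozilla/5.0 (Android 9; Mobile)",
        "ip": "203.0.113.7",
        "geo": {"lat": 52.37, "lon": 4.89},                       # your location
        "ifa": "4d2f-advertising-id",                             # persistent advertising identifier
    },
    "user": {
        "id": "exchange-user-1234",                               # long-term profile key at the exchange
        "yob": 1984,
        "gender": "F",
    },
}
```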

Source: Bits of Freedom: stop met grootschalig lekken van persoonsgegevens bij real time bidding – Emerce

Google tracks purchase history through Gmail, puts it on https://myaccount.google.com/purchases

Google tracks a lot of what you buy, even if you purchased it elsewhere, like in a store or from Amazon.

Last week, CEO Sundar Pichai wrote a New York Times op-ed that said “privacy cannot be a luxury good.” But behind the scenes, Google is still collecting a lot of personal information from the services you use, such as Gmail, and some of it can’t be easily deleted.

A page called “Purchases” shows an accurate list of many — though not all — of the things I’ve bought dating back to at least 2012. I made these purchases using online services or apps such as Amazon, DoorDash or Seamless, or in stores such as Macy’s, but never directly through Google.

But because the digital receipts went to my Gmail account, Google has a list of info about my buying habits.

[…]

But there isn’t an easy way to remove all of this. You can delete all the receipts in your Gmail inbox and archived messages. But, if you’re like me, you might save receipts in Gmail in case you need them later for returns. There is no way to delete them from Purchases without also deleting them from Gmail — when you click on the “Delete” option in Purchases, it simply guides you back to the Gmail message.

[…]

Google’s privacy page says that only you can view your purchases. But it says “Information about your orders may also be saved with your activity in other Google services” and that you can see and delete this information on a separate “My Activity” page.

Except you can’t. Google’s activity controls page doesn’t give you any ability to manage the data it stores on Purchases.

Google told CNBC you can turn off the tracking entirely, but you have to go to another page for search setting preferences. However, when CNBC tried this, it didn’t work — there was no such option to fully turn off the tracking. It’s weird this isn’t front and center on Google’s new privacy pages or even in Google’s privacy checkup feature.

Google says it doesn’t use your Gmail to show you ads and promises it “does not sell your personal information, which includes your Gmail and Google Account information,” and does “not share your personal information with advertisers, unless you have asked us to.”

But, for reasons that still aren’t clear, it’s pulling that information out of your Gmail and dumping it into a “Purchases” page most people don’t seem to know exists.

Source: Google Gmail tracks purchase history — how to delete it

Your Kid’s Echo Dot May Be Storing Data Even After You ‘Delete’ It

When Amazon launched its kids’ version of the Echo Dot smart speaker a year ago, we hoped it would be a technological blessing, rather than a curse. But as further proof that private information is no longer sacred, a complaint filed yesterday with the Federal Trade Commission alleges that the devices are unlawfully storing kids’ data—even after parents attempt to delete it.

Child and privacy advocacy groups—most notably the Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy—submitted a 96-page complaint with the FTC that alleges, in part, that:

  • Amazon’s process for reviewing personal information places undue burden on parents. (Parents cannot search through the information and must instead read or listen to every voice recording of their child’s interaction with the device in order to review.)
  • Amazon’s parental consent mechanism does not provide assurance that the person giving consent is the parent of the child.
  • Amazon does not disclose which “kid skills”—developed by third parties—collect child personal information or what they collect. It tells parents to read the privacy policy of each kid skill, but the vast majority did not provide individual privacy policies.
  • Amazon does not give notice or obtain parental consent before recording the voices of children who do not live in the home (visiting friends, family, etc.) with the owner of the device. They advertise having the technology to create voice profiles for customized user experiences but fail to use it to stop information collection from unrecognized children.
  • Amazon’s website and literature directs parents trying to delete information collected about their child to the voice recording deletion page and fails to disclose that deleting voice recordings does not delete the underlying information.
  • Amazon keeps children’s personal information longer than reasonably necessary. It only deletes information if a parent explicitly requests deletion by contacting customer service; otherwise it is retained forever.

To further prove its point, the CCFC performed a test in which a child told Alexa to “remember” a fake name, social security number, telephone number, address and food allergy. Alexa remembered and repeated the information, despite several attempts by an adult to delete or edit it.

In response to the complaint, an Amazon spokesperson said in an email, “FreeTime on Alexa and Echo Dot Kids Edition are compliant with the Children’s Online Privacy Protection Act (COPPA),” and directed users to more information on its privacy practices here.

Source: Your Kid’s Echo Dot May Be Storing Data Even After You ‘Delete’ It

China’s Mass Surveillance App Hacked; Code Reveals Specific Criteria for Illegal Oppression of Minorities

Human Rights Watch got their hands on an app used by Chinese authorities in the western Xinjiang region to surveil, track and categorize the entire local population – particularly the 13 million or so Turkic Muslims subject to heightened scrutiny, of which around one million are thought to live in cultural ‘reeducation’ camps.

By “reverse engineering” the code in the “Integrated Joint Operations Platform” (IJOP) app, HRW was able to identify the exact criteria authorities rely on to ‘maintain social order.’ Of note, IJOP is “central to a larger ecosystem of social monitoring and control in the region,” and similar to systems being deployed throughout the entire country.

The platform targets 36 types of people for data collection, from those who have “collected money or materials for mosques with enthusiasm,” to people who stop using smartphones.

[A]uthorities are collecting massive amounts of personal information—from the color of a person’s car to their height down to the precise centimeter—and feeding it into the IJOP central system, linking that data to the person’s national identification card number. Our analysis also shows that Xinjiang authorities consider many forms of lawful, everyday, non-violent behavior—such as “not socializing with neighbors, often avoiding using the front door”—as suspicious. The app also labels the use of 51 network tools as suspicious, including many Virtual Private Networks (VPNs) and encrypted communication tools, such as WhatsApp and Viber. –Human Rights Watch

Another method of tracking is the “Four Associations”:

The IJOP app suggests Xinjiang authorities track people’s personal relationships and consider broad categories of relationship problematic. One category of problematic relationships is called “Four Associations” (四关联), which the source code suggests refers to people who are “linked to the clues of cases” (关联案件线索), people “linked to those on the run” (关联在逃人员), people “linked to those abroad” (关联境外人员), and people “linked to those who are being especially watched” (关联关注人员). –HRW

An extremely detailed look at the data collected and how the app works can be found in the actual report.

[…]

When IJOP detects a deviation from normal parameters, such as when a person uses a phone not registered to them, or when they use more electricity than what would be considered “normal,” or when they travel to an unauthorized area without police permission, the system flags these deviations as “micro-clues,” which authorities use to gauge the level of suspicion a citizen should fall under.

IJOP also monitors personal relationships – some of which are deemed inherently suspicious, such as relatives who have obtained new phone numbers or who maintain foreign links.

Chinese authorities justify the surveillance as a means to fight terrorism. To that end, IJOP checks for terrorist content and “violent audio-visual content” when surveilling phones and software. It also flags “adherents of Wahhabism,” the ultra-conservative form of Islam accused of being a “source of global terrorism.”

[…]

Meanwhile, under the broader “Strike Hard Campaign,” authorities in Xinjiang are also collecting “biometrics, including DNA samples, fingerprints, iris scans, and blood types of all residents in the region ages 12 to 65,” according to the report, which adds that “the authorities require residents to give voice samples when they apply for passports.”

The Strike Hard Campaign has shown complete disregard for the rights of Turkic Muslims to be presumed innocent until proven guilty. In Xinjiang, authorities have created a system that considers individuals suspicious based on broad and dubious criteria, and then generates lists of people to be evaluated by officials for detention. Official documents state that individuals “who ought to be taken, should be taken,” suggesting the goal is to maximize the number of people they find “untrustworthy” in detention. Such people are then subjected to police interrogation without basic procedural protections. They have no right to legal counsel, and some are subjected to torture and mistreatment, for which they have no effective redress, as we have documented in our September 2018 report. The result is Chinese authorities, bolstered by technology, arbitrarily and indefinitely detaining Turkic Muslims in Xinjiang en masse for actions and behavior that are not crimes under Chinese law.

Read the entire report from Human Rights Watch here.

Source: China’s Mass Surveillance App Hacked; Code Reveals Specific Criteria For Illegal Oppression | Zero Hedge

Google gives Chrome third-party cookie controls – which still let Google track you, but stop its rivals from doing the same

Google I/O Google, the largest handler of web cookies, plans to change the way its Chrome browser deals with the tokens, ostensibly to promote greater privacy, following similar steps taken by rival browser makers Apple, Brave, and Mozilla.

At Google I/O 2019 on Tuesday, Google’s web platform director Ben Galbraith announced the plan, which has begun to appear as a hidden opt-in feature in Chrome Canary – a version of Chrome for developer testing – and is expected to evolve over the coming months.

When a website creates a cookie on a visitor’s device for its own domain, it’s called a first-party cookie. Websites may also send responses to visitor page requests that refer to resources on a third-party domain, like a one-pixel tracking image hosted by an advertising site. By attempting to load that invisible image, the visitor enables the ad site to set a third-party cookie, if the user’s browser allows it.
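
As a concrete illustration of that mechanism, here is a minimal sketch of the kind of tracking-pixel endpoint an ad domain might serve, assuming Python with Flask installed; the endpoint, cookie name and logging are invented for illustration. When many unrelated publisher pages embed the same invisible image, the ad domain can recognise the same visitor across all of them via this one cookie.

```python
# Minimal sketch of a third-party tracking pixel: publisher pages embed
# <img src="https://ads.example/pixel.gif">, and the browser sends this request
# to ads.example with any existing ads.example cookie attached.
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)

# 1x1 transparent GIF used as the invisible image payload
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff!\xf9\x04"
         b"\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;")


@app.route("/pixel.gif")
def pixel():
    visitor_id = request.cookies.get("uid") or uuid.uuid4().hex
    # The Referer header tells the ad domain which publisher page embedded the pixel.
    print("seen visitor", visitor_id, "on", request.headers.get("Referer"))
    resp = make_response(PIXEL)
    resp.headers["Content-Type"] = "image/gif"
    # SameSite=None; Secure is required for browsers to send the cookie in a third-party context.
    resp.set_cookie("uid", visitor_id, max_age=31536000, samesite="None", secure=True)
    return resp
```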

Third-party cookies can have legitimate uses. They can help maintain state across sessions. For example, they can provide a way to view an embedded YouTube video (the third party in someone else’s website) without forcing a site visitor who is already logged into YouTube to navigate to YouTube, log in and return.

But they can also be abused, which is why browser makers have implemented countermeasures. Apple, for example, uses WebKit’s Intelligent Tracking Prevention to limit third-party cookies. Brave and Firefox block third-party requests and cookies by default.

[…]

Augustine Fou, a cybersecurity and ad fraud researcher who advises companies about online marketing, told The Register that while Google’s cookie changes will benefit consumer privacy, they’ll be devastating for the rest of the ad tech business.

“It’s really great for Google’s own bottom line because all their users are logged in to various Google services anyway, and Google has consent/permission to advertise and personalize ads with the data,” he said.

In a phone interview with The Register, Johnny Ryan, chief policy and industry relations officer at browser maker Brave, expressed disbelief that Google makes it sound as if it’s opposed to tracking.

“Google isn’t just the biggest tracker, it’s the biggest workaround actor of tracking prevention yet,” he said, pointing to the company’s efforts to bypass tracking protection in Apple’s Safari browser.

In 2012, Google agreed to pay $22.5m to settle Federal Trade Commission charges that it “placed advertising tracking cookies on consumers’ computers, in many cases by circumventing the Safari browser’s default cookie-blocking setting.”

Ryan explained that last year Google implemented a forced login system that automatically signs Chrome into the user’s Google account whenever the user signs into any single Google application, such as Gmail.

“When the browser knows everything you’re doing, you don’t need to track anything else,” he said. “If you’re signed into Chrome, everything goes to Google.”

But other ad companies will know less, which will make them less competitive. “In real-time ad bidding, where Google’s DoubleClick is already by far the biggest player, Google will have a huge advantage because the Google cookie, the only cookie that works across websites, will draw so much more valuable bid responses from advertisers.”

Source: Google puts Chrome on a cookie diet (which just so happens to starve its rivals, cough, cough…) • The Register

EU Votes to Amass a Giant Centralised Database of Biometric Data with 350m people in it

The European Parliament has voted by a significant margin to streamline its systems for managing everything from travel to border security by amassing an enormous information database that will include biometric data and facial images—an issue that has raised significant alarm among privacy advocates.

This system, called the Common Identity Repository (CIR), streamlines a number of functions, including the ability for officials to search a single database rather than multiple ones, with shared biometric data like fingerprints and images of faces, as well as a repository with personally identifying information like date of birth, passport numbers, and more. According to ZDNet, CIR comprises one of the largest tracking databases on the planet.

The CIR will also amass the records of more than 350 million people into a single database containing the identifying information on both citizens and non-citizens of the EU, ZDNet reports. According to Politico Europe, the new system “will grant officials access to a person’s verified identity with a single fingerprint scan.”

This system has received significant criticism from those who argue there are serious privacy rights at stake, with civil liberties advocacy group Statewatch asserting last year that it would lead to the “creation of a Big Brother centralised EU state database.”

The European Parliament has said the system “will make EU information systems used in security, border and migration management interoperable enabling data exchange between the systems.” The idea is that it will also make obtaining information a faster and more effective process, which is either great or nightmarish depending on your trust in government data collection and storage.

[…]

The CIR was approved through two separate votes: one for merging systems used for things related to visas and borders was approved 511 to 123 (with nine abstentions), and the other for streamlining systems used for law enforcement, judicial, migration, and asylum matters, which was approved 510 to 130 (also with nine abstentions). If this sounds like the handiwork of some serious lobbying, you might be correct, as one European Parliament official told Politico Europe.

A European Commission official told the outlet that they didn’t “think anyone understands what they’re voting for.” So that’s reassuring.

Source: EU Votes to Amass a Giant Database of Biometric Data

Because centralised databases are never leaked or hacked. Wait…

Is Alexa Listening? Amazon Employees Can Access Home Addresses, telephone numbers, contacts

An Amazon.com Inc. team auditing Alexa users’ commands has access to location data and can, in some cases, easily find a customer’s home address, according to five employees familiar with the program.

The team, spread across three continents, transcribes, annotates and analyzes a portion of the voice recordings picked up by Alexa. The program, whose existence Bloomberg revealed earlier this month, was set up to help Amazon’s digital voice assistant get better at understanding and responding to commands.

Team members with access to Alexa users’ geographic coordinates can easily type them into third-party mapping software and find home residences, according to the employees, who signed nondisclosure agreements barring them from speaking publicly about the program.

While there’s no indication Amazon employees with access to the data have attempted to track down individual users, two members of the Alexa team expressed concern to Bloomberg that Amazon was granting unnecessarily broad access to customer data that would make it easy to identify a device’s owner.

[…]

Some of the workers charged with analyzing recordings of Alexa customers use an Amazon tool that displays audio clips alongside data about the device that captured the recording. Much of the information stored by the software, including a device ID and customer identification number, can’t be easily linked back to a user.

However, Amazon also collects location data so Alexa can more accurately answer requests, for example suggesting a local restaurant or giving the weather in nearby Ashland, Oregon, instead of distant Ashland, Michigan.

[…]

It’s unclear how many people have access to that system. Two Amazon employees said they believed the vast majority of workers in the Alexa Data Services group were, until recently, able to use the software.

[…]

A second internal Amazon software tool, available to a smaller pool of workers who tag transcripts of voice recordings to help Alexa categorize requests, stores more personal data, according to one of the employees.

After punching in a customer ID number, those workers, called annotators and verifiers, can see the home and work addresses and phone numbers customers entered into the Alexa app when they set up the device, the employee said. If a user has chosen to share their contacts with Alexa, their names, numbers and email addresses also appear in the dashboard.

[…]

Amazon appears to have been restricting the level of access employees have to the system.

One employee said that, as recently as a year ago, an Amazon dashboard detailing a user’s contacts displayed full phone numbers. Now, in that same panel, some digits are obscured.

Amazon further limited access to data after Bloomberg’s April 10 report, two of the employees said. Some data associates, who transcribe, annotate and verify audio recordings, arrived for work to find that they no longer had access to software tools they had previously used in their jobs, these people said. As of press time, their access had not been restored.

Source: Is Alexa Listening? Amazon Employees Can Access Home Addresses – Bloomberg

‘They’re Basically Lying’ – (Mental) Health Apps Caught Secretly Sharing Data

“Free apps marketed to people with depression or who want to quit smoking are hemorrhaging user data to third parties like Facebook and Google — but often don’t admit it in their privacy policies, a new study reports…” writes The Verge.

“You don’t have to be a user of Facebook’s or Google’s services for them to have enough breadcrumbs to ID you,” warns Slashdot schwit1. From the article: By intercepting the data transmissions, they discovered that 92 percent of the 36 apps shared the data with at least one third party — mostly Facebook- and Google-run services that help with marketing, advertising, or data analytics. (Facebook and Google did not immediately respond to requests for comment.) But about half of those apps didn’t disclose that third-party data sharing, for a few different reasons: nine apps didn’t have a privacy policy at all; five apps did but didn’t say the data would be shared this way; and three apps actively said that this kind of data sharing wouldn’t happen. Those last three are the ones that stood out to Steven Chan, a physician at Veterans Affairs Palo Alto Health Care System, who has collaborated with Torous in the past but wasn’t involved in the new study. “They’re basically lying,” he says of the apps.

Part of the problem is the business model for free apps, the study authors write: since insurance might not pay for an app that helps users quit smoking, for example, the only ways for a free app developer to stay afloat are to either sell subscriptions or sell data. And if that app is branded as a wellness tool, the developers can skirt laws intended to keep medical information private.
A few apps even shared what The Verge calls “very sensitive information” like self-reports about substance use and user names.

Source: ‘They’re Basically Lying’ – Mental Health Apps Caught Secretly Sharing Data – Slashdot

Personal information on sites about faith, illness, sexual orientation, addiction, schools in NL is directly passed on to advertisers without GDPR consent.

Websites offering information on sensitive subjects are massively flouting the privacy law, says the Consumentenbond (the Dutch consumer association). Many sites place advertising-network cookies without consent, handing those networks highly personal information about their visitors.

In March and April, researchers from the Consumentenbond searched for subjects in the categories faith, youth, medical and sexual orientation. Search queries about depression, addiction, sexual orientation and cancer, among other topics, led them to 106 websites.

Almost half of those sites placed one or more advertising cookies, almost always from Google, immediately on arrival and thus without the visitor’s consent. Websites such as CIP.nl, Refoweb.nl and scholieren.com placed dozens of them. Ouders.nl went furthest of all and placed no fewer than 37 cookies.

A sizeable number of mental health care institutions also stood out. Among others, ggzdrenthe.nl, connection-sggz.nl, parnassiagroep.nl and lentis.nl tracked their visitors’ browsing behaviour without asking and passed this information on to Google.

The GDPR (AVG) has now been in force for a year, but according to the consumer association it is worrying how poorly the law is being observed.

Source: ‘Persoonlijke informatie niet veilig bij sites over geloof, ziekte en geaardheid’ – Emerce

Security lapse exposed a Chinese smart city surveillance system

Smart cities are designed to make life easier for their residents: better traffic management by clearing routes, making sure the public transport is running on time and having cameras keeping a watchful eye from above.

But what happens when that data leaks? One such database was open for weeks for anyone to look inside.

Security researcher John Wethington found a smart city database accessible from a web browser without a password. He passed details of the database to TechCrunch in an effort to get the data secured.

[…]

The system monitors the residents around at least two small housing communities in eastern Beijing, the largest of which is Liangmaqiao, known as the city’s embassy district. The system is made up of several data collection points, including cameras designed to collect facial recognition data.

The exposed data contains enough information to pinpoint where people went, when and for how long, allowing anyone with access to the data — including police — to build up a picture of a person’s day-to-day life.

A portion of the database containing facial recognition scans (Image: supplied)

The database processed various facial details, such as if a person’s eyes or mouth are open, if they’re wearing sunglasses, or a mask — common during periods of heavy smog — and if a person is smiling or even has a beard.

The database also contained a subject’s approximate age as well as an “attractive” score, according to the database fields.

But the capabilities of the system have a darker side, particularly given the complicated politics of China.

The system also uses its facial recognition systems to detect ethnicities and labels them — such as “汉族” for Han Chinese, the main ethnic group of China — and also “维族” — or Uyghur Muslims, an ethnic minority under persecution by Beijing.

While ethnicity data can help police identify suspects in an area even when they don’t have a name to match, it can also be used for abuse.

The Chinese government has detained more than a million Uyghurs in internment camps in the past year, according to a United Nations human rights committee. It’s part of a massive crackdown by Beijing on the ethnic minority group. Just this week, details emerged of an app used by police to track Uyghur Muslims.

We also found that the customer’s system pulls in data from the police and uses that information to detect people of interest or criminal suspects, suggesting it may be a government customer.

Facial recognition scans would match against police records in real time (Image: supplied)

Each time a person is detected, the database would trigger a “warning” noting the date, time, location and a corresponding note. Several records seen by TechCrunch include suspects’ names and their national identification card number.

Source: Security lapse exposed a Chinese smart city surveillance system – TechCrunch

Facebook uploaded the contacts of 1.5m people without permission

On Thursday, at just about the same time as the most highly anticipated government document of the decade was released in Washington D.C., Facebook updated a month-old blog post to note that actually a security incident impacted “millions” of Instagram users and not “tens of thousands” as they said at first.

Last month, Facebook announced that hundreds of millions of Facebook and Facebook Lite account passwords were stored in plaintext in a database exposed to over 20,000 employees.

https://www.theregister.co.uk/2019/04/18/facebook_hoovered_up_15m_address_books_without_permission/

Pregnancy and parenting club Bounty fined £400,000 for shady data sharing practices affecting more than 14 million people

The Information Commissioner’s Office has fined commercial pregnancy and parenting club Bounty some £400,000 for illegally sharing personal details of more than 14 million people.

The organisation, which dishes out advice to expectant and inexperienced parents, has faced criticism over the tactics it uses to sign up new members and was the subject of a campaign to boot its reps from maternity wards.

[…]

the business had also worked as a data brokering service until April last year, distributing data to third parties to then pester unsuspecting folk with electronic direct marketing. By sharing this information and not being transparent about its uses while it was extracting the stuff, Bounty broke the Data Protection Act 1998.

Bounty shared roughly 34.4 million records from June 2017 to April 2018 with credit reference and marketing agencies. Acxiom, Equifax, Indicia and Sky were the four biggest of the 39 companies that Bounty told the ICO it sold stuff to.

This data included details of new mothers and mothers-to-be, as well as very young children’s birth dates and gender.

Source: Pregnancy and parenting club Bounty fined £400,000 for shady data sharing practices • The Register

Sonos finally blasted in complaint to UK privacy watchdog – let’s hope they do something with it

Sonos stands accused of seeking to obtain “excessive” amounts of personal data without valid consent in a complaint filed with the UK’s data watchdog.

The complaint, lodged by tech lawyer George Gardiner in a personal capacity, challenges the Sonos privacy policy’s compliance with the General Data Protection Regulation and the UK’s implementation of that law.

It argues that Sonos had not obtained valid consent from users who were asked to agree to a new privacy policy and had failed to meet privacy-by-design requirements.

The company changed its terms in summer 2017 to allow it to collect more data from its users – ostensibly because it was launching voice services. Sonos said that anyone who didn’t accept the fresh Ts&Cs would no longer be able to download future software updates.

Sonos denied at the time that this was effectively bricking the system, but whichever way you cut it, the move would deprecate the kit of users that didn’t accept the terms. The app controlling the system would also eventually become non-functional.

Gardiner pointed out, however, that security risks and an interest in properly maintaining an expensive system meant there was little practical alternative other than to update the software.

This resulted in a mandatory acceptance of the terms of the privacy policy, rendering any semblance of consent void.

“I have no option but to consent to its privacy policy otherwise I will have over £3,000 worth of useless devices,” he said in a complaint sent to the ICO and shared with The Register.

Users setting up accounts are told: “By clicking on ‘Submit’ you agree to Sonos’ Terms and Conditions and Privacy Policy.” This all-or-nothing approach is contrary to data protection law, he argued.

Sonos collects personal data in the form of name, email address, IP addresses and “information provided by cookies or similar technology”.

The system also collects data on room names assigned by users, the controller device, the operating system of the device a person uses and content source.

Sonos said that collecting and processing this data – a slurp that users cannot opt out of – is necessary for the “ongoing functionality and performance of the product and its ability to interact with various services”.

But Gardiner questioned whether it was really necessary for Sonos to collect this much data, noting that his system worked without it prior to August 2017. He added that he does not own a product that requires voice recognition.

Source: Turn me up some: Smart speaker outfit Sonos blasted in complaint to UK privacy watchdog • The Register

I am in the exact same position – suddenly I had to accept an invasive change of privacy policy and earlier in March I also had to log in with a Sonos account in order to get the kit working (it wouldn’t update without logging in and the app only showed the login and update page). This is not what I signed up for when I bought the (expensive!) products.

A Team At Amazon Is Listening To Recordings Captured By Alexa

Seven people, described as having worked in Amazon’s voice review program, told Bloomberg that they sometimes listen to as many as 1,000 recordings per shift, and that the recordings are associated with the customer’s first name, their device’s serial number, and an account number. Among other clips, these employees and contractors said they’ve reviewed recordings of what seemed to be a woman singing in the shower, a child screaming, and a sexual assault. Sometimes, when recordings were difficult to understand — or when they were amusing — team members shared them in an internal chat room, according to Bloomberg.

In an emailed statement to BuzzFeed News, an Amazon spokesperson wrote that “an extremely small sample of Alexa voice recordings” is annotated, and reviewing the audio “helps us train our speech recognition and natural language understanding systems, so Alexa can better understand your requests, and ensure the service works well for everyone.”

[…]

Amazon’s privacy policy says that Alexa’s software provides a variety of data to the company (including your use of Alexa, your Alexa Interactions, and other Alexa-enabled products), but doesn’t explicitly state how employees themselves interact with the data.

Apple and Google, which make two other popular voice-enabled assistants, also employ humans who review audio commands spoken to their devices; both companies say that they anonymize the recordings and don’t associate them with customers’ accounts. Apple’s Siri sends a limited subset of encrypted, anonymous recordings to graders, who label the quality of Siri’s responses. The process is outlined on page 69 of the company’s security white paper. Google also saves and reviews anonymized audio snippets captured by Google Home or Assistant, and distorts the audio.

On an FAQ page, Amazon states that Alexa is not recording all your conversations. Amazon’s Echo smart speakers and the dozens of other Alexa-enabled devices are designed to capture and process audio, but only when a “wake word” — such as “Alexa,” “Amazon,” “Computer,” or “Echo” — is uttered. However, Alexa devices do occasionally capture audio inadvertently and send that audio to Amazon servers or respond to it with triggered actions. In May 2018, an Echo unintentionally sent audio recordings of a woman’s private conversation to one of her husband’s employees.

Source: A Team At Amazon Is Listening To Recordings Captured By Alexa

Does Google meet its users’ expectations around consumer privacy? This news industry research says no

While the ethics around data collection and consumer privacy have been questioned for years, it wasn’t until Facebook’s Cambridge Analytica scandal that people began to realize how frequently their personal data is shared, transferred, and monetized without their permission.

Cambridge Analytica was by no means an isolated case. Last summer, an AP investigation found that Google’s location tracking remains on even if you turn it off in Google Maps, Search, and other apps. Research from Vanderbilt professor Douglas Schmidt found that Google engages in “passive” data collection, often without the user’s knowledge. His research also showed that Google utilizes data collected from other sources to de-anonymize existing user data.

That’s why we at Digital Content Next, the trade association of online publishers I lead, wrote this Washington Post op-ed, “It isn’t just about Facebook, it’s about Google, too” when Facebook first faced Capitol Hill. It’s also why the descriptor surveillance advertising is increasingly being used to describe Google and Facebook’s advertising businesses, which use personal data to tailor and micro-target ads.

[…]

The results of the study are consistent with our Facebook study: People don’t want surveillance advertising. A majority of consumers indicated they don’t expect to be tracked across Google’s services, let alone be tracked across the web in order to make ads more targeted.

  • Do you expect Google to collect data about a person’s activities on Google platforms (e.g. Android and Chrome) and apps (e.g. Search, YouTube, Maps, Waze)? Yes: 48% / No: 52%
  • Do you expect Google to track a person’s browsing across the web in order to make ads more targeted? Yes: 43% / No: 57%

Nearly two out of three consumers don’t expect Google to track them across non-Google apps, offline activities from data brokers, or via their location history.

  • Do you expect Google to collect data about a person’s locations when a person is not using a Google platform or app? Yes: 34% / No: 66%
  • Do you expect Google to track a person’s usage of non-Google apps in order to make ads more targeted? Yes: 36% / No: 64%
  • Do you expect Google to buy personal information from data companies and merge it with a person’s online usage in order to make ads more targeted? Yes: 33% / No: 67%

There was only one question where a small majority of respondents felt that Google was acting according to their expectations. That was about Google merging data from search queries with other data it collects on its own services. They also don’t expect Google to connect the data back to the user’s personal account, but only by a small majority. Google began doing both of these in 2016 after previously promising it wouldn’t.

  • Do you expect Google to collect and merge data about a person’s search activities with activities on its other applications? Yes: 57% / No: 43%
  • Do you expect Google to connect a variety of user data from Google apps, non-Google apps, and across the web with that user’s personal Google account? Yes: 48% / No: 52%

Google’s personal data collection practices affect the more than 2 billion people who use devices running their Android operating software and hundreds of millions more iPhone users who rely on Google for browsing, maps, or search. Most of them expect Google to collect some data about them in exchange for use of services. However, as our research shows, a significant majority of consumers do not expect Google to track their activities across their lives, their locations, on other sites, and on other platforms. And as the AP discovered, Google continues to do some of this even after consumers explicitly turn off tracking.

Source: Does Google meet its users’ expectations around consumer privacy? This news industry research says no » Nieman Journalism Lab

Dutch medical patient files moved to Google Cloud – MPs want to know if US intelligence agencies can view them

Of course the US can look in, under CLOUD Act rules, because Google is an American company. The files were moved without consent from the patients by Medical Research Data Management, a commercial company, because (they say) the hospitals have given permission. Also, hospitals don’t need to ask for patient permission, because patients gave the hospitals permission when they accepted the electronic patient filing system.

Another concern is the pseudo-anonymisation of the data. For a company like Google, it won’t be particularly hard to match the data back to real people.

Source: Kamerleden eisen duidelijkheid over opslag patiëntgegevens bij Google – Emerce

D.E.A. Secretly Collected Bulk Records of Money-Counter Purchases

WASHINGTON — The Drug Enforcement Administration secretly collected data in bulk about Americans’ purchases of money-counting machines — and took steps to hide the effort from defendants and courts — before quietly shuttering the program in 2013 amid the uproar over the disclosures by the National Security Agency contractor Edward Snowden, an inspector general report found.

Seeking leads about who might be a drug trafficker, the D.E.A. started in 2008 to issue blanket administrative subpoenas to vendors to learn who was buying money counters. The subpoenas involved no court oversight and were not pegged to any particular investigation. The agency collected tens of thousands of records showing the names and addresses of people who bought the devices.

The public version of the report, which portrayed the program as legally questionable, blacked out the device whose purchase the D.E.A. had tracked. But in a slip-up, the report contained one uncensored reference in a section about how D.E.A. policy called for withholding from official case files the fact that agents first learned the names of suspects from its database of its money-counter purchases.

[…]

The report cited field offices’ complaints that the program had wasted time with a high volume of low-quality leads, resulting in agents scrutinizing people “without any connection to illicit activity.” But the D.E.A. eventually refined its analysis to produce fewer but higher-quality leads, and the D.E.A. said it had led to arrests and seizures of drugs, guns, cars and illicit cash.

The idea for the nationwide program originated in a D.E.A. operation in Chicago, when a subpoena for three months of purchase records from a local store led to two arrests and “significant seizures of drugs and related proceeds,” it said.

But Sarah St. Vincent, a Human Rights Watch researcher who flagged the slip-up on Twitter, argued that it was an abuse to suck Americans’ names into a database that would be analyzed to identify criminal suspects, based solely upon their purchase of a lawful product.

[…]

In the spring of 2013, the report said, the D.E.A. submitted its database to a joint operations hub where law enforcement agencies working together on organized crime and drug enforcement could mine it. But F.B.I. agents questioned whether the data had been lawfully acquired, and the bureau banned its officials from gaining access to it.

The F.B.I. agents “explained that running all of these names, which had been collected without foundation, through a massive government database and producing comprehensive intelligence products on any ‘hits,’ which included detailed information on family members and pictures, ‘didn’t sit right,’” the report said.

Source: D.E.A. Secretly Collected Bulk Records of Money-Counter Purchases

Tesla Model 3 records data without your knowledge, sends it to Tesla, and keeps a whole load of other data too.

Many other cars download and store data from users, particularly information from paired cellphones, such as contact information. The practice is widespread enough that the US Federal Trade Commission has issued advisories to drivers warning them about pairing devices to rental cars, and urging them to learn how to wipe their cars’ systems clean before returning a rental or selling a car they owned.

But the researchers’ findings highlight how Tesla is full of contradictions on privacy and cybersecurity. On one hand, Tesla holds car-generated data closely, and has fought customers in court to refrain from giving up vehicle data. Owners must purchase $995 cables and download a software kit from Tesla to get limited information out of their cars via “event data recorders” there, should they need this for legal, insurance or other reasons.

At the same time, crashed Teslas that are sent to salvage can yield unencrypted and personally revealing data to anyone who takes possession of the car’s computer and knows how to extract it.

[…]

In general, cars have become rolling computers that slurp up personal data from users’ mobile devices to enable “infotainment” features or services. Additional data generated by the car enables and trains advanced driver-assistance systems. Major auto-makers that compete with Tesla’s Autopilot include GM’s Cadillac Super Cruise, Nissan Infiniti’s ProPilot Assist and Volvo’s Pilot Assist system.

But GreenTheOnly and Theo noted that in Teslas, dashboard cameras and selfie cameras can record while the car is parked, even in your garage, and there is no way for an owner to know when they may be doing so. The cameras enable desirable features like “sentry mode.” They also enable wipers to “see” raindrops and switch on automatically, for example.

GreenTheOnly explained, “Tesla is not super transparent about what and when they are recording, and storing on internal systems. You can opt out of all data collection. But then you lose [over-the-air software updates] and a bunch of other functionality. So, understandably, nobody does that, and I also begrudgingly accepted it.”

Theo and GreenTheOnly also said Model 3, Model S and Model X vehicles try to upload autopilot and other data to Tesla in the event of a crash. The cars have the capability to upload other data, but the researchers don’t know if and under what circumstances they attempt to do so.

[…]

The company is one of a handful of large corporations to openly court cybersecurity professionals to its networks, urging those who find flaws in Tesla systems to report them in an orderly process — one that gives the company time to fix the problem before it is disclosed. Tesla routinely pays out five-figure sums to individuals who find and successfully report these flaws.

[…]

However, according to two former Tesla service employees who requested anonymity, when owners try to analyze or modify their own vehicles’ systems, the company may flag them as hackers, alerting Tesla to their skills. Tesla then ensures that these flagged people are not among the first to get new software updates.

Source: Tesla Model 3 keeps data like crash videos, location, phone contacts