China’s Mass Surveillance App Hacked; Code Reveals Specific Criteria For Illegal Oppression of Minorities

Human Rights Watch got their hands on an app used by Chinese authorities in the western Xinjiang region to surveil, track and categorize the entire local population – particularly the 13 million or so Turkic Muslims subject to heightened scrutiny, of whom around one million are thought to be held in cultural ‘reeducation’ camps.

By “reverse engineering” the code in the “Integrated Joint Operations Platform” (IJOP) app, HRW was able to identify the exact criteria authorities rely on to ‘maintain social order.’ Of note, IJOP is “central to a larger ecosystem of social monitoring and control in the region,” and similar to systems being deployed throughout the entire country.

The platform targets 36 types of people for data collection, from those who have “collected money or materials for mosques with enthusiasm,” to people who stop using smartphones.

[A]uthorities are collecting massive amounts of personal information—from the color of a person’s car to their height down to the precise centimeter—and feeding it into the IJOP central system, linking that data to the person’s national identification card number. Our analysis also shows that Xinjiang authorities consider many forms of lawful, everyday, non-violent behavior—such as “not socializing with neighbors, often avoiding using the front door”—as suspicious. The app also labels the use of 51 network tools as suspicious, including many Virtual Private Networks (VPNs) and encrypted communication tools, such as WhatsApp and Viber. –Human Rights Watch

Another method of tracking is the “Four Associations”

The IJOP app suggests Xinjiang authorities track people’s personal relationships and consider broad categories of relationship problematic. One category of problematic relationships is called “Four Associations” (四关联), which the source code suggests refers to people who are “linked to the clues of cases” (关联案件线索), people “linked to those on the run” (关联在逃人员), people “linked to those abroad” (关联境外人员), and people “linked to those who are being especially watched” (关联关注人员). –HRW

*An extremely detailed look at the data collected and how the app works can be found in the actual report.

[…]

When IJOP detects a deviation from normal parameters, such as a person using a phone not registered to them, using more electricity than would be considered “normal,” or traveling to an unauthorized area without police permission, the system flags the deviation as a “micro-clue,” which authorities use to gauge the level of suspicion a citizen should fall under.
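
To make the mechanism concrete, here is a minimal, hypothetical sketch of this kind of rule-based flagging. Every field name and threshold below is invented for illustration; none of it comes from the actual IJOP code.

```python
# Hypothetical sketch of rule-based "micro-clue" flagging; all field
# names and thresholds are invented, not taken from the IJOP app.

NORMAL_MONTHLY_KWH = 300  # assumed "normal" electricity ceiling


def micro_clues(person: dict) -> list:
    """Return the list of micro-clues a resident record triggers."""
    clues = []
    if person["phone_owner_id"] != person["id_number"]:
        clues.append("uses a phone not registered to them")
    if person["monthly_kwh"] > NORMAL_MONTHLY_KWH:
        clues.append("abnormal electricity use")
    if person["current_area"] not in person["authorized_areas"]:
        clues.append("travel to unauthorized area")
    return clues


resident = {
    "id_number": "ID-0001",
    "phone_owner_id": "ID-0002",   # phone registered to someone else
    "monthly_kwh": 410,
    "current_area": "area-7",
    "authorized_areas": {"area-1", "area-2"},
}

# Each rule fires on lawful, everyday behavior; the "suspicion" comes
# purely from accumulation.
print(micro_clues(resident))
```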

IJOP also monitors personal relationships – some of which are deemed inherently suspicious, such as relatives who have obtained new phone numbers or who maintain foreign links.

Chinese authorities justify the surveillance as a means to fight terrorism. To that end, IJOP checks for terrorist content and “violent audio-visual content” when surveilling phones and software. It also flags “adherents of Wahhabism,” the ultra-conservative form of Islam accused of being a “source of global terrorism.”

[…]

Meanwhile, under the broader “Strike Hard Campaign,” authorities in Xinjiang are also collecting “biometrics, including DNA samples, fingerprints, iris scans, and blood types of all residents in the region ages 12 to 65,” according to the report, which adds that “the authorities require residents to give voice samples when they apply for passports.”

The Strike Hard Campaign has shown complete disregard for the rights of Turkic Muslims to be presumed innocent until proven guilty. In Xinjiang, authorities have created a system that considers individuals suspicious based on broad and dubious criteria, and then generates lists of people to be evaluated by officials for detention. Official documents state that individuals “who ought to be taken, should be taken,” suggesting the goal is to maximize the number of people they find “untrustworthy” in detention. Such people are then subjected to police interrogation without basic procedural protections. They have no right to legal counsel, and some are subjected to torture and mistreatment, for which they have no effective redress, as we have documented in our September 2018 report. The result is Chinese authorities, bolstered by technology, arbitrarily and indefinitely detaining Turkic Muslims in Xinjiang en masse for actions and behavior that are not crimes under Chinese law.

Read the entire report from Human Rights Watch here.

Source: China’s Mass Surveillance App Hacked; Code Reveals Specific Criteria For Illegal Oppression | Zero Hedge

Google gives Chrome 3rd party cookie controls – which still let Google track you, but stop its rivals from doing so

Google I/O Google, the largest handler of web cookies, plans to change the way its Chrome browser deals with the tokens, ostensibly to promote greater privacy, following similar steps taken by rival browser makers Apple, Brave, and Mozilla.

At Google I/O 2019 on Tuesday, Google’s web platform director Ben Galbraith announced the plan, which has begun to appear as a hidden opt-in feature in Chrome Canary – a version of Chrome for developer testing – and is expected to evolve over the coming months.

When a website creates a cookie on a visitor’s device for its own domain, it’s called a first-party cookie. Websites may also send responses to visitor page requests that refer to resources on a third-party domain, like a one-pixel tracking image hosted by an advertising site. By attempting to load that invisible image, the visitor enables the ad site to set a third-party cookie, if the user’s browser allows it.
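
A minimal sketch of that mechanism, using an invented ad domain: the “invisible image” is just an HTTP response, and the third-party cookie rides along in a Set-Cookie header.

```python
# Sketch of a third-party tracking pixel server. A page on any site can
# embed <img src="http://ads.example:8000/pixel.gif">; when the browser
# fetches it, this server attaches a cookie scoped to its own
# (third-party) domain. The domain and ID scheme are invented.
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer


class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 204: no body needed (real trackers often return a 1x1 GIF);
        # the Set-Cookie header is the whole point of the request.
        self.send_response(204)
        self.send_header("Set-Cookie",
                         f"uid={uuid.uuid4()}; Max-Age=31536000; Path=/")
        self.end_headers()


# Every later fetch of the pixel, from any embedding site, sends the
# uid back, letting the ad domain link visits across the web.
HTTPServer(("", 8000), PixelHandler).serve_forever()
```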

Third-party cookies can have legitimate uses. They can help maintain state across sessions. For example, they can provide a way to view an embedded YouTube video (the third party in someone else’s website) without forcing a site visitor already logged into YouTube to navigate to YouTube, log in and return.

But they can also be abused, which is why browser makers have implemented countermeasures. Apple, for example, uses WebKit’s Intelligent Tracking Prevention to limit third-party cookies. Brave and Firefox block third-party requests and cookies by default.

[…]

Augustine Fou, a cybersecurity and ad fraud researcher who advises companies about online marketing, told The Register that while Google’s cookie changes will benefit consumer privacy, they’ll be devastating for the rest of the ad tech business.

“It’s really great for Google’s own bottom line because all their users are logged in to various Google services anyway, and Google has consent/permission to advertise and personalize ads with the data,” he said.

In a phone interview with The Register, Johnny Ryan, chief policy and industry relations officer at browser maker Brave, expressed disbelief that Google makes it sound as if it’s opposed to tracking.

“Google isn’t just the biggest tracker, it’s the biggest workaround actor of tracking prevention yet,” he said, pointing to the company’s efforts to bypass tracking protection in Apple’s Safari browser.

In 2012, Google agreed to pay $22.5m to settle Federal Trade Commission charges that it “placed advertising tracking cookies on consumers’ computers, in many cases by circumventing the Safari browser’s default cookie-blocking setting.”

Ryan explained that last year Google implemented a forced login system that automatically signs Chrome into the user’s Google account whenever the user signs into any single Google application, such as Gmail.

“When the browser knows everything you’re doing, you don’t need to track anything else,” he said. “If you’re signed into Chrome, everything goes to Google.”

But other ad companies will know less, which will make them less competitive. “In real-time ad bidding, where Google’s DoubleClick is already by far the biggest player, Google will have a huge advantage because the Google cookie, the only cookie across websites, will have so much more valuable bid responses from advertisers.”

Source: Google puts Chrome on a cookie diet (which just so happens to starve its rivals, cough, cough…) • The Register

EU Votes to Amass a Giant Centralised Database of Biometric Data with 350m people in it

The European Parliament has voted by a significant margin to streamline its systems for managing everything from travel to border security by amassing an enormous information database that will include biometric data and facial images—an issue that has raised significant alarm among privacy advocates.

This system, called the Common Identity Repository (CIR), streamlines a number of functions, including the ability for officials to search a single database rather than multiple ones, with shared biometric data like fingerprints and images of faces, as well as a repository with personally identifying information like date of birth, passport numbers, and more. According to ZDNet, CIR comprises one of the largest tracking databases on the planet.

The CIR will also amass the records of more than 350 million people into a single database containing the identifying information on both citizens and non-citizens of the EU, ZDNet reports. According to Politico Europe, the new system “will grant officials access to a person’s verified identity with a single fingerprint scan.”

This system has received significant criticism from those who argue there are serious privacy rights at stake, with civil liberties advocacy group Statewatch asserting last year that it would lead to the “creation of a Big Brother centralised EU state database.”

The European Parliament has said the system “will make EU information systems used in security, border and migration management interoperable enabling data exchange between the systems.” The idea is that it will also make obtaining information a faster and more effective process, which is either great or nightmarish depending on your trust in government data collection and storage.

[…]

The CIR was approved through two separate votes: one for merging systems used for things related to visas and borders was approved 511 to 123 (with nine abstentions), and the other for streamlining systems used for law enforcement, judicial, migration, and asylum matters was approved 510 to 130 (also with nine abstentions). If this sounds like the handiwork of some serious lobbying, you might be correct, as one European Parliament official told Politico Europe.

A European Commission official told the outlet that they didn’t “think anyone understands what they’re voting for.” So that’s reassuring.

Source: EU Votes to Amass a Giant Database of Biometric Data

Because centralised databases are never leaked or hacked. Wait…

Is Alexa Listening? Amazon Employees Can Access Home Addresses, telephone numbers, contacts

An Amazon.com Inc. team auditing Alexa users’ commands has access to location data and can, in some cases, easily find a customer’s home address, according to five employees familiar with the program.

The team, spread across three continents, transcribes, annotates and analyzes a portion of the voice recordings picked up by Alexa. The program, whose existence Bloomberg revealed earlier this month, was set up to help Amazon’s digital voice assistant get better at understanding and responding to commands.

Team members with access to Alexa users’ geographic coordinates can easily type them into third-party mapping software and find home residences, according to the employees, who signed nondisclosure agreements barring them from speaking publicly about the program.

While there’s no indication Amazon employees with access to the data have attempted to track down individual users, two members of the Alexa team expressed concern to Bloomberg that Amazon was granting unnecessarily broad access to customer data that would make it easy to identify a device’s owner.

[…]

Some of the workers charged with analyzing recordings of Alexa customers use an Amazon tool that displays audio clips alongside data about the device that captured the recording. Much of the information stored by the software, including a device ID and customer identification number, can’t be easily linked back to a user.

However, Amazon also collects location data so Alexa can more accurately answer requests, for example suggesting a local restaurant or giving the weather in nearby Ashland, Oregon, instead of distant Ashland, Michigan.

[…]

It’s unclear how many people have access to that system. Two Amazon employees said they believed the vast majority of workers in the Alexa Data Services group were, until recently, able to use the software.

[…]

A second internal Amazon software tool, available to a smaller pool of workers who tag transcripts of voice recordings to help Alexa categorize requests, stores more personal data, according to one of the employees.

After punching in a customer ID number, those workers, called annotators and verifiers, can see the home and work addresses and phone numbers customers entered into the Alexa app when they set up the device, the employee said. If a user has chosen to share their contacts with Alexa, their names, numbers and email addresses also appear in the dashboard.

[…]

Amazon appears to have been restricting the level of access employees have to the system.

One employee said that, as recently as a year ago, an Amazon dashboard detailing a user’s contacts displayed full phone numbers. Now, in that same panel, some digits are obscured.

Amazon further limited access to data after Bloomberg’s April 10 report, two of the employees said. Some data associates, who transcribe, annotate and verify audio recordings, arrived for work to find that they no longer had access to software tools they had previously used in their jobs, these people said. As of press time, their access had not been restored.

Source: Is Alexa Listening? Amazon Employees Can Access Home Addresses – Bloomberg

‘They’re Basically Lying’ – (Mental) Health Apps Caught Secretly Sharing Data

“Free apps marketed to people with depression or who want to quit smoking are hemorrhaging user data to third parties like Facebook and Google — but often don’t admit it in their privacy policies, a new study reports…” writes The Verge.

“You don’t have to be a user of Facebook’s or Google’s services for them to have enough breadcrumbs to ID you,” warns Slashdot schwit1. From the article: By intercepting the data transmissions, they discovered that 92 percent of the 36 apps shared the data with at least one third party — mostly Facebook- and Google-run services that help with marketing, advertising, or data analytics. (Facebook and Google did not immediately respond to requests for comment.) But about half of those apps didn’t disclose that third-party data sharing, for a few different reasons: nine apps didn’t have a privacy policy at all; five apps did but didn’t say the data would be shared this way; and three apps actively said that this kind of data sharing wouldn’t happen. Those last three are the ones that stood out to Steven Chan, a physician at Veterans Affairs Palo Alto Health Care System, who has collaborated with Torous in the past but wasn’t involved in the new study. “They’re basically lying,” he says of the apps.

Part of the problem is the business model for free apps, the study authors write: since insurance might not pay for an app that helps users quit smoking, for example, the only ways for a free app developer to stay afloat are to sell subscriptions or sell data. And if an app is branded as a wellness tool, the developers can skirt laws intended to keep medical information private.

A few apps even shared what The Verge calls “very sensitive information,” like self-reports about substance use and user names.

Source: ‘They’re Basically Lying’ – Mental Health Apps Caught Secretly Sharing Data – Slashdot

Personal information on Dutch sites about faith, illness, sexual orientation, addiction and schools is passed directly to advertisers without GDPR consent

Websites covering sensitive subjects are flouting the privacy law en masse, says the Dutch Consumentenbond (consumers’ association). Many sites place advertising-network cookies without consent, giving those networks access to highly personal information about visitors.

In March and April, researchers at the Consumentenbond searched for topics in the categories of faith, youth, medical issues and sexual orientation. Search queries about depression, addiction, sexual orientation and cancer, among other subjects, led them to 106 websites.

Almost half of those sites placed one or more advertising cookies immediately on arrival, that is, without the visitor’s consent, almost always from Google. Websites such as CIP.nl, Refoweb.nl and scholieren.com placed dozens of them. Ouders.nl took it furthest of all, placing no fewer than 37 cookies.

A considerable number of mental health care institutions also stood out. Among others, ggzdrenthe.nl, connection-sggz.nl, parnassiagroep.nl and lentis.nl tracked their visitors’ browsing behaviour without asking and passed this information on to Google.

The GDPR (AVG) has now been in force for a year, but according to the Consumentenbond it is worrying how poorly the law is being complied with.

Source: ‘Persoonlijke informatie niet veilig bij sites over geloof, ziekte en geaardheid’ – Emerce

Security lapse exposed a Chinese smart city surveillance system

Smart cities are designed to make life easier for their residents: better traffic management by clearing routes, making sure public transport runs on time, and having cameras keep a watchful eye from above.

But what happens when that data leaks? One such database was open for weeks for anyone to look inside.

Security researcher John Wethington found a smart city database accessible from a web browser without a password. He passed details of the database to TechCrunch in an effort to get the data secured.

[…]

The system monitors the residents around at least two small housing communities in eastern Beijing, the largest of which is Liangmaqiao, known as the city’s embassy district. The system is made up of several data collection points, including cameras designed to collect facial recognition data.

The exposed data contains enough information to pinpoint where people went, when and for how long, allowing anyone with access to the data — including police — to build up a picture of a person’s day-to-day life.

[Image: a portion of the database containing facial recognition scans (supplied)]

The database processed various facial details, such as if a person’s eyes or mouth are open, if they’re wearing sunglasses, or a mask — common during periods of heavy smog — and if a person is smiling or even has a beard.

The database also contained a subject’s approximate age as well as an “attractive” score, according to the database fields.

But the capabilities of the system have a darker side, particularly given the complicated politics of China.

The system also uses its facial recognition systems to detect ethnicities and labels them — such as “汉族” for Han Chinese, the main ethnic group of China — and also “维族” — or Uyghur Muslims, an ethnic minority under persecution by Beijing.

Where ethnicities can help police identify suspects in an area even if they don’t have a name to match, the data can be used for abuse.

The Chinese government has detained more than a million Uyghurs in internment camps in the past year, according to a United Nations human rights committee. It’s part of a massive crackdown by Beijing on the ethnic minority group. Just this week, details emerged of an app used by police to track Uyghur Muslims.

We also found that the customer’s system pulls in data from the police and uses that information to detect people of interest or criminal suspects, suggesting it may be a government customer.

[Image: facial recognition scans would match against police records in real time (supplied)]

Each time a person is detected, the database would trigger a “warning” noting the date, time, location and a corresponding note. Several records seen by TechCrunch include suspects’ names and their national identification card number.

Source: Security lapse exposed a Chinese smart city surveillance system – TechCrunch

Facebook uploaded the contacts of 1.5m people without permission

On Thursday, at just about the same time as the most highly anticipated government document of the decade was released in Washington D.C., Facebook updated a month-old blog post to note that actually a security incident impacted “millions” of Instagram users and not “tens of thousands” as they said at first.

Last month, Facebook announced that hundreds of millions of Facebook and Facebook Lite account passwords were stored in plaintext in a database exposed to over 20,000 employees.

Source: https://www.theregister.co.uk/2019/04/18/facebook_hoovered_up_15m_address_books_without_permission/

Pregnancy and parenting club Bounty fined £400,000 for shady sharing of data on more than 14 million people

The Information Commissioner’s Office has fined commercial pregnancy and parenting club Bounty some £400,000 for illegally sharing personal details of more than 14 million people.

The organisation, which dishes out advice to expectant and inexperienced parents, has faced criticism over the tactics it uses to sign up new members and was the subject of a campaign to boot its reps from maternity wards.

[…]

the business had also worked as a data brokering service until April last year, distributing data to third parties to then pester unsuspecting folk with electronic direct marketing. By sharing this information and not being transparent about its uses while it was extracting the stuff, Bounty broke the Data Protection Act 1998.

Bounty shared roughly 34.4 million records from June 2017 to April 2018 with credit reference and marketing agencies. Acxiom, Equifax, Indicia and Sky were the four biggest of the 39 companies that Bounty told the ICO it sold stuff to.

This data included details of new mothers and mothers-to-be, but also very young children’s birth dates and gender.

Source: Pregnancy and parenting club Bounty fined £400,000 for shady data sharing practices • The Register

Sonos finally blasted in complaint to UK privacy watchdog – let’s hope they do something with it

Sonos stands accused of seeking to obtain “excessive” amounts of personal data without valid consent in a complaint filed with the UK’s data watchdog.

The complaint, lodged by tech lawyer George Gardiner in a personal capacity, challenges the Sonos privacy policy’s compliance with the General Data Protection Regulation and the UK’s implementation of that law.

It argues that Sonos had not obtained valid consent from users who were asked to agree to a new privacy policy and had failed to meet privacy-by-design requirements.

The company changed its terms in summer 2017 to allow it to collect more data from its users – ostensibly because it was launching voice services. Sonos said that anyone who didn’t accept the fresh Ts&Cs would no longer be able to download future software updates.

Sonos denied at the time that this was effectively bricking the system, but whichever way you cut it, the move would deprecate the kit of users that didn’t accept the terms. The app controlling the system would also eventually become non-functional.

Gardiner pointed out, however, that security risks and an interest in properly maintaining an expensive system meant there was little practical alternative other than to update the software.

This resulted in a mandatory acceptance of the terms of the privacy policy, rendering any semblance of consent void.

“I have no option but to consent to its privacy policy otherwise I will have over £3,000 worth of useless devices,” he said in a complaint sent to the ICO and shared with The Register.

Users setting up accounts are told: “By clicking on ‘Submit’ you agree to Sonos’ Terms and Conditions and Privacy Policy.” This all-or-nothing approach is contrary to data protection law, he argued.

Sonos collects personal data in the form of name, email address, IP addresses and “information provided by cookies or similar technology”.

The system also collects data on room names assigned by users, the controller device, the operating system of the device a person uses and content source.

Sonos said that collecting and processing this data – a slurp that users cannot opt out of – is necessary for the “ongoing functionality and performance of the product and its ability to interact with various services”.

But Gardiner questioned whether it was really necessary for Sonos to collect this much data, noting that his system worked without it prior to August 2017. He added that he does not own a product that requires voice recognition.

Source: Turn me up some: Smart speaker outfit Sonos blasted in complaint to UK privacy watchdog • The Register

I am in the exact same position – suddenly I had to accept an invasive change of privacy policy and earlier in March I also had to log in with a Sonos account in order to get the kit working (it wouldn’t update without logging in and the app only showed the login and update page). This is not what I signed up for when I bought the (expensive!) products.

A Team At Amazon Is Listening To Recordings Captured By Alexa

Seven people, described as having worked in Amazon’s voice review program, told Bloomberg that they sometimes listen to as many as 1,000 recordings per shift, and that the recordings are associated with the customer’s first name, their device’s serial number, and an account number. Among other clips, these employees and contractors said they’ve reviewed recordings of what seemed to be a woman singing in the shower, a child screaming, and a sexual assault. Sometimes, when recordings were difficult to understand — or when they were amusing — team members shared them in an internal chat room, according to Bloomberg.

In an emailed statement to BuzzFeed News, an Amazon spokesperson wrote that “an extremely small sample of Alexa voice recordings” is annotated, and reviewing the audio “helps us train our speech recognition and natural language understanding systems, so Alexa can better understand your requests, and ensure the service works well for everyone.”

[…]

Amazon’s privacy policy says that Alexa’s software provides a variety of data to the company (including your use of Alexa, your Alexa Interactions, and other Alexa-enabled products), but doesn’t explicitly state how employees themselves interact with the data.

Apple and Google, which make two other popular voice-enabled assistants, also employ humans who review audio commands spoken to their devices; both companies say that they anonymize the recordings and don’t associate them with customers’ accounts. Apple’s Siri sends a limited subset of encrypted, anonymous recordings to graders, who label the quality of Siri’s responses. The process is outlined on page 69 of the company’s security white paper. Google also saves and reviews anonymized audio snippets captured by Google Home or Assistant, and distorts the audio.

On an FAQ page, Amazon states that Alexa is not recording all your conversations. Amazon’s Echo smart speakers and the dozens of other Alexa-enabled devices are designed to capture and process audio, but only when a “wake word” — such as “Alexa,” “Amazon,” “Computer,” or “Echo” — is uttered. However, Alexa devices do occasionally capture audio inadvertently and send that audio to Amazon servers or respond to it with triggered actions. In May 2018, an Echo unintentionally sent audio recordings of a woman’s private conversation to one of her husband’s employees.

Source: A Team At Amazon Is Listening To Recordings Captured By Alexa

Does Google meet its users’ expectations around consumer privacy? This news industry research says no

While the ethics around data collection and consumer privacy have been questioned for years, it wasn’t until Facebook’s Cambridge Analytica scandal that people began to realize how frequently their personal data is shared, transferred, and monetized without their permission.

Cambridge Analytica was by no means an isolated case. Last summer, an AP investigation found that Google’s location tracking remains on even if you turn it off in Google Maps, Search, and other apps. Research from Vanderbilt professor Douglas Schmidt found that Google engages in “passive” data collection, often without the user’s knowledge. His research also showed that Google utilizes data collected from other sources to de-anonymize existing user data.

That’s why we at Digital Content Next, the trade association of online publishers I lead, wrote this Washington Post op-ed, “It isn’t just about Facebook, it’s about Google, too” when Facebook first faced Capitol Hill. It’s also why the descriptor “surveillance advertising” is increasingly being used to describe Google and Facebook’s advertising businesses, which use personal data to tailor and micro-target ads.

[…]

The results of the study are consistent with our Facebook study: People don’t want surveillance advertising. A majority of consumers indicated they don’t expect to be tracked across Google’s services, let alone be tracked across the web in order to make ads more targeted.

Do you expect Google to collect data about a person’s activities on Google platforms (e.g. Android and Chrome) and apps (e.g. Search, YouTube, Maps, Waze)?

Yes: 48%
No: 52%
Do you expect Google to track a person’s browsing across the web in order to make ads more targeted?

Yes: 43%
No: 57%

Nearly two out of three consumers don’t expect Google to track them across non-Google apps, offline activities from data brokers, or via their location history.

Do you expect Google to collect data about a person’s locations when a person is not using a Google platform or app?

Yes: 34%
No: 66%
Do you expect Google to track a person’s usage of non-Google apps in order to make ads more targeted?

Yes: 36%
No: 64%
Do you expect Google to buy personal information from data companies and merge it with a person’s online usage in order to make ads more targeted?

Yes: 33%
No: 67%

There was only one question where a small majority of respondents felt that Google was acting according to their expectations. That was about Google merging data from search queries with other data it collects on its own services. They also don’t expect Google to connect the data back to the user’s personal account, but only by a small majority. Google began doing both of these in 2016 after previously promising it wouldn’t.

Do you expect Google to collect and merge data about a person’s search activities with activities on its other applications?

Yes: 57%
No: 43%
Do you expect Google to connect a variety of user data from Google apps, non-Google apps, and across the web with that user’s personal Google account?

Yes: 48%
No: 52%

Google’s personal data collection practices affect the more than 2 billion people who use devices running their Android operating software and hundreds of millions more iPhone users who rely on Google for browsing, maps, or search. Most of them expect Google to collect some data about them in exchange for use of services. However, as our research shows, a significant majority of consumers do not expect Google to track their activities across their lives, their locations, on other sites, and on other platforms. And as the AP discovered, Google continues to do some of this even after consumers explicitly turn off tracking.

Source: Does Google meet its users’ expectations around consumer privacy? This news industry research says no » Nieman Journalism Lab

Dutch medical patient files moved to Google Cloud – MPs want to know if US intelligence agencies can view them

Of course the US can look in, under the CLOUD Act, because Google is an American company. The files were moved without patient consent by Medical Research Data Management, a commercial company, because (they say) the hospitals have given permission. The hospitals, in turn, did not need to ask for patient permission, because patients supposedly gave it when they accepted the electronic patient records system.

Another concern is the pseudo-anonymisation of the data. For a company like Google, it won’t be particularly hard to match the data to real people.
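
Why pseudonymisation is so fragile is easy to demonstrate: if the “anonymised” records keep quasi-identifiers such as birth date, postcode and sex, they can be joined against any identified dataset that shares those fields. A minimal sketch, with all data invented:

```python
# Linkage-attack sketch: pseudonymised health records re-identified by
# joining on quasi-identifiers. Every record below is invented.

pseudonymised_records = [
    {"pid": "a91f", "birth_date": "1984-03-07", "postcode": "1012",
     "sex": "F", "diagnosis": "depression"},
]

identified_dataset = [  # e.g. profile data an ad platform already holds
    {"name": "J. de Vries", "birth_date": "1984-03-07", "postcode": "1012",
     "sex": "F"},
]

QUASI_IDENTIFIERS = ("birth_date", "postcode", "sex")


def reidentify(pseudo, identified):
    """Yield (name, diagnosis) pairs where the quasi-identifiers match."""
    for record in pseudo:
        key = tuple(record[q] for q in QUASI_IDENTIFIERS)
        for person in identified:
            if tuple(person[q] for q in QUASI_IDENTIFIERS) == key:
                yield person["name"], record["diagnosis"]


print(list(reidentify(pseudonymised_records, identified_dataset)))
# [('J. de Vries', 'depression')]
```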

Source: Kamerleden eisen duidelijkheid over opslag patiëntgegevens bij Google – Emerce

D.E.A. Secretly Collected Bulk Records of Money-Counter Purchases

WASHINGTON — The Drug Enforcement Administration secretly collected data in bulk about Americans’ purchases of money-counting machines — and took steps to hide the effort from defendants and courts — before quietly shuttering the program in 2013 amid the uproar over the disclosures by the National Security Agency contractor Edward Snowden, an inspector general report found.

Seeking leads about who might be a drug trafficker, the D.E.A. started in 2008 to issue blanket administrative subpoenas to vendors to learn who was buying money counters. The subpoenas involved no court oversight and were not pegged to any particular investigation. The agency collected tens of thousands of records showing the names and addresses of people who bought the devices.

The public version of the report, which portrayed the program as legally questionable, blacked out the device whose purchase the D.E.A. had tracked. But in a slip-up, the report contained one uncensored reference in a section about how D.E.A. policy called for withholding from official case files the fact that agents first learned the names of suspects from its database of its money-counter purchases.

[…]

The report cited field offices’ complaints that the program had wasted time with a high volume of low-quality leads, resulting in agents scrutinizing people “without any connection to illicit activity.” But the D.E.A. eventually refined its analysis to produce fewer but higher-quality leads, and the D.E.A. said it had led to arrests and seizures of drugs, guns, cars and illicit cash.

The idea for the nationwide program originated in a D.E.A. operation in Chicago, when a subpoena for three months of purchase records from a local store led to two arrests and “significant seizures of drugs and related proceeds,” it said.

But Sarah St. Vincent, a Human Rights Watch researcher who flagged the slip-up on Twitter, argued that it was an abuse to suck Americans’ names into a database that would be analyzed to identify criminal suspects, based solely upon their purchase of a lawful product.

[…]

In the spring of 2013, the report said, the D.E.A. submitted its database to a joint operations hub where law enforcement agencies working together on organized crime and drug enforcement could mine it. But F.B.I. agents questioned whether the data had been lawfully acquired, and the bureau banned its officials from gaining access to it.

The F.B.I. agents “explained that running all of these names, which had been collected without foundation, through a massive government database and producing comprehensive intelligence products on any ‘hits,’ which included detailed information on family members and pictures, ‘didn’t sit right,’” the report said.

Source: D.E.A. Secretly Collected Bulk Records of Money-Counter Purchases

Tesla Model 3 records data without your knowledge, sends it to Tesla, and keeps a whole load of other data too.

Many other cars download and store data from users, particularly information from paired cellphones, such as contact information. The practice is widespread enough that the US Federal Trade Commission has issued advisories to drivers warning them about pairing devices to rental cars, and urging them to learn how to wipe their cars’ systems clean before returning a rental or selling a car they owned.

But the researchers’ findings highlight how Tesla is full of contradictions on privacy and cybersecurity. On one hand, Tesla holds car-generated data closely, and has fought customers in court to refrain from giving up vehicle data. Owners must purchase $995 cables and download a software kit from Tesla to get limited information out of their cars via “event data recorders” there, should they need this for legal, insurance or other reasons.

At the same time, crashed Teslas that are sent to salvage can yield unencrypted and personally revealing data to anyone who takes possession of the car’s computer and knows how to extract it.

[…]

In general, cars have become rolling computers that slurp up personal data from users’ mobile devices to enable “infotainment” features or services. Additional data generated by the car enables and trains advanced driver-assistance systems. Major auto-makers that compete with Tesla’s Autopilot include GM’s Cadillac Super Cruise, Nissan Infiniti’s ProPilot Assist and Volvo’s Pilot Assist system.

But GreenTheOnly and Theo noted that in Teslas, dashboard cameras and selfie cameras can record while the car is parked, even in your garage, and there is no way for an owner to know when they may be doing so. The cameras enable desirable features like “sentry mode.” They also enable wipers to “see” raindrops and switch on automatically, for example.

GreenTheOnly explained, “Tesla is not super transparent about what and when they are recording, and storing on internal systems. You can opt out of all data collection. But then you lose [over-the-air software updates] and a bunch of other functionality. So, understandably, nobody does that, and I also begrudgingly accepted it.”

Theo and GreenTheOnly also said Model 3, Model S and Model X vehicles try to upload autopilot and other data to Tesla in the event of a crash. The cars have the capability to upload other data, but the researchers don’t know if and under what circumstances they attempt to do so.

[…]

The company is one of a handful of large corporations to openly court cybersecurity professionals to its networks, urging those who find flaws in Tesla systems to report them in an orderly process — one that gives the company time to fix the problem before it is disclosed. Tesla routinely pays out five-figure sums to individuals who find and successfully report these flaws.

[…]

However, according to two former Tesla service employees who requested anonymity, when owners try to analyze or modify their own vehicles’ systems, the company may flag them as hackers, alerting Tesla to their skills. Tesla then ensures that these flagged people are not among the first to get new software updates.

Source: Tesla Model 3 keeps data like crash videos, location, phone contacts

Researchers Create Fake Profiles on 24 Health Apps and Learn Most Are Sharing Your Data

Researchers in Canada, the U.S., and Australia teamed up for the study, published Wednesday in the BMJ. They tested 24 popular health-related apps used by patients and doctors in those three countries on an Android smartphone (the Google Pixel 1). Among the more popular apps were medical reference site Medscape, symptom-checker Ada, and the drug guide Drugs.com. Some of the apps reminded users when to take their prescriptions, while others provided information on drugs or symptoms of illness.

They then created four fake profiles that used each of the apps as intended. To establish a baseline of where network traffic related to user data was relayed during the use of the app, they used each app 14 times with the same profile information. Then, prior to the 15th use, they made a subtle change to this user information. On this final use, they looked for differences in network traffic, which would indicate that user data obtained by the app was being shared with third parties, and where exactly it was going to.
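
In outline, the detection logic is a simple diff. The sketch below models captured traffic as (destination, payload) pairs; in practice the interception would be done with a proxy, and all hosts and values here are invented.

```python
# Sketch of the study's detection idea: establish baseline traffic with
# a fixed profile, change one field, then see which destinations carry
# the changed value. All hosts and values are invented.

baseline_runs = [  # runs 1-14, same profile each time
    [("analytics.example.net", "dob=1980-01-01"),
     ("cloud.example.com", "sync=ok")],
] * 14

final_run = [  # run 15, after changing the date of birth
    ("analytics.example.net", "dob=1979-12-31"),
    ("cloud.example.com", "sync=ok"),
]

CHANGED_VALUE = "1979-12-31"

# Destinations seen during normal use establish where traffic goes...
baseline_hosts = {host for run in baseline_runs for host, _ in run}

# ...and any destination whose payload contains the changed value must
# be receiving user data from the app.
leaking_hosts = {host for host, payload in final_run
                 if CHANGED_VALUE in payload}

print(sorted(baseline_hosts & leaking_hosts))  # ['analytics.example.net']
```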

Overall, they found 79 percent of apps, including the three listed above, shared at least some user data outside of the app itself. While some of the unique entities that had access to the data used it to improve the app’s functions, like maintaining the cloud where data could be uploaded by users or handling error reports, others were likely using it to create tailored advertisements for other companies. When looking at these third parties, the researchers also found that many marketed their ability to bundle together user data and share it with fourth-party companies even further removed from the health industry, such as credit reporting agencies. And while this data is said to be made completely anonymous and de-identified, the authors found that certain companies were given enough data to easily piece together the identity of users if they wanted to.

Source: Researchers Create Fake Profiles on 24 Health Apps and Learn Most Are Sharing Your Data

Hundreds of South Korean motel guests were secretly filmed and live-streamed online

About 1,600 people have been secretly filmed in motel rooms in South Korea, with the footage live-streamed online for paying customers to watch, police said Wednesday.

Two men have been arrested and another pair investigated in connection with the scandal, which involved 42 rooms in 30 accommodations in 10 cities around the country. Police said there was no indication the businesses were complicit in the scheme.

In South Korea, small hotels of the type involved in this case are generally referred to as motels or inns.

Cameras were hidden inside digital TV boxes, wall sockets and hairdryer holders and the footage was streamed online, the Cyber Investigation Department at the National Police Agency said in a statement.

[Image: cameras found by police hidden inside a hotel wall outlet (left) and hair dryer stand (right)]

The site had more than 4,000 members, 97 of whom paid a $44.95 monthly fee to access extra features, such as the ability to replay certain live streams. Between November 2018 and this month, police said, the service brought in upward of $6,000.

“There was a similar case in the past where illegal cameras were (secretly installed) and were consistently and secretly watched, but this is the first time the police caught where videos were broadcast live on the internet,” police said.

South Korea has a serious problem with spy cameras and illicit filming. In 2017, more than 6,400 cases of illegal filming were reported to police, compared to around 2,400 in 2012.

Source: Hundreds of South Korean motel guests were secretly filmed and live-streamed online – CNN

When 2FA means sweet FA privacy: Facebook admits it slurps mobe numbers for more than just profile security

This time, the Silicon Valley giant has been caught red-handed using people’s cellphone numbers, provided exclusively for two-factor authentication, for targeted advertising and search – after it previously insinuated it wouldn’t do that.

Folks handing over their mobile numbers to protect their accounts from takeovers and hijackings thought the contact detail would be used for just that: security. Instead, Facebook is using the numbers to link netizens to other people, and target them with online ads.

For example, if someone you know – let’s call her Sarah – has given her number to Facebook for two-factor authentication purposes, and you allow the Facebook app to access your smartphone’s contacts book, and it sees Sarah’s number in there, it will offer to connect you two up, even though Sarah thought her number was being used for security only, and not for search. This is not a particularly healthy scenario, for instance, if you and Sarah are no longer, or never were, friends in real life, and yet Facebook wants to wire you up anyway.
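
A hypothetical sketch of that matching behavior: the number Sarah handed over for security ends up in the same lookup table the contact-upload feature queries. Names and numbers below are invented.

```python
# Sketch: a 2FA phone number, stored "for security", indexed in the
# same table used for people-you-may-know matching. All data invented.

accounts_by_phone = {}


def register_2fa_number(account: str, number: str) -> None:
    # Nothing here marks the number as security-only.
    accounts_by_phone[number] = account


def suggest_connections(uploaded_contacts: list) -> list:
    """Match an uploaded address book against known numbers."""
    return [accounts_by_phone[n] for n in uploaded_contacts
            if n in accounts_by_phone]


register_2fa_number("sarah", "+15551234567")

# You upload your contacts book; Sarah is suggested even though she
# never shared her number for anything but account security.
print(suggest_connections(["+15551234567", "+15550000000"]))  # ['sarah']
```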

Following online outcry over the weekend, a Facebook spokesperson told us today: “We appreciate the feedback we’ve received about these settings, and will take it into account.”

Source: When 2FA means sweet FA privacy: Facebook admits it slurps mobe numbers for more than just profile security • The Register

Anyone surprised much?

Apple’s Shazam for iOS Sheds 3rd Party SDKs. Keeps pumping your data through on Android.

Shazam, the song identification app Apple bought for $400M, recently released an update to its iOS app that got rid of all 3rd party SDKs the app was using except for one.

The SDKs that were removed include ad networks, analytics trackers, and even open-source utilities. Why, you ask? Because all of those SDKs leak usage data to 3rd parties one way or another, something Apple really really dislikes.

Here are all the SDKs that were uninstalled in the latest update:

AdMob
Bolts
DoubleClick
FB Ads
FB Analytics
FB Login
InMobi
IAS
Moat
MoPub

Right now, the app only has one 3rd party SDK installed, and that’s HockeyApp, Microsoft’s version of TestFlight. It’s unclear why it’s still there, but we don’t expect it to stick around for too long.

Looking across Apple’s entire app portfolio it’s very uncommon to see 3rd party SDKs at all. Exceptions exist. One such example is Apple’s Support app which has the Adobe Analytics SDK installed.

Things Are Different on Android

Since Shazam is also available for Android, we expected to see the same behavior: a mass uninstall of 3rd party SDKs. At first glance that seems to be the case, but not exactly.

Here are all the SDKs that were uninstalled in the last update:

AdColony
AdMob
Amazon Ads
Ads
FB Analytics
Gimbal
Google IMA
MoPub

Here are all the SDKs that are still installed in Shazam for Android:

Bolts
FB Analytics
Butter Knife
Crashlytics
Fabric
Firebase
Google Maps
OKHttp
Otto

On Android, Apple seems to be ok with leaking usage data to both Facebook through the Facebook Login SDK and Google through Fabric and Google Maps, indicating Apple hasn’t built out its internal set of tools for Android.

It’s also worth noting that HockeyApp was removed from Shazam from Android more than a year ago.


Source: Shazam for iOS Sheds 3rd Party SDKs | App store Insights from Appfigures

Facebook receives personal health data from apps, even if you don’t have a FB account

Facebook receives highly personal information from apps that track your health and help you find a new home, testing by The Wall Street Journal found. Facebook can receive this data from certain apps even if the user does not have a Facebook account, according to the Journal.

Facebook has already been in hot water concerning issues of consent and user data.

Most recently, a TechCrunch report revealed in January that Facebook paid users as young as teenagers to install an app that would allow the company to collect all phone and web activity. Following the report, Apple revoked some developer privileges from Facebook, saying Facebook violated its terms by distributing the app through a program meant only for employees to test apps prior to release.

The new report said Facebook is able to receive data from a variety of apps. Of more than 70 popular apps tested by the Journal, they found at least 11 apps that sent potentially sensitive information to Facebook.

The apps included the period-tracking app Flo Period & Ovulation Tracker, which reportedly shared with Facebook when users were having their periods or when they indicated they were trying to get pregnant. Real estate app Realtor reportedly sent Facebook the listing information viewed by users, and the top heart-rate app on Apple’s iOS, Instant Heart Rate: HR Monitor, sent users’ heart rates to the company, the Journal’s testing found.

The apps reportedly send the data using Facebook’s software development kit, or SDK, which helps developers integrate certain features into their apps. Facebook’s SDK includes an analytics service that helps app developers understand their users’ trends. The Journal said developers who sent sensitive information to Facebook used “custom app events” to send data like ovulation times and homes that users had marked as favorites on some apps.
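
For illustration only, here is the “custom app events” pattern in miniature. This is not Facebook’s actual SDK API; every name below is invented, and the event queue stands in for the network call a real SDK would make. The point is that the analytics call is generic, so the sensitivity depends entirely on what the developer chooses to send.

```python
# Hypothetical sketch of a generic "custom app event" analytics call.
# Not Facebook's real API; all names invented.

EVENT_QUEUE = []  # stand-in for the SDK's transmit buffer


def log_custom_event(app_id: str, name: str, params: dict) -> None:
    """Generic analytics call: the developer decides what goes in it."""
    EVENT_QUEUE.append({"app_id": app_id, "event": name, "params": params})


# The sensitivity lives entirely in what the developer chooses to send:
log_custom_event("period-tracker", "CYCLE_LOGGED", {"phase": "ovulation"})
log_custom_event("realtor-app", "FAVORITED_HOME", {"listing_id": 12345})

print(EVENT_QUEUE)  # everything here leaves the device on the next flush
```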

A Facebook spokesperson told CNBC, “Sharing information across apps on your iPhone or Android device is how mobile advertising works and is industry standard practice. The issue is how apps use information for online advertising. We require app developers to be clear with their users about the information they are sharing with us, and we prohibit app developers from sending us sensitive data. We also take steps to detect and remove data that should not be shared with us.”

Source: Facebook receives personal health data from apps: WSJ

As China frightens Europe’s data protectors, America does too with Cloud Act

A foreign power with possible unbridled access to Europe’s data is causing alarm in the region. No, it’s not China. It’s the United States.

As the US pushes ahead with the “Cloud Act” it enacted about a year ago, Europe is scrambling to curb its reach. Under the act, all US cloud service providers, from Microsoft and IBM to Amazon – when ordered – have to provide American authorities with data stored on their servers, regardless of where it’s housed. With those providers controlling much of the cloud market in Europe, the act could potentially give the US the right to access information on large swaths of the region’s people and companies.

The US says the act is aimed at aiding investigations. But some people are drawing parallels between the legislation and the National Intelligence Law that China put in place in 2017 requiring all its organisations and citizens to assist authorities with access to information. The Chinese law, which the US says is a tool for espionage, is cited by President Donald Trump’s administration as a reason to avoid doing business with companies like Huawei Technologies.

“I don’t mean to compare US and Chinese laws, because obviously they aren’t the same, but what we see is that on both sides, Chinese and American, there is clearly a push to have extraterritorial access to data,” said Ms Laure de la Raudiere, a French lawmaker who co-heads a parliamentary cyber-security and sovereignty group.

“This must be a wake up call for Europe to accelerate its own, sovereign offer in the data sector.”

Source: As Huawei frightens Europe’s data protectors, America does too, Europe News & Top Stories – The Straits Times

Some American Airlines In-Flight TVs Have Cameras In Them watching you, just like Singapore Airlines and Google Nest

A viral photo showing a camera in a Singapore Airlines in-flight TV display recently caused an uproar online. The image was retweeted hundreds of times, with many people expressing concern about the privacy implications. As it turns out, some seat-back screens in American Airlines’ premium economy class have them, too.

Sri Ray was aboard an American Airlines Boeing 777-200 flight to Tokyo in September 2018 when he noticed something strange: a camera embedded in the seat back of his entertainment system.

[Image: courtesy of Sri Ray]

“I am what one would call security paranoid,” said Ray, who was formerly a site reliability engineer at BuzzFeed. “I observe tech in day-to-day life and wonder how a malicious person can use it in bad ways. When I looked at the shiny new screens in the new premium economy cabin of AA, I noticed a small circle at the bottom. Upon closer inspection, it was definitely a camera.”

The cameras are also visible in this June 2017 review of the airline’s premium economy offering by the Points Guy, as well as this YouTube video by Business Traveller magazine.

American Airlines spokesperson Ross Feinstein confirmed to BuzzFeed News that cameras are present on some of the airlines’ in-flight entertainment systems, but said “they have never been activated, and American is not considering using them.” Feinstein added, “Cameras are a standard feature on many in-flight entertainment systems used by multiple airlines. Manufacturers of those systems have included cameras for possible future uses, such as hand gestures to control in-flight entertainment.”

Source: Some American Airlines In-Flight TVs Have Cameras In Them

Why does Singapore Airlines have an embedded camera looking at you in the inflight entertainment system? Just like the Google Nest spy, they say it’s, ummm, all ok, nothing to see here.

Given Singapore’s reputation for being an unabashed surveillance state, a passenger on a Singapore Airlines (SIA) flight could be forgiven for being a little paranoid.

Vitaly Kamluk, an information security expert and a high-ranking executive of cybersecurity company Kaspersky Lab, went on Twitter with concerns about an embedded camera in SIA’s inflight entertainment systems. He tagged SIA in his post on Sunday, asking the airline to clarify how the camera is being used.

SIA quickly responded, telling Kamluk that the cameras have been disabled, with no plans to use them in the future. While not all of their devices sport the camera, SIA said that some of its newer inflight entertainment systems come with cameras embedded in the hardware. Left unexplained was how the camera-equipped entertainment systems had come to be purchased in the first place.

In another tweet, SIA affirmed that the cameras were already built in by the original equipment manufacturers in newer inflight entertainment systems.

Kamluk recommended that it’s best to disable the cameras physically — with stickers, for example — to provide better peace of mind.

Could cameras built into inflight entertainment systems actually be used as a feature though? It’s possible, according to Panasonic Avionics. Back in 2017, the inflight entertainment device developer mentioned that it was studying how eye tracking can be used for a better passenger experience. Cameras can be used for identity recognition on planes, which in turn, would allow for in-flight biometric payment (much like Face ID on Apple devices) and personalized services.

It’s a long shot, but SIA could actually utilize such systems in the future. The camera’s already there, anyway.

Source: Cybersecurity expert questions existence of embedded camera on SIA’s inflight entertainment systems

Many popular iPhone apps secretly record your screen without asking

Many major companies, like Air Canada, Hollister and Expedia, are recording every tap and swipe you make on their iPhone apps. In most cases you won’t even realize it. And they don’t need to ask for permission.

You can assume that most apps are collecting data on you. Some even monetize your data without your knowledge. But TechCrunch has found several popular iPhone apps, from hoteliers, travel sites, airlines, cell phone carriers, banks and financiers, that don’t ask or make it clear — if at all — that they know exactly how you’re using their apps.

Worse, even though these apps are meant to mask certain fields, some inadvertently expose sensitive data.

Apps like Abercrombie & Fitch, Hotels.com and Singapore Airlines also use Glassbox, a customer experience analytics firm, one of a handful of companies that allow developers to embed “session replay” technology into their apps. These session replays let app developers record the screen and play the recordings back to see how users interacted with the app, to figure out if something didn’t work or if there was an error. Every tap, button push and keyboard entry is recorded — effectively screenshotted — and sent back to the app developers.

Or, as Glassbox said in a recent tweet: “Imagine if your website or mobile app could see exactly what your customers do in real time, and why they did it?”

Source: Many popular iPhone apps secretly record your screen without asking | TechCrunch

The “Do Not Track” Setting Doesn’t Stop You from Being Tracked – by Google, Facebook and Twitter, among many more

Most browsers have a “Do Not Track” (DNT) setting that sends “a special signal to websites, analytics companies, ad networks, plug in providers, and other web services you encounter while browsing, to stop tracking your activity.” Sounds good, right? Sadly, it’s not effective. That’s because this Do Not Track setting is only a voluntary signal sent to websites, which websites don’t have to respect 😧.
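
Concretely, DNT is a single request header, and honoring it is a choice each server makes for itself. A minimal sketch of a server that volunteers to comply (the DNT header name is real; everything else here is illustrative):

```python
# The browser sends "DNT: 1" with every request when the setting is on.
# Nothing forces a server to act on it; this demo server chooses to.
from http.server import BaseHTTPRequestHandler, HTTPServer


class DemoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        honoring = self.headers.get("DNT") == "1"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        if not honoring:
            # A server that ignores DNT would set this unconditionally.
            self.send_header("Set-Cookie",
                             "track_id=abc123; Max-Age=31536000")
        self.end_headers()
        self.wfile.write(b"DNT honored\n" if honoring
                         else b"Tracking cookie set\n")


HTTPServer(("", 8080), DemoHandler).serve_forever()
```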

[Screenshot: the Do Not Track setting in the Chrome browser]

Nevertheless, a hefty portion of users across many browsers use the Do Not Track setting. While DNT is disabled by default in most major web browsers, in a survey we conducted of 503 U.S. adults in Nov 2018, 23.1% (±3.7) of respondents have consciously enabled the DNT setting on their desktop browsers. (Note: Apple is in the process of removing the DNT setting from Safari.)

[Graph: survey responses on the current status of the Do Not Track setting in respondents’ primary desktop browsers]

We also looked at DNT usage on DuckDuckGo (across desktop and mobile browsers), finding that 24.4% of DuckDuckGo requests during a one day period came from browsers with the Do Not Track setting enabled. This is within the margin of error from the survey, thus lending more credibility to its results.
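
The margin-of-error arithmetic checks out: for a proportion p from n respondents, the 95% margin is 1.96 * sqrt(p * (1 - p) / n).

```python
# Re-deriving the survey's +/- 3.7 point margin for 23.1% of 503 respondents.
from math import sqrt

n, p = 503, 0.231
margin = 1.96 * sqrt(p * (1 - p) / n)    # 95% confidence margin
print(f"+/- {margin * 100:.1f} points")  # +/- 3.7 points

# The DuckDuckGo traffic figure, 24.4%, lies inside 23.1 +/- 3.7, which
# is the sense in which it is "within the margin of error" of the survey.
```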

[…]

It can be alarming to realize that Do Not Track is about as foolproof as putting a sign on your front lawn that says “Please, don’t look into my house” while all of your blinds remain open. In fact, most major tech companies, including Google, Facebook, and Twitter, do not respect the Do Not Track setting when you visit and use their sites – a fact of which 77.3% (±3.6) of U.S. adults overall weren’t aware.

There is simply a huge discrepancy between the name of the setting and what it actually does. It’s inherently misleading. When educated about the true function and limitation of the DNT setting, 75.5% (±3.8) of U.S. adults say it’s “important” or “very important” that these companies “respect the Do Not Track signal when it is enabled.” So, in shocking news, when people say they don’t want to be tracked, they really don’t want to be tracked.


As a matter of fact, 71.9% (±3.9) of U.S. adults “somewhat favor” or “strongly favor” a federal regulation requiring companies to respect the Do Not Track signal.


We agree and hope that governments will focus this year on efforts to enforce adherence to the Do Not Track setting when users enable it. As we’ve seen here and in our private browsing research, many people seek the most readily available (though often, unfortunately, ineffective) methods to protect their privacy.

Source: The “Do Not Track” Setting Doesn’t Stop You from Being Tracked