Millions of Americans’ medical images and data are available on the Internet

Medical images and health data belonging to millions of Americans, including X-rays, MRIs, and CT scans, are sitting unprotected on the Internet and available to anyone with basic computer expertise.

The records cover more than 5 million patients in the United States and millions more around the world. In some cases, a snoop could use free software programs—or just a typical Web browser—to view the images and private data, an investigation by ProPublica and the German broadcaster Bayerischer Rundfunk found.

We identified 187 servers—computers that are used to store and retrieve medical data—in the US that were unprotected by passwords or basic security precautions. The computer systems, from Florida to California, are used in doctors’ offices, medical-imaging centers, and mobile X-ray services.

The insecure servers we uncovered add to a growing list of medical records systems that have been compromised in recent years. Unlike some of the more infamous recent security breaches, in which hackers circumvented a company’s cyber defenses, these records were often stored on servers that lacked the security precautions that long ago became standard for businesses and government agencies.

“It’s not even hacking. It’s walking into an open door,” said Jackie Singh, a cybersecurity researcher and chief executive of the consulting firm Spyglass Security. Some medical providers started locking down their systems after we told them what we had found.

Our review found that the extent of the exposure varies, depending on the health provider and what software they use. For instance, the server of US company MobilexUSA displayed the names of more than a million patients—all retrievable by typing in a simple data query. Their dates of birth, doctors, and procedures were also included.

[…]

All told, medical data from more than 16 million scans worldwide was available online, including names, birthdates, and, in some cases, Social Security numbers.

[…]

The issue should not be a surprise to medical providers. For years, one expert has tried to warn about the casual handling of personal health data. Oleg Pianykh, the director of medical analytics at Massachusetts General Hospital’s radiology department, said medical imaging software has traditionally been written with the assumption that patients’ data would be secured by the customers’ computer security systems.

But as those networks at hospitals and medical centers became more complex and connected to the Internet, the responsibility for security shifted to network administrators who assumed safeguards were in place. “Suddenly, medical security has become a do-it-yourself project,” Pianykh wrote in a 2016 research paper he published in a medical journal.

ProPublica’s investigation built upon findings from Greenbone Networks, a security firm based in Germany that identified problems in at least 52 countries on every inhabited continent. Greenbone’s Dirk Schrader first shared his research with Bayerischer Rundfunk after discovering some patients’ health records were at risk. The German journalists then approached ProPublica to explore the extent of the exposure in the United States.

Source: Millions of Americans’ medical images and data are available on the Internet | Ars Technica

Period Tracker Apps: Maya And MIA Fem Are Telling Facebook When You Last Had Sex, and More

Period tracker apps are sending deeply personal information about women’s health and sexual practices to Facebook, new research has found.

UK-based advocacy group Privacy International, sharing its findings exclusively with BuzzFeed News, discovered that period-tracking apps including MIA Fem and Maya sent details of women’s contraception use, the timing of their monthly periods, symptoms like swelling and cramps, and more directly to Facebook.

Women use such apps for a range of purposes, from tracking their period cycles to maximizing their chances of conceiving a child. On the Google Play store, Maya, owned by India-based Plackal Tech, has more than 5 million downloads. Period Tracker MIA Fem: Ovulation Calculator, owned by Cyprus-based Mobapp Development Limited, says it has more than 2 million users around the world. Both are also available on Apple’s App Store.

The data sharing with Facebook happens via Facebook’s Software Development Kit (SDK), which helps app developers incorporate particular features and collect user data so Facebook can show them targeted ads, among other functions. When a user puts personal information into an app, that information may also be sent by the SDK to Facebook.

Asked about the report, Facebook told BuzzFeed News it had gotten in touch with the apps Privacy International identified to discuss possible violations of its terms of service, including sending prohibited types of sensitive information.

Maya informs Facebook whenever the app is opened, and starts sharing some data even before the user agrees to the app’s privacy policy, Privacy International found.

“When Maya asks you to enter how you feel and offers suggestions of symptoms you might have — suggestions like blood pressure, swelling or acne — one would hope this data would be treated with extra care,” the report said. “But no, that information is shared with Facebook.”

The app shares the data users enter about their contraception use and their moods, the analysis found. It also asks users to record when they’ve had sex and what kind of contraception they used, and includes a diary-like section for users to write their own notes. That information, too, is shared with Facebook.
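The pre-consent sharing described above is a structural consequence of how analytics SDKs are typically embedded: initialization happens at app launch, before any consent screen is shown. A deliberately simplified Python sketch (hypothetical names, not Facebook's actual SDK):

```python
# Illustrative sketch only, NOT Facebook's actual SDK. It shows the structural
# problem Privacy International describes: an analytics SDK that auto-initializes
# at app launch fires its first event before the app has even displayed a
# privacy policy, let alone obtained consent.

class AnalyticsSDK:
    def __init__(self, auto_log_app_open=True):
        self.sent_events = []          # stand-in for network calls home
        if auto_log_app_open:
            self._send({"event": "app_open"})   # fires immediately on init

    def _send(self, event):
        self.sent_events.append(event)

    def log(self, event_name, **params):
        self._send({"event": event_name, **params})


class PeriodTrackerApp:
    def __init__(self):
        self.sdk = AnalyticsSDK()      # SDK is initialized at app start...
        self.consent_given = False     # ...the consent prompt comes afterwards

    def accept_privacy_policy(self):
        self.consent_given = True


app = PeriodTrackerApp()
# Data has already left the device before any consent dialog appeared:
assert app.sdk.sent_events == [{"event": "app_open"}]
assert app.consent_given is False
```

The fix is equally structural: defer SDK initialization (or at least its first network call) until after consent is recorded.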

Source: Period Tracker Apps: Maya And MIA Fem Are Sharing Deeply Personal Data With Facebook

UK Government Plans to Collect ‘Targeted and Personalized’ Data on Internet Users to Prepare For Brexit: Report

The UK government is planning to collect “targeted and personalized information” on anyone who visits the government’s various websites, according to a new report from BuzzFeed News. Politicians in the UK are being told that it’s a “top priority” and that the information is needed to prepare for Brexit, the UK’s departure from the European Union, which is still scheduled for October 31.

BuzzFeed obtained two top-secret government directives from August directed at members of Prime Minister Boris Johnson’s cabinet about an “accelerated implementation plan” for tracking “digital identity.” A UK government spokesperson in contact with BuzzFeed denied that the government was collecting personal data and insisted that “all activity is fully compliant with our legal and ethical obligations.”

The government’s main web portal, Gov.UK, is used for a wide range of online services from health care to passports to taxes, and also includes services that would typically be handled by individual states in the U.S., including renewing your driver’s license. Thus, any attempt to politicize the kind of information collected is highly controversial in the UK.

From BuzzFeed:

At present, usage of GOV.UK is tracked by individual departments, not collected centrally. According to the documents seen by BuzzFeed News, the Cabinet Office’s digital unit, the government digital service (GDS), will add an additional layer of tracking that “will enable GDS to have data for the entire journey of a user as they land on GOV.UK from a Google advert or an email link, read content on GOV.UK, click on a link taking them from GOV.UK to a service and then onwards through the service journey to completion”.

One of the memos was from Prime Minister Johnson himself telling staff that the information would “support key decision making” for Brexit, though it’s not clear what that means in practice.

British citizens are rightly skeptical of any massive digital data collection programs, especially as we learn more about how Big Data was used to manipulate the British people before the public referendum in 2016 on whether or not to leave the EU. The campaigners who wanted people to vote “Leave” used the disgraced political data firm Cambridge Analytica, best known in the U.S. for misusing Facebook data in an effort to get Donald Trump elected.

The UK is currently in the middle of a self-imposed crisis as the deadline for Brexit is less than two months away. And while no one knows for sure what Boris Johnson and his government will do with a new centralized data collection plan, you can see why people would think that’s a bad idea.

But much like President Trump’s attitude in the U.S., it may not matter what the people think—Johnson suspended parliament last night, sending politicians home until October 14, and he’s going to do whatever he feels he needs to do to make Brexit happen.

Source: UK Government Plans to Collect ‘Targeted and Personalized’ Data on Internet Users to Prepare For Brexit: Report

Facebook: Remember how we promised we weren’t tracking your location? Psych! Can’t believe you fell for that

For years the antisocial media giant has claimed it doesn’t track your location, insisting to suspicious reporters and privacy advocates that its addicts “have full control over their data,” and that it does not gather or sell that data unless those users agree to it.

No one believed it. So, when it and Google were hit with lawsuits trying to get to the bottom of the issue, Facebook followed its well-worn path to avoiding scrutiny: it changed its settings and pushed out carefully worded explanations that sounded an awful lot like it wasn’t tracking you anymore. But it was. Because location data is valuable.

Then, late on Monday, Facebook emitted a blog post in which it kindly offered to help users “understand updates” to their “device’s location settings.”

It begins: “Facebook is better with location. It powers features like check-ins and makes planning events easier. It helps improve ads and keep you and the Facebook community safe. Features like Find Wi-Fi and Nearby Friends use precise location even when you’re not using the app to make sure that alerts and tools are accurate and personalized for you.”

You may have missed the critical part amid the glowing testimony so we’ll repeat it: “… use precise location even when you’re not using the app…”

Huh, fancy that. It sounds an awful lot like tracking. After all, why would you want Facebook to know your precise location at all times, even when you’re not using its app? And didn’t Facebook promise it wasn’t doing that?

Timing

Well, yes it did, and it was being economical with the truth. But perhaps the bigger question is: why now? Why has Facebook decided to come clean all of a sudden? Is it because of the newly announced antitrust and privacy investigations into tech giants? Well, yes, in a roundabout way.

Surprisingly, in a moment of almost honesty which must have felt quite strange for Facebook’s execs, the web giant actually explains why it has stopped pretending it doesn’t track users: because soon it won’t be able to keep up the pretense.

“Android and iOS have released new versions of their operating systems, which include updates to how you can view and manage your location,” the blog post reveals.

That’s right, under pressure from lawmakers and users, both Google and Apple have added new privacy features to their upcoming mobile operating systems – Android and iOS – that will make it impossible for Facebook to hide its tracking activity.

Source: Facebook: Remember how we promised we weren’t tracking your location? Psych! Can’t believe you fell for that • The Register

The Windows 10 Privacy Settings You Should Check Right Now

If you’re at all concerned about the privacy of your data, you don’t want to leave the default settings in place on your devices—and that includes anything that runs Windows 10.

Microsoft’s operating system comes with a variety of controls and options you can modify to lock down the use of your data, from the information you share with Microsoft to the access that individual apps have to your location, camera, and microphone. Check these privacy-related settings as soon as you’ve got your Windows 10 computer set up—or now, in case you’re a longtime user who hasn’t gotten around to it yet.

Source: The Windows 10 Privacy Settings You Should Check Right Now | WIRED

Cops did hand over photos for King’s Cross facial-recog CCTV to 3rd parties after all – a property developer, between 2016 and 2018

London cops have admitted they gave photos of people to a property developer to use in a facial-recognition system in the heart of the UK capital.

Back in July, Siân Berry, co-leader of the Green Party of England and Wales, asked London Mayor Sadiq Khan whether the Met Police had collaborated with any retailers or other private companies in the operation of facial-recognition systems. A month later, Khan replied that the police force had not worked with any organisations on face-scanning tech in the capital beyond its own experiments.

However, that turned out to be incorrect. On Wednesday this week, the mayor revealed the cops had in actual fact handed over snaps of people to the private landlord for most of the busy King’s Cross area – which, it emerged last month, had set up facial-recognition cameras to snoop on thousands of Brits going about their day.

“The MPS [Metropolitan Police Service] has just now brought it to my attention that the original information they provided … was incorrect and they have in fact shared images related to facial recognition with King’s Cross Central Limited Partnership,” Khan said in an update, adding that this handover of photos ended sometime in 2018.

Source: Oops, wait, yeah, we did hand over photos for King’s Cross facial-recog CCTV, cops admit • The Register

Google has secret webpages that feed your personal data to advertisers, report to EU regulator says

New evidence submitted for an investigation into Google’s collection of personal data in the European Union reportedly accuses the search giant of stealthily sending users’ personal data to advertisers. The company allegedly relays this information to advertisers via hidden webpages, allowing it to circumvent EU privacy regulations.

The evidence was submitted to Ireland’s Data Protection Commission, the main watchdog over the company in the European Union, by Johnny Ryan, chief policy officer for privacy-focused browser maker Brave, according to a Financial Times report Wednesday. Ryan said he discovered that Google used a tracker containing web browsing information, location, and other data, and sent it to ad companies via webpages that “showed no content.” This could allow companies buying ads to match a user’s Google profile and web activity to profiles from other companies, which is against Google’s own ad-buying rules, the FT reported.

In response, Google said Wednesday it doesn’t serve “personalized ads or send bid requests to bidders without user consent.”

The process laid out by Ryan could potentially be “cookie matching” or “cookie syncing,” an ad industry practice of matching ads across multiple sites based on a user’s browsing history. A Google developer page on cookie matching explains the process and the privacy principles the search engine follows, such as not allowing the info to be harvested by multiple companies.
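A toy model of how cookie matching works, using hypothetical names and IDs rather than any real company's implementation: each ad company can only read its own cookie for a given browser, so a sync request that carries one party's user ID to a URL on the other party's domain lets the receiver build a translation table between the two ID spaces.

```python
# Toy model of "cookie matching"/"cookie syncing". Names and IDs are made up;
# this is not Google's (or anyone's) actual implementation, just the general
# ad-industry mechanism described above.

class AdCompany:
    def __init__(self, name):
        self.name = name
        self.match_table = {}   # partner's user ID -> our own user ID

    def sync_endpoint(self, partner_uid, own_uid):
        """Simulates the browser hitting our sync pixel/redirect URL: the
        partner's ID rides in the URL, our ID arrives in our own cookie."""
        self.match_table[partner_uid] = own_uid


exchange_uid = "EXCH-123"    # ID stored in the ad exchange's cookie
dsp_uid = "DSP-987"          # ID stored in the ad buyer's (DSP's) cookie

dsp = AdCompany("dsp")
dsp.sync_endpoint(exchange_uid, dsp_uid)

# The buyer can now translate the exchange's ID into its own user profile:
assert dsp.match_table["EXCH-123"] == "DSP-987"
```

Once the table exists, any bid request carrying the exchange's ID can be joined against the buyer's own browsing-history profile, which is exactly the cross-company matching the complaint describes.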

The Data Protection Commission began an investigation into Google’s practices in May after it received a complaint from Brave that Google was allegedly violating the EU’s General Data Protection Regulation.

Source: Google has secret webpages that feed your personal data to advertisers, report says – CNET

Online Depression Tests Are Collecting and Sharing Your Data

This week, Privacy International published a report—Your mental health for sale—exploring how mental health websites handle user data. The digital rights nonprofit looked at 136 mental health webpages across Google France, Google Germany, and the UK version of Google, according to the report. It chose websites based on advertised links and featured search results for depression-related terms in French, German, and English, and also included the most-visited sites according to the web analytics service SimilarWeb.

According to the report, the organization used the open-source software webxray to identify third-party HTTP requests and cookies, analyzing the websites on July 8th of this year. The analysis found that 97.78 percent of the webpages had a third-party element, which might include cookies, JavaScript, or an image hosted on an outside server. Privacy International also found that the main reason for these third-party elements was advertising.
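The third-party classification that webxray automates can be sketched in a few lines: compare the registered domain of each resource a page loads against the page's own domain. The naive two-label heuristic below stands in for the Public Suffix List that real tools use, and the URLs are made-up examples.

```python
# Minimal sketch of the kind of check webxray automates: classify each resource
# a page loads as first- or third-party by hostname. Real tools consult the
# Public Suffix List; this two-label heuristic ("www.example.org" ->
# "example.org") is for illustration only.
from urllib.parse import urlparse

def base_domain(url):
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

def third_party_resources(page_url, resource_urls):
    page = base_domain(page_url)
    return [u for u in resource_urls if base_domain(u) != page]

loaded = [
    "https://www.depression-example.org/quiz.js",          # the site's own script
    "https://www.facebook.com/tr?id=123",                  # Facebook pixel
    "https://www.google-analytics.com/analytics.js",       # Google tracker
]
found = third_party_resources("https://depression-example.org/test", loaded)
assert found == loaded[1:]   # both trackers flagged, own script is not
```

Run against a real page's network log, the same comparison yields the per-site tracker counts the report aggregates.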

Webxray’s analysis found that 76.04 percent of the webpages had trackers for marketing purposes—80.49 percent of the pages in France, 61.36 percent in Germany, and 86.27 percent in the UK. The third-party trackers included advertising services from Google, Facebook, and Amazon, with Google’s trackers the most prevalent, followed by Facebook’s and Amazon’s.

A deeper dive into a subset of these websites—the first three Google search results for “depression test” in the three countries—also revealed some more specific and egregious ways in which these sites are sharing some of our most intimate data. For instance, Privacy International found that some of the depression-test websites stored users’ responses and shared them, along with their test results, with third parties. It also found that two depression-test websites use Hotjar, an online feedback tool that can record what someone types and clicks on a webpage. It’s not difficult to imagine how such data—responses to a depression test—could be exploited.

Source: Online Depression Tests Are Collecting and Sharing Your Data

Mozilla says Firefox won’t defang ad blockers – unlike Google Chrome, which is steadily stripping away your protections against 3rd-party tracking

On Tuesday, Mozilla said it is not planning to change the ad-and-content blocking capabilities of Firefox to match what Google is doing in Chrome.

Google’s plan to revise its browser extension APIs, known as Manifest v3, follows from the web giant’s recognition that many of its products and services can be abused by unscrupulous developers. The search king refers to its product security and privacy audit as Project Strobe, “a root-and-branch review of third-party developer access to your Google account and Android device data.”

In a Chrome extension, the manifest file (manifest.json) tells the browser which files and capabilities (APIs) will be used. Manifest v3, proposed last year and still being hammered out, will alter and limit the capabilities available to extensions.

Developers who created extensions under Manifest v2 may have to revise their code to keep it working with future versions of Chrome. That may not be practical or possible in all cases, though. The developer of uBlock Origin, Raymond Hill, has said his web-ad-and-content-blocking extension will break under Manifest v3. It’s not yet clear whether uBlock Origin can or will be adapted to the revised API.

The most significant change under Manifest v3 is the deprecation of the blocking webRequest API (except for enterprise users), which lets extensions intercept incoming and outgoing browser data, so that the traffic can be modified, redirected or blocked.

Firefox not following

“In its place, Google has proposed an API called declarativeNetRequest,” explains Caitlin Neiman, community manager for Mozilla Add-ons (extensions), in a blog post.

“This API impacts the capabilities of content blocking extensions by limiting the number of rules, as well as available filters and actions. These limitations negatively impact content blockers because modern content blockers are very sophisticated and employ layers of algorithms to not only detect and block ads, but to hide from the ad networks themselves.”
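For context, the replacement API trades code for configuration: under the blocking webRequest API an extension runs its own logic on every request, while a declarativeNetRequest extension hands the browser a static list of rules up front. An illustrative rule (the domain is a placeholder) looks like this:

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image"]
  }
}
```

Static rules like this cannot express the layered, adaptive heuristics Neiman describes, which is the crux of the content blockers' objection.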

Mozilla offers Firefox developers the Web Extensions API, which is mostly compatible with the Chrome extensions platform and is supported by Chromium-based browsers Brave, Opera and Vivaldi. Those other three browser makers have said they intend to work around Google’s changes to the blocking webRequest API. Now, Mozilla says as much.

“We have no immediate plans to remove blocking webRequest and are working with add-on developers to gain a better understanding of how they use the APIs in question to help determine how to best support them,” said Neiman.

[…]

Google maintains, “We are not preventing the development of ad blockers or stopping users from blocking ads,” even as it acknowledges “these changes will require developers to update the way in which their extensions operate.”

Yet Google’s related web technology proposal two weeks ago to build a “privacy sandbox,” through a series of new technical specifications that would hinder anti-tracking mechanisms, has been dismissed as disingenuous “privacy gaslighting.”

On Friday, EFF staff technologist Bennett Cyphers lambasted the ad biz for its self-serving specs. “Google not only doubled down on its commitment to targeted advertising, but also made the laughable claim that blocking third-party cookies – by far the most common tracking technology on the Web, and Google’s tracking method of choice – will hurt user privacy,” he wrote in a blog post.

Source: Mozilla says Firefox won’t defang ad blockers – unlike a certain ad-giant browser • The Register

PowerShell 7 ups the telemetry but… hey… is that an off switch?

Microsoft emitted a fresh preview of command-line darling PowerShell 7 last night, highlighting some additional slurping – and how to shut it off.

PowerShell 7 Preview 3, which is built on .NET Core 3.0 Preview 8, is the latest step on the way to final release at the end of 2019 and a potential replacement for the venerable PowerShell 5.1.

The first preview dropped back in May and the gang has made solid progress since. This time around, the team has opted to switch on all experimental features of the command-line shell by default in order to get more feedback on whether those features are worth the extra effort to gain “stable” status.

[…]

there are a number of useful features, some targeted squarely at Windows (stripping away reasons to stay with PowerShell 7’s more Windows-focused ancestors) and others that simply make life easy for script fans. The ability to stick a -Parallel parameter to ForEach-Object in order to execute scriptblocks in parallel is a good example, as is a -ThrottleLimit parameter to keep the thread usage under control.

Preview 3 and Telemetry

However, it’s not all good news. Lee, with impressive openness, highlighted the extra telemetry PowerShell would be capturing with this release. Microsoft’s Sydney Smith provided further details and, perhaps more importantly for some users, explained how to turn the slurping off.

New data points being collected include counts of application types such as Cmdlets and Functions, hosted sessions and PowerShell starts by type (API vs Console).

[…]

for the benefit of those who get twitchy about the slurping of data, Smith highlighted the POWERSHELL_TELEMETRY_OPTOUT environment variable, which can be set to true, yes or 1 to stop PowerShell squirting anything back at Redmond’s servers.
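Based on the values reported above, the opt-out check amounts to a string comparison on an environment variable. A minimal Python sketch (PowerShell's own parsing may differ in details such as case handling; this mirrors only the documented values):

```python
# Sketch of the opt-out check as described in the article: PowerShell honours
# the POWERSHELL_TELEMETRY_OPTOUT environment variable when it is set to
# "true", "yes" or "1". This is an illustration, not PowerShell's source.
import os

_OPTOUT_VALUES = {"true", "yes", "1"}

def telemetry_opted_out(env=None):
    env = os.environ if env is None else env
    return env.get("POWERSHELL_TELEMETRY_OPTOUT", "").lower() in _OPTOUT_VALUES

assert telemetry_opted_out({"POWERSHELL_TELEMETRY_OPTOUT": "yes"})
assert not telemetry_opted_out({})   # unset means telemetry stays on
```

In practice you would set the variable in your shell profile (e.g. `export POWERSHELL_TELEMETRY_OPTOUT=1` on Linux/macOS) before launching pwsh.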

Source: Latest sneak peek at PowerShell 7 ups the telemetry but… hey… is that an off switch? • The Register

Microsoft Contractors Listened to Xbox Owners (mainly kids) in Their Homes – since 2013

Contractors working for Microsoft have listened to audio of Xbox users speaking in their homes in order to improve the console’s voice command features, Motherboard has learned. The audio was supposed to be captured following a voice command like “Xbox” or “Hey Cortana,” but contractors said that recordings were sometimes triggered and recorded by mistake.

The news is the latest in a string of revelations that show contractors working on behalf of Microsoft listen to audio captured by several of its products. Motherboard previously reported that human contractors were listening to some Skype calls as well as audio recorded by Cortana, Microsoft’s Siri-like virtual assistant.

“Xbox commands came up first as a bit of an outlier and then became about half of what we did before becoming most of what we did,” one former contractor who worked on behalf of Microsoft told Motherboard. Motherboard granted multiple sources in this story anonymity as they had signed non-disclosure agreements.

The former contractor said they worked on Xbox audio data from 2014 to 2015, before Cortana was implemented into the console in 2016. When it launched in November 2013, the Xbox One had the capability to be controlled via voice commands with the Kinect system.

[…]

The former contractor said most of the voices they heard were of children.

“The Xbox stuff was actually a bit of a welcome respite, honestly. It was frequently the same games. Same DLCs. Same types of commands,” they said. “‘Xbox give me all the games for free’ or ‘Xbox download [newest Minecraft skins pack]’ or whatever.” The former contractor was paid $10 an hour for their work, according to an employment document shared with Motherboard.

“Occasionally I heard ‘Xbox, tell Solas to heal,’ or something similar, which would be a command for Dragon Age: Inquisition,” the former contractor said, referring to hearing audio of in-game commands.

And that listening continued as the Xbox moved from using Kinect for voice commands over to Cortana. A current contractor provided a document that describes how workers should work with different types of Cortana audio, including commands given to control an Xbox.

Source: Microsoft Contractors Listened to Xbox Owners in Their Homes – VICE

All these guys are using this kind of voice data to improve their AI, so there’s nothing particularly sinister in that (although they could probably turn on targeted microphones and listen to YOU if they wanted). But they lied about it, withheld the information from us, didn’t even mention it in their privacy statements, and don’t allow you to opt out – THAT’s the problem.

BTW SONOS is also involved in this…

Google, Apple, Mozilla end Kazakhstan internet spying by blocking root CA

On Wednesday, Google, Apple, and Mozilla said their web browsers will block the Kazakhstan root Certificate Authority (CA) certificate – following reports that ISPs in the country have required customers to install a government-issued certificate that enables online spying.

According to the University of Michigan’s Censored Planet project, the country’s snoops “recently began using a fake root CA to perform a man-in-the-middle (MitM) attack against HTTPS connections to websites including Facebook, Twitter, and Google.”

A root CA certificate can, to put it simply, be abused to intercept and access otherwise protected communication between internet users and websites.
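One practical way for a cautious user to check for a rogue certificate is to compare certificate fingerprints against a trusted advisory. The helper below is a minimal sketch that reproduces the colon-separated SHA-256 form browsers display; the government cert's actual fingerprint is not included here and would have to come from a trustworthy source.

```python
# Compute the SHA-256 fingerprint of a DER-encoded certificate, formatted the
# way browser certificate viewers display it (AA:BB:...). Comparing this value
# against a published known-bad fingerprint is how you would spot an
# interception certificate in your trust store.
import hashlib

def cert_fingerprint(der_bytes):
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# With a real certificate you would pass the DER bytes read from the cert file;
# the input below is a placeholder just to show the output shape.
fp = cert_fingerprint(b"not a real certificate")
assert len(fp) == 95 and fp.count(":") == 31   # 32 hex pairs, 31 separators
```

The same comparison is what browser vendors effectively hard-coded when they blocklisted the Kazakh CA outright.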

The Censored Planet report indicates that researchers first detected data interception on July 17, a practice that has continued intermittently since then (though discussions of Kazakhstan’s possible abuse of root CA certificates date back several years).

The interception does not appear to be widespread – it’s said to affect only 459 (7 per cent) of the country’s 6,736 HTTPS servers. But it does affect 37 domains, largely social media and communications services run by Google, Facebook, and Twitter, among others.

Kazakhstan has a population of 18m and 76 per cent internet penetration, according to advocacy group Freedom House, which rates it 62 on a 100-point scale for lack of internet freedom – 100 being the least free.

Two weeks ago, the government of Kazakhstan said it had discontinued its internet surveillance scheme, initially justified as a way to improve cybersecurity, after lawyers in the country criticized the move.

In notifications to Kazakhstani telecom customers, mobile operators maintained that the government-mandated security certificate represented a lawful demand. Yet, in a statement on August 6, the National Security Committee of the Republic of Kazakhstan said the certificate requirement was just a test, and a successful one at that. And the committee provided instructions for removing the certificate from Android, iOS and Windows devices.

In 2015, Kazakhstan tried to get its root CA certificate into Mozilla’s trusted root store program but was rebuffed, and it then tried to get its citizens to install the cert themselves until thwarted by legal action.

“As far as we know, the installation of the certificate is not legally required in Kazakhstan at this time,” a Mozilla spokesperson said in an email to The Register.

Source: Finally. Thanks so much, nerds. Google, Apple, Mozilla end government* internet spying for good • The Register

Facial recognition ‘epidemic’ across UK private sites in conjunction with the police

Facial recognition is being extensively deployed on privately owned sites across the UK, according to an investigation by civil liberties group Big Brother Watch.

It found an “epidemic” of the controversial technology across major property developers, shopping centres, museums, conference centres and casinos in the UK.

The investigation uncovered live facial recognition in Sheffield’s major shopping centre Meadowhall.

Site owner British Land said: “We do not operate facial recognition at any of our assets. However, over a year ago we conducted a short trial at Meadowhall, in conjunction with the police, and all data was deleted immediately after the trial.”

The investigation also revealed that Liverpool’s World Museum scanned visitors with facial recognition surveillance during its exhibition, “China’s First Emperor and the Terracotta Warriors” in 2018.

The museum’s operator, National Museums Liverpool, said this had been done because there had been a “heightened security risk” at the time. It said it had sought “advice from Merseyside Police and local counter-terrorism advisors” and that use of the technology “was clearly communicated in signage around the venue”.

A spokesperson added: “World Museum did not receive any complaints and it is no longer in use. Any use of similar technology in the future would be in accordance with National Museums Liverpool’s standard operating procedures and with good practice guidance issued by the Information Commissioner’s Office.”

Big Brother Watch said it also found the Millennium Point conference centre in Birmingham was using facial-recognition surveillance “at the request of law enforcement”. In the privacy policy on Millennium Point’s website, it confirms it does “sometimes use facial recognition software at the request of law enforcement authorities”. It has not responded to a request for further comment.

Earlier this week it emerged the privately owned King’s Cross estate in London was using facial recognition, and Canary Wharf is considering following suit.

Information Commissioner Elizabeth Denham has since launched an investigation, saying she remains “deeply concerned about the growing use of facial recognition technology in public spaces, not only by law enforcement agencies but also increasingly by the private sector”.

The Metropolitan Police’s use of the tech was recently slammed as highly inaccurate and “unlawful” in an independent report by researchers from the University of Essex.

Silkie Carlo, director of Big Brother Watch, said: “There is an epidemic of facial recognition in the UK.

“The collusion between police and private companies in building these surveillance nets around popular spaces is deeply disturbing. Facial recognition is the perfect tool of oppression and the widespread use we’ve found indicates we’re facing a privacy emergency.

“We now know that many millions of innocent people will have had their faces scanned with this surveillance without knowing about it, whether by police or by private companies.

“The idea of a British museum secretly scanning the faces of children visiting an exhibition on the first emperor of China is chilling. There is a dark irony that this authoritarian surveillance tool is rarely seen outside of China.”

Carlo urged Parliament to follow in the footsteps of legislators in the US and “ban this authoritarian surveillance from public spaces”. ®

Source: And you thought the cops were bad… Civil rights group warns of facial recog ‘epidemic’ across UK private sites • The Register

Also Facebook Admits Yes, It Was Listening To Your Private Conversations via Messenger

“Much like Apple and Google, we paused human review of audio more than a week ago,” Facebook told Bloomberg on Tuesday.

The social media giant said that users could choose the option to have their voice chats on Facebook’s Messenger app transcribed. The contractors were testing artificial intelligence technology to make sure the messages were properly transcribed from voice to text.

Facebook has previously said that they are reading your messages on its Messenger App. Last year, Facebook CEO Mark Zuckerberg said that when “sensational messages” are found, “We stop those messages from going through.”

Zuckerberg also told Bloomberg last year that while conversations in the Messenger app are considered private, Facebook “scans them and uses the same tools to prevent abuse there that it does on the social network more generally.”

Source: Facebook Admits It Was Also Listening To Your Private Conversations | Digital Trends

Amazon, Google, Apple, Facebook – the five riders of the apocalypse are almost complete!

Talk about unintended consequences: GDPR is an identity thief’s dream ticket to Europeans’ data

In a presentation at the Black Hat security conference in Las Vegas James Pavur, a PhD student at Oxford University who usually specialises in satellite hacking, explained how he was able to game the GDPR system to get all kinds of useful information on his fiancée, including credit card and social security numbers, passwords, and even her mother’s maiden name.

[…]

For social engineering purposes, GDPR has a number of real benefits, Pavur said. Firstly, companies only have a month to reply to requests and face fines of up to 4 per cent of revenues if they don’t comply, so fear of failure and time are strong motivating factors.

In addition, the type of people who handle GDPR requests are usually admin or legal staff, not security people used to social engineering tactics. This makes information gathering much easier.

Over the space of two months Pavur sent out 150 GDPR requests in his fiancée’s name, asking for any and all data on her. In all, 72 per cent of companies replied, and 83 companies said that they had information on her.

Interestingly, five per cent of responses, mainly from large US companies, said that they weren’t liable to GDPR rules. They may be in for a rude shock if they have a meaningful presence in the EU and come before the courts.

Of the responses, 24 per cent simply accepted an email address and phone number as proof of identity and sent over any files they had on his fiancée. A further 16 per cent requested easily forged ID information and 3 per cent took the rather extreme step of simply deleting her accounts.

A lot of companies asked for her account login details as proof of identity, which is actually a pretty good idea, Pavur opined. But when one gaming company tried it, he simply said he’d forgotten the login and they sent it anyway.

The range of information the companies sent in is disturbing. An educational software company sent Pavur his fiancée’s social security number, date of birth and her mother’s maiden name. Another firm sent over 10 digits of her credit card number, the expiration date, card type and her postcode.

A threat intelligence company – not Have I been Pwned – sent over a list of her email addresses and passwords which had already been compromised in attacks. Several of these still worked on some accounts – Pavur said he has now set her up with a password manager to avoid repetition of this.

“An organisation she had never heard of, and never interacted with, had some of the most sensitive data about her,” he said. “GDPR provided a pretext for anyone in the world to collect that information.”

Fixing this issue is going to take action from both legislators and companies, Pavur said.

First off, lawmakers need to set a standard for what is a legitimate form of ID for GDPR requests. One rail company was happy to send out personal information, accepting a used envelope addressed to the fiancée as proof of identity.
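Pavur’s point about legitimate forms of ID lends itself to a small illustration. The sketch below is hypothetical (the proof names and tiering are invented, not drawn from his talk): it shows how a request handler might refuse to release data on weak, easily forged proofs alone.

```python
# Hypothetical sketch of tiered identity checks for a GDPR subject access
# request. Proof names and tiers are invented for illustration.

WEAK_PROOFS = {"email_address", "phone_number", "addressed_envelope"}
STRONG_PROOFS = {"authenticated_session", "government_id_with_liveness_check"}

def triage_request(proofs):
    """Decide how to handle a request based on the strongest proof supplied."""
    if proofs & STRONG_PROOFS:
        return "release_data"            # requester proved control of the account
    if proofs & WEAK_PROOFS:
        return "request_stronger_proof"  # easily forged: do not release yet
    return "reject"
```

Under a scheme like this, the 24 per cent of companies that accepted an email address and phone number alone would instead have asked for stronger proof before sending anything.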

Source: Talk about unintended consequences: GDPR is an identity thief’s dream ticket to Europeans’ data • The Register

Deep links to opt-out of data sharing by 60+ companies – Simple Opt Out

Simple Opt Out is drawing attention to opt-out data sharing and marketing practices that many people aren’t aware of (and most people don’t want), then making it easier to opt out. For example:

  • Target “may share your personal information with other companies which are not part of Target.”
  • Chase may share your “account balances and transaction history … For nonaffiliates to market to you.”
  • Crate & Barrel may share “your customer information [name, postal address and email address, and transactions you conduct on our Website or offline] with other select companies.”

This site makes it easier to opt out of data sharing by 50+ companies (or add a company, or see opt-out tips). Enjoy!

Source: Deep links to opt-out of data sharing by 60+ companies – Simple Opt Out

Skype, Cortana also have humans listening to you. The fine print says it listens to your audio recordings to improve its AI, but it means humans are listening.

If you use Skype’s AI-powered real-time translator, brief recordings of your calls may be passed to human contractors, who are expected to listen in and correct the software’s translations to improve it.

That means 10-second or so snippets of your sweet nothings, mundane details of life, personal information, family arguments, and other stuff discussed on Skype sessions via the translation feature may be eavesdropped on by strangers, who check the translations for accuracy and feed back any changes into the machine-learning system to retrain it.

An acknowledgement that this happens is buried in an FAQ for the translation service, which states:

To help the translation and speech recognition technology learn and grow, sentences and automatic transcripts are analyzed and any corrections are entered into our system, to build more performant services.

Microsoft reckons it is being transparent in the way it processes recordings of people’s Skype conversations. Yet one thing is missing from that above passage: humans. The calls are analyzed by humans. The more technological among you will have assumed living, breathing people are involved at some point in fine-tuning the code and may therefore have to listen to some call samples. However, not everyone will realize strangers are, so to speak, sticking a cup against the wall of rooms to get an idea of what’s said inside, and so it bears reiterating.

Especially seeing as sample recordings of people’s private Skype calls were leaked to Vice, demonstrating that the Windows giant’s security isn’t all that. “The fact that I can even share some of this with you shows how lax things are in terms of protecting user data,” one of the translation service’s contractors told the digital media monolith.

[…]

The translation contractors use a secure and confidential website provided by Microsoft to access samples awaiting playback and analysis, which are, apparently, scrubbed of any information that could identify those recorded and the devices used. For each recording, the human translators are asked to pick from a list of AI-suggested translations that potentially apply to what was overheard, or they can override the list and type in their own.

Also, the same goes for Cortana, Microsoft’s voice-controlled assistant: the human contractors are expected to listen to people’s commands to appraise the code’s ability to understand what was said. The Cortana privacy policy states:

When you use your voice to say something to Cortana or invoke skills, Microsoft uses your voice data to improve Cortana’s understanding of how you speak.

Buried deeper in Microsoft’s all-encompassing fine print is this nugget (with our emphasis):

We also share data with Microsoft-controlled affiliates and subsidiaries; with vendors working on our behalf; when required by law or to respond to legal process; to protect our customers; to protect lives; to maintain the security of our products; and to protect the rights and property of Microsoft and its customers.

[…]

Separately, spokespeople for the US tech titan claimed in an email to El Reg that users’ audio data is only collected and used after they opt in; however, as we’ve said, it’s not clear folks realize they are opting into letting strangers snoop on multi-second stretches of their private calls and Cortana commands. You can also control what voice data Microsoft obtains, and how to delete it, via a privacy dashboard, we were reminded.

In short, Redmond could just say flat out it lets humans pore over your private and sensitive calls and chats, as well as machine-learning software, but it won’t because it knows folks, regulators, and politicians would freak out if they knew the full truth.

This comes as Apple stopped using human contractors to evaluate people’s conversations with Siri, and Google came under fire in Europe for letting workers snoop on its smart speakers and assistant. Basically, as we’ve said, if you’re talking to or via an AI, you’re probably also talking to a person – and perhaps even the police.

Source: Reminder: When a tech giant says it listens to your audio recordings to improve its AI, it means humans are listening. Right, Skype? Cortana? • The Register

Genealogists running into AVG

The index cards used to connect families across provinces in the Benelux, as well as the family trees published online, are now heavily anonymized, which makes it nearly impossible to connect the dots when you don’t know when someone was born. Pictures and documents are being removed willy-nilly from archives, in contravention of the archive laws (or openness laws, which guarantee publication of data after a certain amount of time). Uncertainty about how far the AVG (the Dutch implementation of the GDPR) reaches is leading people to take a very heavy-handed view of it.

Source: Stamboomonderzoekers lopen tegen AVG aan – Emerce

Amazon’s Ring Is Teaching Cops How to Persuade Customers to Hand Over Surveillance Footage

According to a new report, Ring is also instructing cops on how to persuade customers to hand over surveillance footage even when they aren’t responsive to police requests.

According to a police memo obtained by Gizmodo and reported last week, Ring has partnerships with “over 225 law enforcement agencies,” and Ring is actively involved in scripting and approving how police communicate those partnerships. As part of these relationships, Ring helps police obtain surveillance footage both by alerting customers in a given area that footage is needed and by asking them to “share videos” with police. In a disclaimer included with the alerts, Ring claims that sharing the footage “is absolutely your choice.”

But according to documents and emails obtained by Motherboard, Ring also instructed police from two departments in New Jersey on how best to coax the footage out of Ring customers through its “neighborhood watch” app Neighbors in situations where police requests for video were not being met, including by providing police with templates for requests and by encouraging them to post often on the Neighbors app as well as on social media.

In one such email obtained by Motherboard, a Bloomfield Police Department detective requested advice from a Ring associate on how best to obtain videos after his requests were not being answered and further asked whether there was “anything that we can blast out to encourage Ring owners to share the videos when requested.”

In this email correspondence, the Ring associate informed the detective that a significant part of customer “opt in for video requests is based on the interaction law enforcement has with the community,” adding that the detective had done a “great job interacting with [community members] and this will be critical in regard to increased opt in rate.”

“The more users you have the more useful information you can collect,” the associate wrote.

Ring did not immediately return our request for comment about the practice of instructing police how to better obtain surveillance footage from its own customers. However, a spokesperson told Motherboard in a statement that the company “offers Neighbors app trainings and best practices for posting and engaging with app users for all law enforcement agencies utilizing the portal tool,” including by providing “templates and educational materials for police departments to utilize at their discretion.”

In addition to Gizmodo’s recent report that Ring is carefully controlling the messaging and implementation of its products with its police departments, a report from GovTech on Friday claimed that Amazon is also helping police work around denied requests by customers to supply their Ring footage. In such instances, according to the report, police can approach Ring’s parent company Amazon, which can provide the footage that police deem vital to an investigation.

“If we ask within 60 days of the recording and as long as it’s been uploaded to the cloud, then Ring can take it out of the cloud and send it to us legally so that we can use it as part of our investigation,” Tony Botti, public information officer for the Fresno County Sheriff’s Office, told GovTech. When contacted by Gizmodo, however, a Ring spokesperson denied this.

Source: Amazon’s Ring Is Teaching Cops How to Persuade Customers to Hand Over Surveillance Footage

Must. Surveill. The. People.

Cops Are Giving Amazon’s Ring Your Real-Time 911 Caller Data, with location info

Amazon-owned home security company Ring is pursuing contracts with police departments that would grant it direct access to real-time emergency dispatch data, Gizmodo has learned.

The California-based company is seeking police departments’ permission to tap into the computer-aided dispatch (CAD) feeds used to automate and improve decisions made by emergency dispatch personnel and cut down on police response times. Ring has requested access to the data streams so it can curate “crime news” posts for its “neighborhood watch” app, Neighbors.

[…]

An internal police email dated April 2019, obtained by Gizmodo last week via a records request, stated that more than 225 police departments have entered into partnerships with Ring. (The company has declined to confirm that, or provide the actual number.) Doing so grants the departments access to a Neighbors “law enforcement portal” through which police can request access to videos captured by Ring doorbell cameras.

Ring says it does not provide the personal information of its customers to the authorities without consent. To wit, the company has positioned itself as an intermediary through which police request access to citizen-captured surveillance footage. When police make a request, they don’t know who receives it, Ring says, until a user chooses to share their video. Users are also prompted with the option to review their footage before turning it over.

[…]

Through its police partnerships, Ring has requested access to CAD, which includes information provided voluntarily by 911 callers, among other types of data collected automatically. CAD data is typically composed of details such as names, phone numbers, addresses, medical conditions, and potentially other types of personally identifiable information, including, in some instances, GPS coordinates.

In an email Thursday, Ring confirmed it does receive location information, including precise addresses from CAD data, which it does not publish to its app. It denied receiving other forms of personal information.

Ring CAD materials provided to police.

According to some internal documents, police CAD data is received by Ring’s “Neighbors News team” and is then reformatted before being posted on Neighbors in the form of an “alert” to users in the vicinity of the alleged incident.

[…]

Earlier this year, when the Seattle Police Department sought access to CAD software, it triggered a requirement for a privacy impact report under a city ordinance concerning the acquisition of any “surveillance technologies.”

According to the definition adopted by the city, a technology has surveillance capability if it can be used “to collect, capture, transmit, or record data that could be used to surveil, regardless of whether the data is obscured, de-identified, or anonymized before or after collection and regardless of whether technology might be used to obscure or prevent the capturing of certain views or types of information.”

Some CAD systems, such as those marketed by Central Square Technologies (formerly known as TriTech), are used to locate cellular callers by sending text messages that force the return of a phone-location service tracking report. CAD systems also pull in data automatically from phone companies, including ALI information—Automatic Location Identification—which is displayed to dispatch personnel whenever a 911 call is placed. CAD uses these details, along with manually entered information provided by callers, to make fast, initial decisions about which police units and first responders should respond to which calls.

According to Ring’s materials, the direct address, or latitude and longitude, of 911 callers is among the information the Neighbors app requires police to provide, along with the time of the incident, and the category and description of the alleged crime.

Ring said that while it uses CAD data to generate its “News Alerts,” sensitive details, such as the direct address of an incident or the number of police units responding, are never included.
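Ring’s claim that sensitive details are stripped before an alert is published can be pictured with a short sketch. The field names below are invented for illustration and do not reflect Ring’s actual CAD schema:

```python
# Illustrative only: redact identifying CAD fields before publishing an alert.
# Field names are invented and do not reflect Ring's real data model.

SENSITIVE_FIELDS = {"caller_name", "phone_number", "address", "latitude",
                    "longitude", "responding_units", "medical_conditions"}

def to_public_alert(cad_record):
    """Keep only coarse incident details suitable for a public feed."""
    return {k: v for k, v in cad_record.items() if k not in SENSITIVE_FIELDS}

record = {"category": "burglary", "time": "14:02", "address": "12 Main St",
          "caller_name": "J. Doe", "responding_units": 3}
alert = to_public_alert(record)   # only "category" and "time" survive
```

Of course, redaction at publication time does nothing about the precise addresses Ring itself receives upstream from the CAD feed.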

Source: Cops Are Giving Amazon’s Ring Your Real-Time 911 Caller Data

Oddly enough, no mention is made of voice recordings. Considering Amazon is building a huge database of voices and people through Alexa, cross-referencing the two should be trivial and would allow Amazon to surveil the population even more closely.

UK made illegal copies and mismanaged Schengen travelers database, gave it away to unauthorised 3rd parties, both businesses and countries

Authorities in the United Kingdom have made unauthorized copies of data stored inside an EU database for tracking undocumented migrants, missing people, stolen cars, and suspected criminals.

Named the Schengen Information System (SIS), this is an EU-run database that stores information such as names, personal details, photographs, fingerprints, and arrest warrants for 500,000 non-EU citizens denied entry into Europe, over 100,000 missing people, and over 36,000 criminal suspects.

The database was created for the sole purpose of helping EU countries manage access to the passport-free Schengen travel zone.

The UK was granted access to this database in 2015, even though it is not an official member of the Schengen zone.

A 2018 report revealed violations on the UK’s side

In May 2018, reporters from EU Observer obtained a secret EU report that highlighted years of violations in managing the SIS database by UK authorities.

According to the report, UK officials made copies of this database and stored them at airports and ports in unsafe conditions. Furthermore, by making copies, the UK was always working with outdated versions of the database.

This meant UK officials wouldn’t know in time if a person was removed from SIS, resulting in unnecessary detainments, or if a person was added to the database, allowing criminals to move through the UK and into the Schengen travel zone.

Furthermore, they also mismanaged and misused this data by providing unsanctioned access to this highly-sensitive and secret information to third-party contractors, including US companies (IBM, ATOS, CGI, and others).

The report expressed concerns that by doing so, the UK indirectly allowed contractors to copy this data as well, or allowed US officials to request the database from a contractor under the US Patriot Act.

Source: UK made illegal copies and mismanaged Schengen travelers database | ZDNet

It’s official: Deploying Facebook’s ‘Like’ button on your website makes you a joint data slurper, puts you in GDPR danger

Organisations that deploy Facebook’s ubiquitous “Like” button on their websites risk falling foul of the General Data Protection Regulation following a landmark ruling by the European Court of Justice.

The EU’s highest court has decided that website owners can be held liable for data collection when using the so-called “social sharing” widgets.

The ruling (PDF) states that employing such widgets would make the organisation a joint data controller, along with Facebook – and judging by its recent record, you don’t want to be anywhere near Zuckerberg’s antisocial network when privacy regulators come a-calling.

‘Purposes of data processing’

According to the court, website owners “must provide, at the time of their collection, certain information to those visitors such as, for example, its identity and the purposes of the [data] processing”.

By extension, the ECJ’s decision also applies to services like Twitter and LinkedIn.

Facebook’s “Like” is far from an innocent expression of affection for a brand or a message: its primary purpose is to track individuals across websites, and permit data collection even when they are not explicitly using any of Facebook’s products.
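One practical consequence of the ruling is the now common “two-click” pattern: don’t load the widget (and so start the data flow to Facebook) until the visitor consents. A minimal server-side sketch, with invented markup and function names:

```python
# Hypothetical sketch of consent-gated embedding. Markup strings are
# illustrative; a real site would also load Facebook's SDK only after consent.

FB_WIDGET = '<div class="fb-like" data-href="{url}"></div>'
PLAIN_LINK = '<a href="https://www.facebook.com/sharer/sharer.php?u={url}">Share</a>'

def share_button_html(page_url, has_consented):
    """Emit the tracking widget only once the visitor has opted in."""
    template = FB_WIDGET if has_consented else PLAIN_LINK
    return template.format(url=page_url)
```

Before consent, no Facebook JavaScript runs and no visitor data is transmitted, so no collection occurs for which the site owner would be a joint controller.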

[…]

On Monday, the ECJ ruled that Fashion ID could be considered a joint data controller “in respect of the collection and transmission to Facebook of the personal data of visitors to its website”.

The court added that it was not, in principle, “a controller in respect of the subsequent processing of those data carried out by Facebook alone”.

‘Consent’

“Thus, with regard to the case in which the data subject has given his or her consent, the Court holds that the operator of a website such as Fashion ID must obtain that prior consent (solely) in respect of operations for which it is the (joint) controller, namely the collection and transmission of the data,” the ECJ said.

The concept of “data controller” – the organisation responsible for deciding how the information collected online will be used – is a central tenet of both the old Data Protection Directive and the GDPR. The controller has more responsibilities than the data processor, who cannot change the purpose or use of the particular dataset. It is the controller, not the processor, who would be held accountable for any GDPR sins.

Source: It’s official: Deploying Facebook’s ‘Like’ button on your website makes you a joint data slurper • The Register

Dutch Ministry of Justice recommends the Dutch government stop using Office 365 and Windows 10

Basically, they don’t like data being shared with third parties that do predictive profiling with it, they don’t like all the telemetry being sent everywhere, and they don’t like MS being able to view and run through content such as text, pictures, and videos.

Source: Ministerie van justitie: Stop met gebruik Office 365 – Webwereld (Dutch)

Facebook’s answer to the encryption debate: install spyware with content filters! (updated: maybe not)

The encryption debate is typically framed around the concept of an impenetrable link connecting two services whose communications the government wishes to monitor. The reality, of course, is that the security of that encryption link is entirely separate from the security of the devices it connects. The ability of encryption to shield a user’s communications rests upon the assumption that the sender and recipient’s devices are themselves secure, with the encrypted channel the only weak point.

After all, if either user’s device is compromised, unbreakable encryption is of little relevance.

This is why surveillance operations typically focus on compromising end devices, bypassing the encryption debate entirely. If a user’s cleartext keystrokes and screen captures can be streamed off their device in real-time, it matters little that they are eventually encrypted for transmission elsewhere.

[…]

Facebook announced earlier this year preliminary results from its efforts to move a global mass surveillance infrastructure directly onto users’ devices where it can bypass the protections of end-to-end encryption.

In Facebook’s vision, the actual end-to-end encryption client itself, such as WhatsApp, will include embedded content-moderation and blacklist-filtering algorithms. These algorithms will be continually updated from a central cloud service, but will run locally on the user’s device, scanning each cleartext message before it is sent and each encrypted message after it is decrypted.

The company even noted that when it detects violations, it will need to quietly stream a copy of the formerly encrypted content back to its central servers for further analysis, even if the user objects, acting as a true wiretapping service.

Facebook’s model entirely bypasses the encryption debate by globalizing the current practice of compromising devices by building those encryption bypasses directly into the communications clients themselves and deploying what amounts to machine-based wiretaps to billions of users at once.
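The scan-then-encrypt flow described above can be reduced to a toy sketch. Everything here is a stand-in (the blacklist, the “cipher”), meant only to show why the encrypted channel never sees the filter:

```python
# Conceptual toy: client-side scanning happens on the cleartext, before
# encryption. The XOR "cipher" is a stand-in, not real cryptography.

BLACKLIST = {"forbidden-phrase"}

def toy_encrypt(text, key=42):
    """Stand-in for real end-to-end encryption (never use in practice)."""
    return bytes(b ^ key for b in text.encode())

def send_message(text):
    flagged = any(term in text.lower() for term in BLACKLIST)
    ciphertext = toy_encrypt(text)      # the wire still carries ciphertext...
    report = text if flagged else None  # ...but flagged cleartext is copied out
    return ciphertext, report
```

An eavesdropper on the wire still sees only ciphertext; the surveillance happens entirely on the endpoint, which is precisely how this model sidesteps the encryption debate.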

Asked the current status of this work and when it might be deployed in the production version of WhatsApp, a company spokesperson declined to comment.

Of course, Facebook’s efforts apply only to its own encryption clients, leaving criminals and terrorists free to turn to other clients, like Signal, or their own bespoke clients whose source code they control.

The problem is that if Facebook’s model succeeds, it will only be a matter of time before device manufacturers and mobile operating system developers embed similar tools directly into devices themselves, making them impossible to escape. Embedding content scanning tools directly into phones would make it possible to scan all apps, including ones like Signal, effectively ending the era of encrypted communications.

Governments would soon use lawful court orders to require companies to build in custom filters of content they are concerned about and automatically notify them of violations, including sending a copy of the offending content.

Rather than grappling with how to defeat encryption, governments will simply be able to harness social media companies to perform their mass surveillance for them, sending them real-time alerts and copies of the decrypted content.

Source: The Encryption Debate Is Over – Dead At The Hands Of Facebook

Update 4/8/19: Bruce Schneier is convinced that this story has been concocted from a single source and that Facebook is not in fact currently planning to do this. I’m inclined to agree.

Source: More on Backdooring (or Not) WhatsApp

Apple Contractors Reportedly Overhear Sensitive Information and Sexy Times Thanks to Siri

First Amazon, then Google, and now Apple have all confirmed that their devices are not only listening to you, but complete strangers may be reviewing the recordings. Thanks to Siri, Apple contractors routinely catch intimate snippets of users’ private lives like drug deals, doctor’s visits, and sexual escapades as part of their quality control duties, the Guardian reported Friday.

As part of its effort to improve the voice assistant, “[a] small portion of Siri requests are analysed to improve Siri and dictation,” Apple told the Guardian. That involves sending these recordings, sans Apple IDs, to its international team of contractors to rate these interactions based on Siri’s response, among other factors. The company further explained that these graded recordings make up less than 1 percent of daily Siri activations and that most only last a few seconds.

That isn’t the case, according to an anonymous Apple contractor the Guardian spoke with. The contractor explained that because these quality control procedures don’t weed out cases where a user has unintentionally triggered Siri, contractors end up overhearing conversations users may not ever have wanted to be recorded in the first place. Not only that, details that could potentially identify a user purportedly accompany the recording so contractors can check whether a request was handled successfully.

“There have been countless instances of recordings featuring private discussions between doctors and patients, business deals, seemingly criminal dealings, sexual encounters and so on. These recordings are accompanied by user data showing location, contact details, and app data,” the whistleblower told the Guardian.

And it’s frighteningly easy to activate Siri by accident. Most anything that sounds remotely like “Hey Siri” is likely to do the trick, as the UK’s then Defence Secretary Gavin Williamson found out last year when the assistant piped up as he spoke to Parliament about Syria. The sound of a zipper may even be enough to activate it, according to the contractor. They said that of Apple’s devices, the Apple Watch and HomePod smart speaker most frequently pick up accidental Siri triggers, and recordings can last as long as 30 seconds.

While Apple told the Guardian the information collected from Siri isn’t connected to other data Apple may have on a user, the contractor told a different story:

“There’s not much vetting of who works there, and the amount of data that we’re free to look through seems quite broad. It wouldn’t be difficult to identify the person that you’re listening to, especially with accidental triggers—addresses, names and so on.”

Staff were told to report these accidental activations as technical problems, the worker told the paper, but there wasn’t guidance on what to do if these recordings captured confidential information.

All this makes Siri’s cutesy responses to users questions seem far less innocent, particularly its answer when you ask if it’s always listening: “I only listen when you’re talking to me.”

Fellow tech giants Amazon and Google have faced similar privacy scandals recently over recordings from their devices. But while these companies also have employees who monitor their respective voice assistants, users can revoke permissions for some uses of these recordings. Apple provides no such option in its products.

[The Guardian]

Source: Apple Contractors Reportedly Overhear Sensitive Information and Sexy Times Thanks to Siri