Dutch Ministry of Justice recommends that the Dutch government stop using Office 365 and Windows 10

Basically, they don’t like data being shared with third parties who do predictive profiling with it, they don’t like all the telemetry being sent everywhere, and they don’t like Microsoft being able to view and run through content such as text, pictures and videos.

Source: Ministerie van justitie: Stop met gebruik Office 365 – Webwereld (Dutch)

Facebook’s answer to the encryption debate: install spyware with content filters! (updated: maybe not)

The encryption debate is typically framed around the concept of an impenetrable link connecting two services whose communications the government wishes to monitor. The reality, of course, is that the security of that encryption link is entirely separate from the security of the devices it connects. The ability of encryption to shield a user’s communications rests upon the assumption that the sender and recipient’s devices are themselves secure, with the encrypted channel the only weak point.

After all, if either user’s device is compromised, unbreakable encryption is of little relevance.

This is why surveillance operations typically focus on compromising end devices, bypassing the encryption debate entirely. If a user’s cleartext keystrokes and screen captures can be streamed off their device in real-time, it matters little that they are eventually encrypted for transmission elsewhere.

[…]

Earlier this year, Facebook announced preliminary results from its efforts to move a global mass surveillance infrastructure directly onto users’ devices, where it can bypass the protections of end-to-end encryption.

In Facebook’s vision, the end-to-end encryption client itself, such as WhatsApp, will include embedded content moderation and blacklist-filtering algorithms. These algorithms will be continually updated from a central cloud service, but will run locally on the user’s device, scanning each cleartext message before it is sent and each encrypted message after it is decrypted.
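To make the architecture concrete, here is a minimal sketch of how such an on-device filter would sit in the send path, just before encryption. Everything here (the blocklist, function names, the reporting hook) is hypothetical illustration, not Facebook’s actual code:

```python
# Hypothetical sketch of client-side content scanning. The filter runs
# locally on the cleartext, so end-to-end encryption is never "broken" --
# yet the provider can still receive flagged content.

BLOCKLIST = {"forbidden-phrase"}  # imagined as periodically refreshed from a central server


def scan_outgoing(cleartext: str) -> bool:
    """Return True if the message matches the locally cached blocklist."""
    return any(term in cleartext.lower() for term in BLOCKLIST)


def send_message(cleartext: str, encrypt, report):
    """Scan, optionally report, then encrypt -- the order is the whole point."""
    if scan_outgoing(cleartext):
        # The flagged cleartext is quietly streamed back to the provider,
        # regardless of the encryption that follows.
        report(cleartext)
    return encrypt(cleartext)
```

The design choice to note: the scan happens on cleartext before `encrypt` is ever called, which is why this model sidesteps the encryption debate entirely.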

The company even noted that when it detects violations it will need to quietly stream a copy of the formerly encrypted content back to its central servers for further analysis, even if the user objects, acting as a true wiretapping service.

Facebook’s model entirely bypasses the encryption debate by globalizing the current practice of compromising devices: it builds the encryption bypasses directly into the communications clients themselves, deploying what amounts to machine-based wiretaps to billions of users at once.

Asked about the current status of this work and when it might be deployed in the production version of WhatsApp, a company spokesperson declined to comment.

Of course, Facebook’s efforts apply only to its own encryption clients, leaving criminals and terrorists to turn to other clients like Signal, or to bespoke clients whose source code they control.

The problem is that if Facebook’s model succeeds, it will only be a matter of time before device manufacturers and mobile operating system developers embed similar tools directly into devices themselves, making them impossible to escape. Embedding content scanning tools directly into phones would make it possible to scan all apps, including ones like Signal, effectively ending the era of encrypted communications.

Governments would soon use lawful court orders to require companies to build in custom filters of content they are concerned about and automatically notify them of violations, including sending a copy of the offending content.

Rather than grappling with how to defeat encryption, governments will simply be able to harness social media companies to perform their mass surveillance for them, sending them real-time alerts and copies of the decrypted content.

Source: The Encryption Debate Is Over – Dead At The Hands Of Facebook

Update 4/8/19: Bruce Schneier is convinced that this story was concocted from a single source and that Facebook is not in fact currently planning to do this. I’m inclined to agree.

Source: More on Backdooring (or Not) WhatsApp

Apple Contractors Reportedly Overhear Sensitive Information and Sexy Times Thanks to Siri

First Amazon, then Google, and now Apple have all confirmed that their devices are not only listening to you, but complete strangers may be reviewing the recordings. Thanks to Siri, Apple contractors routinely catch intimate snippets of users’ private lives like drug deals, doctor’s visits, and sexual escapades as part of their quality control duties, the Guardian reported Friday.

As part of its effort to improve the voice assistant, “[a] small portion of Siri requests are analysed to improve Siri and dictation,” Apple told the Guardian. That involves sending these recordings sans Apple IDs to its international team of contractors to rate these interactions based on Siri’s response, among other factors. The company further explained that these graded recordings make up less than 1 percent of daily Siri activations and that most only last a few seconds.

That isn’t the case, according to an anonymous Apple contractor the Guardian spoke with. The contractor explained that because these quality control procedures don’t weed out cases where a user has unintentionally triggered Siri, contractors end up overhearing conversations users may not ever have wanted to be recorded in the first place. Not only that, details that could potentially identify a user purportedly accompany the recording so contractors can check whether a request was handled successfully.

“There have been countless instances of recordings featuring private discussions between doctors and patients, business deals, seemingly criminal dealings, sexual encounters and so on. These recordings are accompanied by user data showing location, contact details, and app data,” the whistleblower told the Guardian.

And it’s frighteningly easy to activate Siri by accident. Most anything that sounds remotely like “Hey Siri” is likely to do the trick, as the UK’s Defence Secretary Gavin Williamson found out last year when the assistant piped up as he spoke to Parliament about Syria. The sound of a zipper may even be enough to activate it, according to the contractor. They said that of Apple’s devices, the Apple Watch and HomePod smart speaker most frequently pick up accidental Siri triggers, and recordings can last as long as 30 seconds.

While Apple told the Guardian the information collected from Siri isn’t connected to other data Apple may have on a user, the contractor told a different story:

“There’s not much vetting of who works there, and the amount of data that we’re free to look through seems quite broad. It wouldn’t be difficult to identify the person that you’re listening to, especially with accidental triggers—addresses, names and so on.”

Staff were told to report these accidental activations as technical problems, the worker told the paper, but there wasn’t guidance on what to do if these recordings captured confidential information.

All this makes Siri’s cutesy responses to users’ questions seem far less innocent, particularly its answer when you ask if it’s always listening: “I only listen when you’re talking to me.”

Fellow tech giants Amazon and Google have faced similar privacy scandals recently over recordings from their devices. But while these companies also have employees who monitor their respective voice assistants, users can revoke permissions for some uses of these recordings. Apple provides no such option in its products.

[The Guardian]

Source: Apple Contractors Reportedly Overhear Sensitive Information and Sexy Times Thanks to Siri

UK cops want years of data from victims’ phones for no real reason, and it is being misused

A report (PDF), released today by Big Brother Watch and eight other civil rights groups, has argued that complainants are being subjected to “suspicion-less, far-reaching digital interrogations when they report crimes to police”.

It added: “Our research shows that these digital interrogations have been used almost exclusively for complainants of rape and serious sexual offences so far. But since police chiefs formalised this new approach to victims’ data through a national policy in April 2019, they claim they can also be used for victims and witnesses of potentially any crime.”

The policy referred to relates to the Digital Processing Notices instituted by forces earlier this year, which victims of crime are asked to sign, allowing police to download large amounts of data, potentially spanning years, from their phones. You can see what one of the forms looks like here (PDF).

[…]

The form is 9 pages long and states ‘if you refused permission… it may not be possible for the investigation or prosecution to continue’. Someone in a vulnerable position is unlikely to feel that they have any real choice. This does not constitute informed consent either.

Rape cases dropped over cops’ demands for search

The report described how “Kent Police gave the entire contents of a victim’s phone to the alleged perpetrator’s solicitor, which was then handed to the defendant”. It also outlined a situation where a 12-year-old rape survivor’s phone was trawled, despite a confession from the perpetrator. The child’s case was delayed for months while the Crown Prosecution Service “insisted on an extensive digital review of his personal mobile phone data”.

Another case mentioned related to a complainant who reported being attacked by a group of strangers. “Despite being willing to hand over relevant information, police asked for seven years’ worth of phone data, and her case was then dropped after she refused.”

Yet another individual said police had demanded her mobile phone after she was raped by a stranger eight years ago, even after they had identified the attacker using DNA evidence.

Source: UK cops blasted over ‘disproportionate’ slurp of years of data from crime victims’ phones • The Register

Researchers Reveal That Anonymized Data Is Easy To Reverse Engineer

Researchers at Imperial College London published a paper in Nature Communications on Tuesday exploring how inadequate current techniques for anonymizing datasets are. Before a company shares a dataset, it removes identifying information such as names and email addresses, but the researchers were able to game this system.

Using a machine learning model and datasets that included up to 15 identifiable characteristics—such as age, gender, and marital status—the researchers were able to accurately reidentify 99.98 percent of Americans in an anonymized dataset, according to the study. For their analyses, the researchers used 210 different data sets that were gathered from five sources including the U.S. government that featured information on more than 11 million individuals. Specifically, the researchers define their findings as a successful effort to propose and validate “a statistical model to quantify the likelihood for a re-identification attempt to be successful, even if the disclosed dataset is heavily incomplete.”
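A toy illustration of the underlying intuition (this is not the paper’s statistical model, and the records are made up): even a handful of quasi-identifiers like age, gender, ZIP code and marital status can make most records in a “de-identified” dataset unique, at which point a match on those attributes pins down one individual.

```python
# Fraction of records uniquely identified by a given combination of
# attributes -- a crude stand-in for re-identification risk.
from collections import Counter


def uniqueness(records, keys):
    """Return the fraction of records whose attribute combo appears only once."""
    combos = Counter(tuple(r[k] for k in keys) for r in records)
    unique = sum(1 for r in records if combos[tuple(r[k] for k in keys)] == 1)
    return unique / len(records)


records = [
    {"age": 34, "gender": "F", "zip": "20037", "marital": "married"},
    {"age": 34, "gender": "F", "zip": "20037", "marital": "single"},
    {"age": 51, "gender": "M", "zip": "90210", "marital": "married"},
    {"age": 28, "gender": "F", "zip": "60601", "marital": "single"},
]

# Gender alone identifies almost no one, but stacking a few more
# attributes quickly makes every record in this tiny dataset unique.
```

With `("gender",)` only one of the four records is unique; with all four attributes, every record is.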

[…] Even the hypothetical illustrated by the researchers in the study isn’t a distant fiction. In June of this year, a patient at the University of Chicago Medical Center filed a class-action lawsuit against both the private research university and Google for the former sharing his data with the latter without his consent. The medical center allegedly de-identified the dataset, but still gave Google records with the patient’s height, weight, vital signs, information on diseases they have, medical procedures they’ve undergone, medications they are on, and date stamps. The complaint pointed out that, aside from the breach of privacy in sharing intimate data without a patient’s consent, even if the data was in some way anonymized, the tools available to a powerful tech corporation make it pretty easy to reverse engineer that information and identify a patient.

“Companies and governments have downplayed the risk of re-identification by arguing that the datasets they sell are always incomplete,” de Montjoye said in a statement. “Our findings contradict this and demonstrate that an attacker could easily and accurately estimate the likelihood that the record they found belongs to the person they are looking for.”

Source: Researchers Reveal That Anonymized Data Is Easy To Reverse Engineer

Google and Facebook might be tracking your porn history, researchers warn

Being able to access porn on the internet might be convenient, but according to researchers it’s not without its security risks. And they’re not just talking about viruses.

Researchers at Microsoft, Carnegie Mellon University and the University of Pennsylvania analyzed 22,484 porn sites and found that 93% leak user data to a third party. Normally, for extra protection when surfing the web, a user might turn to incognito mode. But, the researchers said, incognito mode only ensures that your browsing history is not stored on your computer.

According to a study released Monday, Google was the No. 1 third-party company. The research found that Google, or one of its subsidiaries like the advertising platform DoubleClick, had trackers on 74% of the pornography sites examined. Facebook had trackers on 10% of the sites.
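The crawler-side methodology behind numbers like these can be approximated with a simple sketch: load a page, record every network request it makes, and flag requests whose host doesn’t belong to the site itself. The code below is an illustrative simplification (the suffix check is naive; real studies use public-suffix lists to determine the registrable domain, and the URLs are invented):

```python
# Naive third-party detection: compare each requested hostname against
# the page's own hostname. Good enough to show the idea, not rigorous.
from urllib.parse import urlparse


def third_party_hosts(page_url, request_urls):
    """Return the set of requested hostnames that don't match the page's domain."""
    site = urlparse(page_url).hostname
    return {h for h in (urlparse(u).hostname for u in request_urls)
            if h and not h.endswith(site)}
```

Running this over a crawl of many sites and tallying which third-party hosts recur is, in spirit, how a tracker like Google Analytics shows up on 74% of the pages examined.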

“In the US, many advertising and video hosting platforms forbid ‘adult’ content. For example, Google’s YouTube is the largest video host in the world, but does not allow pornography,” the researchers wrote. “However, Google has no policies forbidding websites from using their code hosting (Google APIs) or audience measurement tools (Google Analytics). Thus, Google refuses to host porn, but has no limits on observing the porn consumption of users, often without their knowledge.”

Google didn’t immediately respond to requests for comment.

“We don’t want adult websites using our business tools since that type of content is a violation of our Community Standards. When we learn that these types of sites or apps use our tools, we enforce against them,” Facebook spokesperson Joe Osborne said in an email Thursday.

Elena Maris, a Microsoft researcher who worked on the study, told The New York Times the “fact that the mechanism for adult site tracking” is so similar to online retail should be “a huge red flag.”

“This isn’t picking out a sweater and seeing it follow you across the web,” Maris said. “This is so much more specific and deeply personal.”

Source: Google and Facebook might be tracking your porn history, researchers warn – CNET

Permission-greedy apps delayed Android 6 upgrade so they could harvest more user data

Android app developers intentionally delayed updating their applications to work on top of Android 6.0, so they could continue to have access to an older permission-requesting mechanism that granted them easy access to large quantities of user data, research published by the University of Maryland last month has revealed.

The central focus of this research was the release of Android 6.0 (Marshmallow) in October 2015. The main innovation added in Android 6.0 was the ability for users to approve app permissions on a per-permission basis, selecting which permissions they wanted to allow an app to have.

[…]

Google gave app makers three years to update

As the Android ecosystem grew, app developers made a habit of releasing apps that requested a large number of permissions, many of which their apps never used, and which many developers used to collect user data and later resell it to analytics and data-tracking firms.

This changed with the release of Android 6.0; however, fearing a major disruption in its app ecosystem, Google gave developers three years to update their apps to work on the newer OS version.

This meant that despite users running a modern Android OS version — like Android 6, 7, or 8 — apps could declare themselves as legacy apps (by declaring an older Android Software Development Kit [SDK]) and work with the older permission-requesting mechanism that was still allowing them to request blanket permissions.
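The “legacy app” trick hinges on the declared target SDK: runtime (per-permission) prompts only apply once an app targets SDK 23 or above, so an app that keeps targeting an older SDK keeps the old install-time blanket grant. A rough sketch of how one might flag such apps from their manifests (the manifest content below is a made-up example, and this is not the researchers’ tooling):

```python
# Flag apps that target a pre-Android-6.0 SDK and therefore still use the
# old all-or-nothing, install-time permission model.
import xml.etree.ElementTree as ET

ANDROID_NS = "http://schemas.android.com/apk/res/android"
RUNTIME_PERMISSIONS_SDK = 23  # Android 6.0 (Marshmallow)


def uses_legacy_permissions(manifest_xml: str) -> bool:
    """True if the manifest's targetSdkVersion predates runtime permissions."""
    root = ET.fromstring(manifest_xml)
    sdk = root.find("uses-sdk")
    target = int(sdk.get(f"{{{ANDROID_NS}}}targetSdkVersion", "1"))
    return target < RUNTIME_PERMISSIONS_SDK


legacy_manifest = """\
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-sdk android:minSdkVersion="15" android:targetSdkVersion="22"/>
  <uses-permission android:name="android.permission.READ_CONTACTS"/>
</manifest>"""
```

Scanning each month’s app updates with a check like this is, in essence, how one could track how long developers cling to the legacy model.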

Two-year-long experiment

In research published in June, two University of Maryland academics say they conducted tests between April 2016 and March 2018 to see how many apps initially coded to work on older Android SDKs were updated to work on the newer Android 6.0 SDK.

The research duo says they installed 13,599 of the most popular Android apps on test devices. Each month, the research team would update the apps and scan the apps’ code to see if they were updated for the newer Android 6.0 release.

“We find that an app’s likelihood of delaying upgrade to the latest platform version increases with an increase in the ratio of dangerous permissions sought by the apps, indicating that apps prefer to retain control over access to the users’ private information,” said Raveesh K. Mayya and Siva Viswanathan, the two academics behind the research.

[…]

Additional details about this research can be found in a white paper named “Delaying Informed Consent: An Empirical Investigation of Mobile Apps’ Upgrade Decisions” that was presented in June at the 2019 Workshop on the Economics of Information Security in Boston.

Source: Permission-greedy apps delayed Android 6 upgrade so they could harvest more user data | ZDNet

Microsoft Office 365: Banned in German schools over privacy fears

Schools in the central German state of Hesse have been told it’s now illegal to use Microsoft Office 365.

The state’s data-protection commissioner has ruled that using the popular cloud platform’s standard configuration exposes personal information about students and teachers “to possible access by US officials”.

That might sound like just another instance of European concerns about data privacy or worries about the current US administration’s foreign policy.

But in fact the ruling by the Hesse Office for Data Protection and Information Freedom is the result of several years of domestic debate about whether German schools and other state institutions should be using Microsoft software at all.

Besides the details that German users provide when they’re working with the platform, Microsoft Office 365 also transmits telemetry data back to the US.

Last year, investigators in the Netherlands discovered that this data could include anything from standard software diagnostics to user content from inside applications, such as sentences from documents and email subject lines. All of which contravenes the EU’s General Data Protection Regulation, or GDPR, the Dutch said.

Germany’s own Federal Office for Information Security also recently expressed concerns about telemetry data that the Windows operating system sends.

To allay privacy fears in Germany, Microsoft invested millions in a German cloud service, and in 2017 Hesse authorities said local schools could use Office 365. If German data remained in the country, that was fine, Hesse’s data privacy commissioner, Michael Ronellenfitsch, said.

But in August 2018 Microsoft decided to shut down the German service. So once again, data from local Office 365 users would be transmitted across the Atlantic. Several US laws, including 2018’s CLOUD Act and 2015’s USA Freedom Act, give the US government more rights to ask for data from tech companies.

It’s actually simple, Austrian digital-rights advocate Max Schrems, who took a case on data transfers between the EU and US to the highest European court this week, tells ZDNet.

School pupils are usually not able to give consent, he points out. “And if data is sent to Microsoft in the US, it is subject to US mass-surveillance laws. This is illegal under EU law.”

Source: Microsoft Office 365: Banned in German schools over privacy fears | ZDNet

FTC Fines Facebook $5 Billion for Cambridge Analytica – not very much considering earnings – and does not curtail future breaches

The Federal Trade Commission, which has been investigating Facebook in the wake of its massive Cambridge Analytica scandal, has voted to approve levying a massive $5 billion fine against the social media giant, according to reporting in both the Wall Street Journal and the Washington Post. It’s the single largest fine against a tech company by the FTC to date, but its inadequacy to curtail future breaches of this sort already has progressive lawmakers furious.

Facebook was aware of a fine of this magnitude potentially coming down the pike for some time, and braced for a hit between $3 billion and $5 billion. The approval vote—which reportedly split down party lines, with three Republicans voting in favor and two Democrats against—was on the higher end of the expected spectrum.

This is expected to cap the agency’s investigation into the data-mining scandal that compromised up to 87 million Facebook users’ personal data. The data was originally harvested using a seemingly benign quiz app on the platform but was later potentially used by Cambridge Analytica, a political consultancy, for the unrelated purpose of political ad targeting.

[…]

While massive by the standards of tech companies, which too frequently get off with a slap on the wrist for lax data privacy practices that endanger users, the FTC’s fine still represents less than a third of the company’s $15.08 billion earnings from just the first quarter of this year.

Source: FTC Fines Facebook $5 Billion, Democrats Call It a Failure

Palantir’s Top-Secret User Manual for Cops shows how easily they can find scary amounts of information on you and your friends

Through a public record request, Motherboard has obtained a user manual that gives unprecedented insight into Palantir Gotham (Palantir’s other service, Palantir Foundry, is an enterprise data platform), which is used by law enforcement agencies like the Northern California Regional Intelligence Center. The NCRIC serves around 300 communities in northern California and is what is known as a “fusion center,” a Department of Homeland Security intelligence center that aggregates and investigates information from state, local, and federal agencies, as well as some private entities, into large databases that can be searched using software like Palantir.

Fusion centers have become a target of civil liberties groups in part because they collect and aggregate data from so many different public and private entities. The US Department of Justice’s Fusion Center Guidelines list the following as collection targets:

[Chart: fusion center collection targets. Data via US Department of Justice; chart via Electronic Information Privacy Center.]
[Flow chart: how cops can begin to search for records relating to a single person.]

The guide doesn’t just show how Gotham works. It also shows how police are instructed to use the software. The guide appears to have been made by Palantir specifically for California law enforcement, because it includes examples specific to California. We don’t know exactly what information is excluded, or what changes have been made since the document was first created. The first eight pages we received in response to our request are undated, but the remaining twenty-one pages were copyrighted in 2016. (Palantir did not respond to multiple requests for comment.)

The Palantir user guide shows that police can start with almost no information about a person of interest and instantly know extremely intimate details about their lives. The capabilities are staggering, according to the guide:

  • If police have a name that’s associated with a license plate, they can use automatic license plate reader data to find out where that person has been, and when. This can give a complete account of where someone has driven over any time period.
  • With a name, police can also find a person’s email address, phone numbers, current and previous addresses, bank accounts, social security number(s), business relationships, family relationships, and license information like height, weight, and eye color, as long as it’s in the agency’s database.
  • The software can map out a suspect’s family members and business associates, and theoretically find the above information about them, too.

All of this information is aggregated and synthesized in a way that gives law enforcement nearly omniscient knowledge of any suspect they decide to surveil.

[…]

In order for Palantir to work, it has to be fed data. This can mean public records like business registries, birth certificates, and marriage records, or police records like warrants and parole sheets. Palantir would need other data sources to give police access to information like emails and bank account numbers.

“Palantir Law Enforcement supports existing case management systems, evidence management systems, arrest records, warrant data, subpoenaed data, RMS or other crime-reporting data, Computer Aided Dispatch (CAD) data, federal repositories, gang intelligence, suspicious activity reports, Automated License Plate Reader (ALPR) data, and unstructured data such as document repositories and emails,” Palantir’s website says.

Some data sources—like marriage, divorce, birth, and business records—also implicate other people that are associated with a person personally or through family. So when police are investigating a person, they’re not just collecting a dragnet of emails, phone numbers, business relationships, travel histories, etc. about one suspect. They’re also collecting information for people who are associated with this suspect.

Source: Revealed: This Is Palantir’s Top-Secret User Manual for Cops – VICE

Microsoft stirs suspicions by adding telemetry spyware to security-only update

Under Microsoft’s rules, what it calls “Security-only updates” are supposed to include, well, only security updates, not quality fixes or diagnostic tools. Nearly three years ago, Microsoft split its monthly update packages for Windows 7 and Windows 8.1 into two distinct offerings: a monthly rollup of updates and fixes and, for those who want only the patches that are absolutely essential, a Security-only update package.

What was surprising about this month’s Security-only update, formally titled the “July 9, 2019—KB4507456 (Security-only update),” is that it bundled the Compatibility Appraiser, KB2952664, which is designed to identify issues that could prevent a Windows 7 PC from updating to Windows 10.

Among the fierce corps of Windows Update skeptics, the Compatibility Appraiser tool is to be shunned aggressively. The concern is that these components are being used to prepare for another round of forced updates or to spy on individual PCs. The word telemetry appears in at least one file, and for some observers it’s a short step from seemingly innocuous data collection to outright spyware.

My longtime colleague and erstwhile co-author, Woody Leonhard, noted earlier today that Microsoft appeared to be “surreptitiously adding telemetry functionality” to the latest update:

With the July 2019-07 Security Only Quality Update KB4507456, Microsoft has slipped this functionality into a security-only patch without any warning, thus adding the “Compatibility Appraiser” and its scheduled tasks (telemetry) to the update. The package details for KB4507456 say it replaces KB2952664 (among other updates).

Come on Microsoft. This is not a security-only update. How do you justify this sneaky behavior? Where is the transparency now?

I had the same question, so I spent the afternoon poking through update files and security bulletins and trying to get an on-the-record response from Microsoft. I got a terse “no comment” from Redmond.

Source: Microsoft stirs suspicions by adding telemetry files to security-only update | ZDNet

Once installed, a new scheduled task is added to the system under Microsoft > Windows > Application Experience.

Google admits leaked private voice conversations, decides to clamp down on whistleblowers, not improve privacy

Google admitted on Thursday that more than 1,000 sound recordings of customer conversations with the Google Assistant were leaked by some of its partners to a Belgian news site.

[…]

“We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data,” Google product manager of search David Monsees said in a blog post. “Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.”

Monsees said Google’s partners only listen to “around 0.2 percent of all audio snippets” and said they are “not associated with user accounts,” even though VRT was able to figure out who was speaking in some of the clips.

Source: Google admits leaked private voice conversations

NB: the CNBC article states that you can delete old conversations, but we know that’s not the case for transcribed Alexa conversations, and we know that if you delete your shopping emails from Gmail, Google keeps your shopping history.

How American Corporations Are Policing Online Speech Worldwide

In the winter of 2010, a 19-year-old Moroccan man named Kacem Ghazzali logged into his email to find a message from Facebook informing him that a group he had created just a few days prior had been removed from the platform without explanation. The group, entitled “Jeunes pour la séparation entre Religion et Enseignement” (or “Youth for the separation of religion and education”), was an attempt by Ghazzali to organize with other secularist youth in the pious North African kingdom, but it was quickly thwarted. When Ghazzali wrote to Facebook to complain about the censorship, he found his personal profile taken down as well.

Back then, there was no appeals system, but after I wrote about the story, Ghazzali was able to get his accounts back. Others haven’t been so lucky. In the years since, I’ve heard from hundreds of activists, artists, and average folks who found their social media posts or accounts deleted—sometimes for violating some arcane proprietary rule, sometimes at the order of a government or court, other times for no discernible reason at all.

The architects of Silicon Valley’s big social media platforms never imagined they’d someday be the global speech police. And yet, as their market share and global user bases have increased over the years, that’s exactly what they’ve become. Today, the number of people who tweet is nearly the population of the United States. About a quarter of the internet’s total users watch YouTube videos, and nearly one-third of the entire world uses Facebook. Regardless of the intent of their founders, none of these platforms were ever merely a means of connecting people; from their early days, they fulfilled greater needs. They are the newspaper, the marketplace, the television. They are the billboard, the community newsletter, and the town square.

And yet, they are corporations, with their own speech rights and ability to set the rules as they like—rules that more often than not reflect the beliefs, however misguided, of their founders.

Source: How American Corporations Are Policing Online Speech Worldwide

T-Mobile Says Customers Can’t Sue Because It Violates Its ToS

T-Mobile screwed over millions of customers when it collected their geolocation data and sold it to third parties without their consent. Now, two of these customers are trying to pursue a class-action lawsuit against the company for the shady practice, but the telecom giant is using another shady practice to force them to settle their dispute behind closed doors.

On Monday, T-Mobile filed a motion to compel the plaintiffs into arbitration, which would keep the complaint out of a public courtroom. See, when you sign a contract or agree to a company’s terms of service with a forced arbitration clause, you are waiving your right to a trial by jury and oftentimes to pursue a class-action lawsuit at all. Settling a dispute in arbitration means having it heard by a third party behind closed doors. And an arbitration clause is buried in T-Mobile’s fine print.

T-Mobile’s terms of service do give customers the option to opt out of arbitration, but that option is buried within the agreement, which states that they “must either complete the opt out form on this website or call toll-free 1-866-323-4405 and provide the information requested.” They also have only 30 days to do so after activating their service. After that brief window, users are no longer eligible to opt out.

The plaintiffs, Shawnay Ray and Kantice Joyner of Maryland, filed the class-action complaint against T-Mobile in May. Verizon, Sprint, and AT&T were all also hit with lawsuits that same month for selling customer location data. “The telecommunications carriers are the beginning of a dizzying chain of data selling, where data goes from company to company, and ultimately ends up in the hands of literally anybody who is looking,” the complaint against T-Mobile states. The comment is largely referring to a Vice investigation that found that the phone carriers sold real-time location data to middlemen and that this data sometimes eventually ended up with bounty hunters.

Source: T-Mobile Says Customers Can’t Sue Because It Violates Its ToS

Google contractors are secretly listening to your Assistant and Home recordings

Not only is your Google Home device listening to you, a new report suggests there might be a Google contractor who’s listening as well. Even if you didn’t ask your device any questions, it’s still sending what you say to the company, which lets actual people listen to the recordings and collect data from them.

[…]

VRT, with the help of a whistleblower, was able to listen to some of these clips and subsequently heard enough to discern the addresses of several Dutch and Belgian people using Google Home — in spite of the fact some hadn’t even uttered the words “Hey Google,” which are supposed to be the device’s listening trigger.

The person who leaked the recordings was working as a subcontractor to Google, transcribing the audio files for subsequent use in improving its speech recognition. They got in touch with VRT after reading about Amazon Alexa keeping recordings indefinitely.

According to the whistleblower, the recordings presented to them are meant to be carefully annotated, with notes included about the speaker’s presumed identity and age. From the sound of the report, these transcribers have heard just about everything. Personal information? Bedroom activities? Domestic violence? Yes, yes, and yes.

While VRT only listened to recordings from Dutch and Belgian users, the platform the whistleblower showed them had recordings from all over the world – which means there are probably thousands of other contractors listening to Assistant recordings.

The VRT report states that the Google Home Terms of Service don’t mention that recordings might be listened to by other humans.

The report did say the company tries to anonymize the recordings before sending them to contractors, identifying them by numbers rather than user names. But again, VRT was able to pick up enough data from the recordings to find the addresses of the users in question, and even confront some of the users in the recordings – to their great dismay.

Google’s defense to VRT was that the company only transcribes and uses “about 0.2% of all audio clips” to improve its voice recognition technology.

Source: Google contractors are secretly listening to your Assistant recordings

Prenda Law bosses in jail for seeding porn videos to d/l sites and then suing the downloaders

John Steele, one of the former attorneys behind dodgy copyright-demand factory Prenda Law, has been sentenced to 60 months in prison. Yes, the same Prenda Law that seeded file-sharing networks with smut flicks it owned the rights to in order to extract eye-watering copyright infringement settlements from downloaders.

Judge Joan Ericksen, of a US federal district court in Minnesota, on Tuesday this week handed down the five-year term, along with two years of supervised release and a $1,541,527.37 restitution bill, after Steele copped to one count each of conspiracy to commit money laundering and conspiracy to commit mail and wire fraud. While technically given two 60-month sentences, Steele, 48, is being allowed to serve both terms at the same time.

Steele, who has since been disbarred, admitted that from 2011 to 2014 he and co-conspirator Paul Hansmeier, operating as Prenda Law, set up a series of shell companies and studios that either purchased the rights to existing pornographic films or funded the making of original films with the intent of anonymously sticking the dirty movies on the Pirate Bay.

The duo then tracked down people who had downloaded the films and threatened them with copyright infringement suits unless the target agreed to pay out a $3,000 settlement. When the piracy scam started to flounder, the pair took things a step further by accusing targets of hacking their shell companies’ machines.

“To facilitate their phony ‘hacking’ lawsuits, the defendants recruited individuals who had been caught downloading pornography from a file-sharing website, to act as ruse ‘defendants’,” US prosecutors noted.

“These ruse defendants agreed to be sued and permit Steele and Hansmeier to conduct early discovery against their supposed ‘co-conspirators’ in exchange for Steele and Hansmeier waiving their settlement fees.”

Both lawyers would eventually be found out, and charged with fraud and money laundering for their roles in the scheme. By the time the operation was dismantled, it is estimated the duo was able to extort nearly $3m in payouts from randy web-surfers.

While five years behind bars can hardly be considered a slap on the wrist, Steele’s willingness to cooperate with authorities allowed him to win a considerably lighter term than his co-conspirator, 37-year-old Hansmeier, who last month was sentenced to 14 years incarceration for convictions on the same set of charges.

Source: Prenda Law boss John Steele to miss 2020 Olympics… unless they show it in prison • The Register

UK data regulator threatens British Airways with 747-sized fine for massive personal data blurt

The UK Information Commissioner’s Office has warned BA it faces a whopping £183.39m fine following the theft of around half a million customer records from its website and mobile app servers.

The record-breaking fine, levied under the European General Data Protection Regulation (GDPR) and more or less the lower end of the price of one of the 747-400s in BA’s fleet, represents 1.5 per cent of BA’s worldwide revenue in 2017.
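Those two figures let you back-compute the scale involved. A quick sketch (the turnover figure is implied by the report, not taken from BA’s published accounts):

```python
# Back-compute BA's implied 2017 worldwide turnover from the reported
# figures, and compare the fine against the GDPR ceiling of 4% of turnover.
fine_gbp = 183.39e6   # proposed ICO fine
fine_share = 0.015    # reported as 1.5% of 2017 worldwide revenue

implied_turnover = fine_gbp / fine_share        # ≈ £12.2bn
gdpr_ceiling = 0.04 * implied_turnover          # GDPR Art. 83(5) cap

print(f"Implied 2017 turnover: £{implied_turnover / 1e9:.1f}bn")
print(f"GDPR maximum fine:     £{gdpr_ceiling / 1e6:.0f}m")
```

So even this record-breaking fine sits well below the 4 per cent ceiling the regulation allows for.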

Information Commissioner Elizabeth Denham said: “People’s personal data is just that – personal. When an organisation fails to protect it from loss, damage or theft it is more than an inconvenience. That’s why the law is clear – when you are entrusted with personal data you must look after it. Those that don’t will face scrutiny from my office to check they have taken appropriate steps to protect fundamental privacy rights.”

The breach hit almost 500,000 people. The ICO statement reveals the breach is believed to have started in June 2018; previous statements from BA said it began in late August. The data watchdog described the attack as diverting user traffic from BA’s site to a fraudulent site.

ICO investigators found a variety of information was compromised including log-in details, card numbers, names, addresses and travel information.

Sophisticated card skimming group Magecart, which also hit Ticketmaster, was blamed for the data slurp. The group is believed to have exploited third party scripts, possibly modified JavaScript, running on BA’s site to gain access to the airline’s payment system.

Such scripts are often used to support marketing and data tracking functions or running external ads.

The Reg revealed that BA parent company IAG was in talks with staff to outsource cyber security to IBM just before the hack was carried out.

Source: UK data regulator threatens British Airways with 747-sized fine for massive personal data blurt • The Register

King’s College London breached GDPR by sharing list of activist students with cops – wait, it has a list of activist students?!

King’s College London breached the General Data Protection Regulation when it shared a list of student activists with the police and barred the activists from campus during a visit by the Queen, an independent report (PDF) has found.

Some 13 students and one member of staff were unable to access any of the campus sites as their cards had been deactivated to prevent access to the Bush House site, which was opened by the Queen on March 19.

In a foreword to the report, Professor Evelyn Welch, acting principal at KCL, said the university accepts the findings and recommendations in full and is putting in place a plan to address all the issues raised:

“One of the findings of the report is that we have breached our own policies regarding protection of personal information and the GDPR regulations. Following the event, we informed the Information Commissioner’s Office that we were undertaking this review. We have now shared the report with them and await their response.

“The report also contains recommendations about our security arrangements, which we will follow as we bring our operations in house and a new Head of Security joins us.”

Welch said that while some have interpreted the actions taken on the day as racial profiling, “this was not the case and I want to reiterate that discrimination on any grounds is unacceptable and is damaging to our community.”

The report’s author, Laura Gibbs, concluded that the security team had “overstepped the boundaries” when it compiled the list of activists and shared it with the Met Police.

She said the barring of individuals “against whom there was neither evidence of criminal activity nor any internal disciplinary findings” from their campus was disproportionate and “against King’s stated values.”

One student was blocked from entering a KCL building for an exam in south London, and was only able to enter when the on-site security staff reinstated the card.

Source: King’s College London breached GDPR by sharing list of activist students with cops • The Register

Internet group brands Mozilla ‘internet villain’ for supporting DNS privacy feature that may also give users in the UK access to porn and make it harder for the great filter there to see where everyone is surfing

An industry group of internet service providers has branded Firefox browser maker Mozilla an “internet villain” for supporting a DNS security standard.

The U.K.’s Internet Services Providers’ Association (ISPA), the trade group for U.K. internet service providers, nominated the browser maker for its proposed effort to roll out the security feature, which they say will allow users to “bypass UK filtering obligations and parental controls, undermining internet safety standards in the UK.”

Mozilla said late last year it was planning to test DNS-over-HTTPS with a small number of users.

Whenever you visit a website — even if it’s HTTPS enabled — the DNS query that converts the web address into an IP address that computers can read is usually unencrypted. DNS-over-HTTPS is implemented at the app level, making Firefox the first browser to use it. Encrypting the DNS query also protects the request against man-in-the-middle attacks, in which attackers hijack the request and point victims to a malicious page instead.
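To make the mechanism concrete, here is a minimal, illustrative sketch of how a DoH client packages a lookup: the ordinary DNS wire-format query is base64url-encoded and carried as a parameter of a normal HTTPS GET request, per RFC 8484. The Cloudflare resolver URL is used purely as an example endpoint; this is a sketch of the encoding, not Firefox’s actual implementation.

```python
import base64
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS wire-format query (RFC 1035); qtype 1 = A record."""
    # ID 0 (recommended for DoH GET so responses are cacheable), RD flag set,
    # one question, no answer/authority/additional records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

def doh_get_url(hostname: str,
                resolver: str = "https://cloudflare-dns.com/dns-query") -> str:
    """Package the query as a DoH GET request: base64url, padding stripped."""
    encoded = base64.urlsafe_b64encode(build_dns_query(hostname))
    return f"{resolver}?dns={encoded.rstrip(b'=').decode('ascii')}"

print(doh_get_url("example.com"))
```

To an on-path observer, the resulting request is indistinguishable from any other HTTPS traffic to the resolver, which is exactly what defeats passive DNS-based filtering.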

DNS-over-HTTPS also improves performance, making DNS queries — and the overall browsing experience — faster.

But the ISPA doesn’t think DNS-over-HTTPS is compatible with the U.K.’s current website blocking regime.

Under U.K. law, websites can be blocked for facilitating the infringement of copyrighted or trademarked material or if they are deemed to contain terrorist material or child abuse imagery. In encrypting DNS queries, it’s claimed that it will make it more difficult for internet providers to filter their subscribers’ internet access.

The ISPA isn’t alone. U.K. spy agency GCHQ and the Internet Watch Foundation, which maintains the U.K.’s internet blocklist, have criticized the move to roll out encrypted DNS features to the browser.

The ISPA’s nomination quickly drew ire from the security community. Amid a backlash on social media, the ISPA doubled down on its position, saying that bringing in DNS-over-HTTPS by default “would be harmful for online safety, cybersecurity and consumer choice,” but that it encourages “further debate.”

One internet provider, Andrews & Arnold, donated £2,940 — around $3,670 — to Mozilla in support of the nonprofit. “The amount was chosen because that is what our fee for ISPA membership would have been, were we a member,” said a tweet from the company.

Mozilla spokesperson Justin O’Kelly told TechCrunch: “We’re surprised and disappointed that an industry association for ISPs decided to misrepresent an improvement to decades old internet infrastructure.”

“Despite claims to the contrary, a more private DNS would not prevent the use of content filtering or parental controls in the UK. DNS-over-HTTPS (DoH) would offer real security benefits to UK citizens. Our goal is to build a more secure internet, and we continue to have a serious, constructive conversation with credible stakeholders in the UK about how to do that,” he said.

“We have no current plans to enable DNS-over-HTTPS by default in the U.K. However, we are currently exploring potential DNS-over-HTTPS partners in Europe to bring this important security feature to other Europeans more broadly,” he added.

Mozilla isn’t the first to roll out DNS-over-HTTPS. Last year Cloudflare released a mobile version of its 1.1.1.1 privacy-focused DNS service that includes DNS-over-HTTPS. Months earlier, Google-owned Jigsaw released its censorship-busting app Intra, which aims to prevent DNS manipulation.

Mozilla has yet to set a date for the full release of DNS-over-HTTPS in Firefox.

Source: Internet group brands Mozilla ‘internet villain’ for supporting DNS privacy feature | TechCrunch

Privacy-first browsers look to take the shine off Google’s Chrome

Before Google, Facebook and Amazon, tech dominance was known by a single name: Microsoft.

And no product was more dominant than Microsoft’s web browser, Internet Explorer. The company’s browser was the gateway to the internet for about 95 percent of users in the early 2000s, which helped land Microsoft at the center of a major government effort to break up the company.

Almost two decades later, Google’s Chrome now reigns as the biggest browser on the block, and the company is facing challenges similar to Microsoft’s from competitors, as well as government scrutiny.

But Google faces a new wrinkle — a growing realization among consumers that their every digital move is tracked.

“I think Cambridge Analytica acted as a catalyst to get people aware that their data could be used in ways they didn’t expect,” said Peter Dolanjski, the product lead for Mozilla’s Firefox web browser, referring to the scandal in which a political consulting firm obtained data on millions of Facebook users and their friends.

[…]

Web browsers, being the primary way the vast majority of people experience the internet, are a crucial choke point in the digital ecosystem. While the browsers are free to users, the companies that operate them can have an outsized impact on how the internet works — especially if they gain a dominant market position. For a company like Google, which makes most of its money from online advertising, that has meant being able to liberally collect user data. For a nonprofit like Mozilla, more users means the chance to convince developers and other tech companies to adopt their privacy-focused standards.

[…]

Chrome, with more than 60 percent market share worldwide, is yet another source of complaints about Google’s power, after its search engine and advertisement businesses. Last year, Chrome changed the system for logging in to the browser, a move that one researcher said could allow Google to collect data much more easily.

Mozilla trails Google in corporate size and influence, but it is pressing other browsers on privacy and playing up its status as a nonprofit. Last month, Firefox changed the initial settings for new users so that third-party tracking “cookies,” such as those used for ad purposes, are blocked — meaning the default is no tracking.

[…]

A technology columnist at The Washington Post wrote in a scathing review last month that he was switching from Chrome to Firefox, calling Google’s product “a lot like surveillance software.” In a week of desktop websurfing, the columnist, Geoffrey Fowler, wrote that he discovered 11,189 requests for tracker cookies that were blocked by Firefox but would have been allowed by Chrome.

[…]

The browser fight has become heated enough to worry the advertising and media industries. Advertisers have become used to filling up websites with sometimes dozens of “cookies” and other forms of online tracking, and they fear a wider backlash against personalized, data-driven ads.

[…]

For now, there are few signs that Google’s browser dominance will end anytime soon, but the tech industry is riddled with examples of companies that appeared to be invincible just before their fall, including with web browsers.

Source: Privacy-first browsers look to take the shine off Google’s Chrome

Google Gmail purchase history can’t be deleted

Google and other tech companies have been under fire recently for a variety of issues, including failing to protect user data, failing to disclose how data is collected and used and failing to police the content posted to their services.

[…]

In May, I wrote up something weird I spotted on Google’s account management page. I noticed that Google uses Gmail to store a list of everything you’ve purchased, if you used Gmail or your Gmail address in any part of the transaction.

If you have a confirmation for a prescription you picked up at a pharmacy that went into your Gmail account, Google logs it. If you have a receipt from Macy’s, Google keeps it. If you bought food for delivery and the receipt went to your Gmail, Google stores that, too.

You get the idea, and you can see your own purchase history by going to Google’s Purchases page.

Google says it does this so you can use Google Assistant to track packages or reorder things, even if that’s not an option for some purchases that aren’t mailed or wouldn’t be reordered, like something you bought at a store.

At the time of my original story, Google said users can delete everything by tapping into a purchase and removing the associated Gmail message. It seemed to work if you did this for each purchase, one by one. But this isn’t easy — for years’ worth of purchases, it would take hours or even days of time.

So, since Google doesn’t let you bulk-delete this purchases list, I decided to delete everything in my Gmail inbox. That meant removing every last message I’ve sent or received since I opened my Gmail account more than a decade ago.

Despite Google’s assurances, it didn’t work.

Like a horror movie villain that just won’t die

On Friday, three weeks after I deleted every Gmail message, I checked my purchases list.

I still see receipts for things I bought years ago. Prescriptions, food deliveries, books I bought on Amazon, music I purchased from iTunes, a subscription to Xbox Live I bought from Microsoft — it’s all there.

[Image: A list of my purchases Google pulled in from Gmail. Todd Haselton | CNBC]

Google continues to show me purchases I’ve made recently, too.

I can’t delete anything and I can’t turn it off.

Source: Google Gmail purchase history can’t be deleted

Top VPNs secretly owned by Chinese firms

Almost a third (30%) of the world’s top virtual private network (VPN) providers are secretly owned by six Chinese companies, according to a study by privacy and security research firm VPNpro.

The study shows that the top 97 VPNs are run by just 23 parent companies, many of which are based in countries with lax privacy laws.

Six of these companies are based in China and collectively offer 29 VPN services, but in many cases, information on the parent company is hidden to consumers.

Researchers at VPNpro have pieced together ownership information through company listings, geolocation data, the CVs of employees and other documentation.

In some instances, ownership of different VPNs is split amongst a number of subsidiaries. For example, Chinese company Innovative Connecting owns three separate businesses that produce VPN apps: Autumn Breeze 2018, Lemon Cove and All Connected. In total, Innovative Connecting produces 10 seemingly unconnected VPN products, the study shows.

Although the ownership of a number of VPN services by one company is not unusual, VPNpro is concerned that so many are based in countries with lax or non-existent privacy laws.

For example, seven of the top VPN services are owned by Gaditek, based in Pakistan. This means the Pakistani government can legally access any data without a warrant and data can also be freely handed over to foreign institutions, according to VPNpro.

The ability to access the data held by VPN providers, the researchers said, could enable governments or other organisations to identify users and their activity online. This potentially puts human rights activists, privacy advocates, investigative journalists and whistleblowers in jeopardy.

This lack of privacy, the study notes, extends to ordinary consumers, who are also coming under greater government surveillance.

“We’re not accusing any of these companies of doing anything underhand. However, we are concerned that so many VPN providers are not fully transparent about who owns them and where they are based,” said Laura Kornelija Inamedinova, research analyst at VPNpro.

Source: Top VPNs secretly owned by Chinese firms

What if All Your Slack Chats Were Leaked?

Slack is one of many Silicon Valley unicorns going public this year, but it’s the only one that has admitted it is at risk for nation-state attacks. In the S-1 forms filed with the Securities and Exchange Commission, Uber, Lyft, Pinterest and Snapchat addressed threats that could lower the price of their stock — including malware, phishing, disgruntled employees and denial-of-service attacks — but only Slack explicitly highlighted “nation-states” as a potential threat.

According to Slack’s S-1 form, the company faces threats from “sophisticated organized crime, nation-state, and nation-state supported actors.” The company acknowledges that its security measures “may not be sufficient to protect Slack and our internal systems and networks against certain attacks,” and correctly assesses that it is “virtually impossible” for the company to completely eliminate the risk of a nation-state attack.

But it is possible for Slack to minimize that risk. Or it would be, if Slack gave all its users the ability to decide which information Slack should keep and which information it should delete.

Right now, Slack stores everything you do on its platform by default — your username and password, every message you’ve sent, every lunch you’ve planned and every confidential decision you’ve made. That data is not end-to-end encrypted, which means Slack can read it, law enforcement can request it, and hackers — including the nation-state actors highlighted in Slack’s S-1 — can break in and steal it.

Slack is widely marketed for and used in business settings, so the company’s servers hold a treasure trove of valuable, proprietary information. Slack’s paying enterprise customers do have a way to mitigate their security risk — they can change their settings to set shorter retention periods and automatically delete old messages — but it’s not just big companies that are at risk.

Slack’s users include community organizers, political organizations, journalists and unions. At the Electronic Frontier Foundation, where I work, we collaborate with activists, reporters and others on their digital privacy and security, and we’ve noticed these users increasingly gravitating toward Slack’s free product.

And that’s what makes the company’s warning to investors particularly alarming: Free customer accounts don’t allow for any changes to data retention. Instead, Slack retains all of your messages but makes only the most recent 10,000 visible to you. Everything beyond that 10,000-message limit remains on Slack’s servers. So while those messages might seem out of sight and out of mind, they are all still indefinitely available to Slack, law enforcement and third-party hackers.

Source: Opinion | What if All Your Slack Chats Were Leaked? – The New York Times

UChicago and Google Sued in Federal Class Action Suit for Patient Data Sharing between 2009 and 2016

A former patient at the University of Chicago Medical Center is suing UChicago, the medical center, and Google, accusing them of violating the privacy rights of patients at UChicago Medicine through the sharing of patient records containing identifiable information.

The class action lawsuit, filed by Matt Dinerstein in the Northern District of Illinois on Wednesday, claims that UChicago violated federal law protecting patient privacy in its partnership with Google to share records of patients from 2009 to 2016. It also claims that Google will be able to use the patient data to develop highly lucrative health-care technologies.

The suit charges that the University breached contracts between UChicago and its patients by allegedly falsely claiming to patients that it would be protecting their medical records. It also charges UChicago with violating an Illinois law that forbids companies from engaging in deceptive practices with clients.

UChicago spokesperson Jeremy Manier said in a statement e-mailed to The Maroon, “The claims in this lawsuit are without merit. The University of Chicago Medical Center has complied with the laws and regulations applicable to patient privacy.”

“The Medical Center entered into a research partnership with Google as part of the Medical Center’s continuing efforts to improve the lives of its patients,” the statement continues. “That research partnership was appropriate and legal and the claims asserted in this case are baseless and a disservice to the Medical Center’s fundamental mission of improving the lives of its patients. The University and the Medical Center will vigorously defend this action in court.”

A Google spokesperson said in a statement e-mailed to The Maroon, “We believe our healthcare research could help save lives in the future, which is why we take privacy seriously and follow all relevant rules and regulations in our handling of health data.”

UChicago announced in 2017 that it would begin sharing electronic medical records with Google in a partnership to develop machine-learning techniques that could improve the quality of health services. At the time, UChicago said that Google would ensure that “patient data is kept private and secure,” and would be “strictly following HIPAA privacy rule.”

HIPAA, the Health Insurance Portability and Accountability Act, is a federal law mandating that shared patient information must be “de-identified”—stripped of any identifying information such as addresses and photos—to protect patients’ privacy.

The complaint accuses UChicago of making insufficient efforts to scrub patient-identifying data before handing over documents.

Though UChicago and Google claim to have de-identified patients, UChicago’s inclusion of timestamps indicating when patients checked in and out of the medical center makes the records identifiable and thereby violates HIPAA, the suit alleges. It cites an article published last year by Google and researchers from collaborating universities that says, “All EHRs [medical records] were de-identified, except that dates of service were maintained in the UCM [UChicago Medicine] dataset.”

Google’s potential capability to “re-identify” patients with its advanced data mining technologies indicates that “these records were not sufficiently anonymized and put the patients’ privacy at grave risk,” the complaint claims. It notes Google’s possession of geolocation information that can “pinpoint and match exactly when certain people entered and exited the University’s hospital.”

UChicago is not the only university to share health records with Google; other universities with similar partnerships include Stanford University and the University of California, San Francisco, according to the article published by Google and collaborating researchers. Wednesday’s lawsuit rests on the fact that the records UChicago shared with Google included timestamps.

The suit also argues that Google’s 2014 acquisition of the British startup DeepMind gave it machine-learning technology robust enough to connect medical records to Google users’ data.

DeepMind and Google obtained health records from Britain’s Royal Free Hospital in 2015. The project was accused by a British watchdog organization of not complying with data protection law, the suit claims.

Source: UChicago and Google Sued in Federal Class Action Suit for Data Sharing