Apple’s privacy schtick is just an act, say folks suing the iGiant: iTunes ‘purchase histories sold’ to highest bidders

Apple has been hit with a class-action complaint in the US accusing the iGiant of playing fast and loose with the privacy of its customers.

The lawsuit [PDF], filed this month in a northern California federal district court, claims the Cupertino music giant gathers data from iTunes – including people’s music purchase history and personal information – then hands that info over to marketers in order to turn a quick buck.

“To supplement its revenues and enhance the formidability of its brand in the eyes of mobile application developers, Apple sells, rents, transmits, and/or otherwise discloses, to various third parties, information reflecting the music that its customers purchase from the iTunes Store application that comes pre-installed on their iPhones,” the filing alleged.

“The data Apple discloses includes the full names and home addresses of its customers, together with the genres and, in some cases, the specific titles of the digitally-recorded music that its customers have purchased via the iTunes Store and then stored in their devices’ Apple Music libraries.”

What’s more, the lawsuit goes on to claim that the data Apple sells is then combined by the marketers with information purchased from other sources to create detailed profiles on individuals that allow for even more targeted advertising.

Additionally, the lawsuit alleges the Music APIs Apple includes in its developer kit can allow third-party devs to harvest similarly detailed logs of user activity for their own use, further violating the privacy of iTunes customers.

The end result, the complaint states, is that Cook and Co are complicit in the illegal harvesting and reselling of personal data, all while pitching iOS and iTunes as bastions of personal privacy and data security.

“Apple’s disclosures of the personal listening information of plaintiffs and the other unnamed Class members were not only unlawful, they were also dangerous because such disclosures allow for the targeting of particularly vulnerable members of society,” the complaint reads.

“For example, any person or entity could rent a list with the names and addresses of all unmarried, college-educated women over the age of 70 with a household income of over $80,000 who purchased country music from Apple via its iTunes Store mobile application. Such a list is available for sale for approximately $136 per thousand customers listed.”

Source: Apple’s privacy schtick is just an act, say folks suing the iGiant: iTunes ‘purchase histories sold’ to highest bidders • The Register

Newly Released Amazon Patent Shows Just How Much Creepier Alexa Can Get

A newly revealed patent application filed by Amazon is raising privacy concerns over an envisaged upgrade to the company’s smart speaker systems. This change would mean that, by default, the devices end up listening to and recording everything you say in their presence.

Alexa, Amazon’s virtual assistant system that runs on the company’s Echo series of smart speakers, works by listening out for a ‘wakeword’ that tells the device to turn on its extended speech recognition systems in order to respond to spoken commands.

[…]

In theory, Alexa-enabled devices will only record what you say directly after the wakeword, which is then uploaded to Amazon, where remote servers use speech recognition to deduce your meaning, then relay commands back to your local speaker.

But one issue in this flow of events, as Amazon’s recently revealed patent application argues, is that anything you say before the wakeword isn’t actually heard.

“A user may not always structure a spoken command in the form of a wakeword followed by a command (e.g. ‘Alexa, play some music’),” the Amazon authors explain in their patent application, which was filed back in January, but only became public last week.

“Instead, a user may include the command before the wakeword (e.g. ‘Play some music, Alexa’) or even insert the wakeword in the middle of a command (e.g. ‘Play some music, Alexa, the Beatles please’). While such phrasings may be natural for a user, current speech processing systems are not configured to handle commands that are not preceded by a wakeword.”

To overcome this barrier, Amazon is proposing an effective workaround: simply record everything the user says all the time, and figure it out later.

Rather than only record what is said after the wakeword is spoken, the system described in the patent application would effectively continuously record all speech, then look for instances of commands issued by a person.
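
A minimal sketch of what that looks like in practice is a rolling pre-wakeword buffer: audio is captured continuously into a fixed-size window, and once the wakeword fires, the seconds that came before it are shipped off along with what follows. This is an illustration of the general technique only; the class, names and parameters below are hypothetical, not Amazon’s code.

    import collections

    class PreRollRecorder:
        """Keep a rolling window of recent audio so speech spoken before
        the wakeword can still be processed once the wakeword is detected."""

        def __init__(self, pre_roll_seconds=10, frames_per_second=50):
            # Ring buffer: the oldest frames fall out automatically once full.
            self.buffer = collections.deque(maxlen=pre_roll_seconds * frames_per_second)

        def on_audio_frame(self, frame):
            # Called for every captured frame, wakeword or not,
            # which is exactly why this amounts to always-on recording.
            self.buffer.append(frame)

        def on_wakeword_detected(self):
            # Hand the buffered pre-wakeword audio to the speech recogniser,
            # so "Play some music, Alexa" can be understood after the fact.
            return list(self.buffer)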

Source: Newly Released Amazon Patent Shows Just How Much Creepier Alexa Can Get

wow – a continuous spy in your home

Germany thinks about resurrecting the Stasi, getting rid of end-to-end chat app encryption and requiring decrypted plain-text.

Government officials in Germany are reportedly mulling a law to force chat app providers to hand over end-to-end encrypted conversations in plain text on demand.

According to Der Spiegel this month, the Euro nation’s Ministry of the Interior wants a new set of rules that would require operators of services like WhatsApp, Signal, Apple iMessage, and Telegram to cough up plain-text records of people’s private enciphered chats to authorities that obtain a court order.

This would expand German law, which right now only allows communications to be gathered from a suspect’s device itself, to also include the companies providing encrypted chat services and software. True and strong end-to-end encrypted conversations can only be decrypted by those participating in the discussion, so the proposed rules would require app makers to deliberately knacker or backdoor their code in order to comply. Those changes would be needed to allow them to collect messages passing through their systems and decrypt them on demand.
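
To see why compliance would require a backdoor rather than just a court order, consider a minimal end-to-end encryption sketch using the PyNaCl library (an illustration of the principle only, not the actual protocol of WhatsApp, Signal, iMessage or Telegram):

    from nacl.public import PrivateKey, Box

    # Each participant generates a key pair on their own device;
    # only the public halves ever leave the phone.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts for Bob with her private key and Bob's public key.
    ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at nine")

    # The service provider only ever relays `ciphertext`. Without Bob's
    # private key it simply has no plain text to hand over on demand.
    assert Box(bob_key, alice_key.public_key).decrypt(ciphertext) == b"meet at nine"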

Up until now, German police have not bothered trying to decrypt the contents of messages in transit, opting instead to simply seize and break into the device itself, where the messages are typically stored in plain text.

The new rules are set to be discussed by the members of the interior ministry in an upcoming June conference, and are likely to face stiff opposition not only on privacy grounds, but also in regards to the technical feasibility of the requirements.

Spokespeople for Facebook-owned WhatsApp, and Threema, makers of encrypted messaging software, were not available to comment.

The rules are the latest in an ongoing global feud between the developers of secure messaging apps and governments. The apps are designed in part to let citizens, journalists, and activists communicate shielded from the prying eyes of oppressive regimes.

The governments, meanwhile, say that the apps also provide a safe haven for criminals and terror groups that want to plan attacks and illegal activities, making it harder for intelligence and police agencies to perform vital monitoring tasks.

The app developers note that even if governments do try to implement mandatory decryption (aka backdoor) capabilities, actually getting those tools to work properly, without opening up a massive new security hole in the platforms that miscreants and criminals could exploit, would be next to impossible.

Source: Germany mulls giving end-to-end chat app encryption das boot: Law requiring decrypted plain-text is in the works • The Register

Whatever happened to mail confidentiality then?

Google Now Forces Microsoft Edge Preview Users to Use Chrome for the Modern YouTube Experience – a bit like they fuck around with Firefox

Microsoft started testing a new Microsoft Edge browser based on Chromium a little while ago. The company has been releasing new canary and dev builds for the browser over the last few weeks, and the preview is actually really great. In fact, I have been using the new Microsoft Edge Canary on my main Windows machine and my MacBook Pro for more than a month, and it’s really good.

But if you watch YouTube quite a lot, you will face a new problem on the new Edge. It turns out, Google has randomly disabled the modern YouTube experience for users of the new Microsoft Edge. Users are now redirected to the old YouTube experience, which lacks the modern design as well as the dark theme for YouTube, as first spotted by Gustave Monce. And when you try to manually access the new YouTube from youtube.com/new, YouTube simply asks users to download Google Chrome, stating that the Edge browser isn’t supported. Ironically, the same page states “We support the latest versions of Chrome, Firefox, Opera, Safari, and Edge.”

The change affects the latest versions of Microsoft Edge Canary and Dev channels. It is worth noting that the classic Microsoft Edge based on EdgeHTML continues to work fine with the modern YouTube experience.

The weird thing here is that Microsoft has been working closely with Google engineers on the new Edge and Chromium. Both companies’ engineers are collaborating to improve Chromium and introduce new features like ARM64 support. So it’s very odd that Google would prevent users of the new Microsoft Edge browser from using the modern YouTube experience. This is most likely an error on Google’s part, but it could be intentional, too — we really don’t know for now.

Source: Google Now Forces Microsoft Edge Preview Users to Use Chrome for the Modern YouTube Experience – Thurrott.com

See also:
Google isn’t the company that we should have handed the Web over to: why MS switching to Chromium is a bad idea

Bose headphones spy on listeners, sell that information on without consent or knowledge: lawsuit

Bose Corp spies on its wireless headphone customers by using an app that tracks the music, podcasts and other audio they listen to, and violates their privacy rights by selling the information without permission, a lawsuit charged.

The complaint filed on Tuesday by Kyle Zak in federal court in Chicago seeks an injunction to stop Bose’s “wholesale disregard” for the privacy of customers who download its free Bose Connect app from Apple Inc or Google Play stores to their smartphones.

[…]

After paying $350 for his QuietComfort 35 headphones, Zak said he took Bose’s suggestion to “get the most out of your headphones” by downloading its app, and providing his name, email address and headphone serial number in the process.

But the Illinois resident said he was surprised to learn that Bose sent “all available media information” from his smartphone to third parties such as Segment.io, whose website promises to collect customer data and “send it anywhere.”

Audio choices offer “an incredible amount of insight” into customers’ personalities, behavior, politics and religious views, the complaint said, citing as an example that a person who listens to Muslim prayers might “very likely” be a Muslim.

“Defendants’ conduct demonstrates a wholesale disregard for consumer privacy rights,” the complaint said.

Zak is seeking millions of dollars of damages for buyers of headphones and speakers, including QuietComfort 35, QuietControl 30, SoundLink Around-Ear Wireless Headphones II, SoundLink Color II, SoundSport Wireless and SoundSport Pulse Wireless.

He also wants a halt to the data collection, which he said violates the federal Wiretap Act and Illinois laws against eavesdropping and consumer fraud.

Dore, a partner at Edelson PC, said customers do not see the Bose app’s user service and privacy agreements when signing up, and the privacy agreement says nothing about data collection.

Edelson specializes in suing technology companies over alleged privacy violations.

The case is Zak v Bose Corp, U.S. District Court, Northern District of Illinois, No. 17-02928.

Source: Bose headphones spy on listeners: lawsuit | Article [AMP] | Reuters

Phone makers and carriers receive your location data, friends and more that Facebook pulls from your phone

A confidential Facebook document reviewed by The Intercept shows that the social network courts carriers, along with phone makers — some 100 different companies in 50 countries — by offering the use of even more surveillance data, pulled straight from your smartphone by Facebook itself.

Offered to select Facebook partners, the data includes not just technical information about Facebook members’ devices and use of Wi-Fi and cellular networks, but also their past locations, interests, and even their social groups. This data is sourced not just from the company’s main iOS and Android apps, but from Instagram and Messenger as well. The data has been used by Facebook partners to assess their standing against competitors, including customers lost to and won from them, but also for more controversial uses like racially targeted ads.

[…]

Facebook’s cellphone partnerships are particularly worrisome because of the extensive surveillance powers already enjoyed by carriers like AT&T and T-Mobile: Just as your internet service provider is capable of watching the data that bounces between your home and the wider world, telecommunications companies have a privileged vantage point from which they can glean a great deal of information about how, when, and where you’re using your phone. AT&T, for example, states plainly in its privacy policy that it collects and stores information “about the websites you visit and the mobile applications you use on our networks.” Paired with carriers’ calling and texting oversight, that accounts for just about everything you’d do on your smartphone.

[…]

the Facebook mobile app harvests and packages eight different categories of information […] These categories include use of video, demographics, location, use of Wi-Fi and cellular networks, personal interests, device information, and friend homophily, an academic term of art. A 2017 article on social media friendship from the Journal of the Society of Multivariate Experimental Psychology defined “homophily” in this context as “the tendency of nodes to form relations with those who are similar to themselves.” In other words, Facebook is using your phone to not only provide behavioral data about you to cellphone carriers, but about your friends as well.
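
For what it’s worth, “homophily” is easy to make concrete: it is essentially the share of friendships that connect people who share an attribute. A toy example with made-up data:

    # Hypothetical friendship graph: edges between user IDs, one attribute per user.
    edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]
    interest = {"a": "metal", "b": "metal", "c": "metal", "d": "jazz"}

    # Homophily here: the fraction of edges whose endpoints share the attribute.
    same = sum(1 for u, v in edges if interest[u] == interest[v])
    print(same / len(edges))  # 0.75, i.e. most friendships link similar users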

Source: Facebook’s Work With Phone Carriers Alarms Legal Experts

Bits of Freedom calls for a halt to the shocking amounts of personal data sent to all and sundry by Real Time Bidding advertising

During RTB, personal data such as what you read online, what you watch, your location, your sexual orientation, etc is sent to a whole slew of advertisers so they can select you as a target to show their adverts to. This, together with other profiling information sent, can be used to build up a long-term profile of you and to identify you. There is no control over what happens to this data once it has been sent. This is clearly contrary to the spirit of the AVG / GDPR. The two standard RTB frameworks – Google’s Authorized Buyers and IAB’s OpenRTB – both refuse to accept any responsibility for personal information, whilst both encourage and facilitate the trade in it.
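
To make “sent to a whole slew of advertisers” concrete: in OpenRTB-style bidding, the exchange broadcasts a bid request for every page view to many bidders at once. A simplified, hypothetical sketch of what such a request can carry (field names loosely follow the OpenRTB spec; every value below is invented):

    # Hypothetical, heavily simplified OpenRTB-style bid request, broadcast to
    # dozens or hundreds of bidders before the page has even finished loading.
    bid_request = {
        "id": "auction-001",                    # unique auction ID
        "site": {"page": "https://example.org/coming-out-advice"},  # what you are reading
        "device": {
            "ua": "Mozilla/5.0 (Android ...)",  # browser/device fingerprinting material
            "ip": "203.0.113.42",               # coarse location via IP address
            "geo": {"lat": 52.37, "lon": 4.90}, # sometimes precise coordinates
        },
        "user": {
            "buyeruid": "9d2c0f-example",       # bidder-specific tracking ID
            "data": [{"segment": [{"name": "interest", "value": "lgbt"}]}],
        },
    }

Every bidder receives a request like this whether it wins the auction or not, which is precisely why there is no control over the data once it has been sent.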

Source: Bits of Freedom: stop met grootschalig lekken van persoonsgegevens bij real time bidding – Emerce

Google tracks your purchase history through Gmail and puts it on https://myaccount.google.com/purchases

Google tracks a lot of what you buy, even if you purchased it elsewhere, like in a store or from Amazon.

Last week, CEO Sundar Pichai wrote a New York Times op-ed that said “privacy cannot be a luxury good.” But behind the scenes, Google is still collecting a lot of personal information from the services you use, such as Gmail, and some of it can’t be easily deleted.

A page called “Purchases” shows an accurate list of many — though not all — of the things I’ve bought dating back to at least 2012. I made these purchases using online services or apps such as Amazon, DoorDash or Seamless, or in stores such as Macy’s, but never directly through Google.

But because the digital receipts went to my Gmail account, Google has a list of info about my buying habits.

[…]

But there isn’t an easy way to remove all of this. You can delete all the receipts in your Gmail inbox and archived messages. But, if you’re like me, you might save receipts in Gmail in case you need them later for returns. There is no way to delete them from Purchases without also deleting them from Gmail — when you click on the “Delete” option in Purchases, it simply guides you back to the Gmail message.

[…]

Google’s privacy page says that only you can view your purchases. But it says “Information about your orders may also be saved with your activity in other Google services” and that you can see and delete this information on a separate “My Activity” page.

Except you can’t. Google’s activity controls page doesn’t give you any ability to manage the data it stores on Purchases.

Google told CNBC you can turn off the tracking entirely, but you have to go to another page for search setting preferences. However, when CNBC tried this, it didn’t work — there was no such option to fully turn off the tracking. It’s weird this isn’t front and center on Google’s new privacy pages or even in Google’s privacy checkup feature.

Google says it doesn’t use your Gmail to show you ads and promises it “does not sell your personal information, which includes your Gmail and Google Account information,” and does “not share your personal information with advertisers, unless you have asked us to.”

But, for reasons that still aren’t clear, it’s pulling that information out of your Gmail and dumping it into a “Purchases” page most people don’t seem to know exists.

Source: Google Gmail tracks purchase history — how to delete it

Freed whistleblower Chelsea Manning back in jail for refusing to testify before secret grand jury

After seven days of freedom, US Army whistleblower Chelsea Manning is back behind bars for refusing to testify before a secret federal grand jury investigating WikiLeaks.

District Court Judge Anthony Trenga ordered Manning back to prison, and said she will, in addition, be fined $500 a day for the first 30 days in the clink, and $1,000 a day after that, until she testifies. Manning previously served 63 days in the cooler for refusing to talk, 28 of which were in solitary confinement.

“We are of course disappointed with the outcome of today’s hearing, but I anticipate it will be exactly as coercive as the previous sanction — which is to say not at all,” her attorney Moira Meltzer-Cohen said in a statement on Thursday.

“In 2010 Chelsea made a principled decision to let the world see the true nature of modern asymmetric warfare. It is telling that the United States has always been more concerned with the disclosure of those documents than with the damning substance of the disclosures.”

The grand jury, which was kept secret until a typo revealed its existence, is investigating the 2010 WikiLeaks publication of US State Department cables and the Collateral Murder video showing two journalists being killed in Iraq by US forces, as well as other documents relating to the ongoing wars in Iraq and Afghanistan.

[…]

After nearly seven years behind bars, Manning had her sentence commuted by President Obama, and was a free woman, for a while. Her refusal to testify in front of a secret grand jury on the grounds that they are undemocratic means she has now been taken into custody again until she changes her mind.

“Facing jail again, potentially today, doesn’t change my stance,” she said before today’s hearing.

“The prosecutors are deliberately placing me in an impossible position: go to jail and face the prospect of being held in contempt again or forgoing my principles and the strong positions that I hold dear. The latter is a far worse jail than the government can produce.”

Source: Freed whistleblower Chelsea Manning back in jail for refusing to testify before secret grand jury • The Register

‘Seasteader’ Now on the Run For His Life from Thai Authorities who overran their seastead

An American bitcoin trader and his girlfriend became the first couple to actually live on a “seastead” — a 20-meter octagon floating in international waters a full 12 nautical miles from Thailand.

Long-time Slashdot reader SonicSpike shared this article from the libertarian Foundation for Economic Education describing what happened next: [W]hile they got to experience true sovereignty for a handful of weeks, their experiment was cut short after the Thai government declared that their seastead was a threat to its national sovereignty… Asserting that [their seastead] “Exly” was still within Thailand’s 200-mile exclusive economic zone, the government made plans to charge the couple with threatening Thailand’s national sovereignty, a crime punishable by death. However, before the Thai Navy could come to detain the couple, they were tipped off and managed to escape. They are now on the run, fleeing for their lives.
Venture capitalist and PayPal co-founder Peter Thiel has donated over $1 million to the Seasteading Institute — though news about this first experiment must be discouraging. “We lived on a floating house boat for a few weeks and now Thailand wants us killed,” one of the seasteaders posted on his Facebook feed.

Last week the Arizona Republic reported that since the Thai government dismantled his ocean home, he’s been “on the run” for over two weeks.

Source: Bitcoin-Trading ‘Seasteader’ Now on the Run For His Life – Slashdot

All Chromebooks will also be Linux laptops going forward – the catch: on top of Chrome OS in a VM container. So not really a Linux laptop then.

At Google I/O in Mountain View, Google quietly let slip that “all devices [Chromebook] launched this year will be Linux-ready right out of the box.” Wait. What?

In case you’ve missed it, last year, Google started making it possible to run desktop Linux on Chrome OS. Since then, more Chromebook devices are able to run Linux. Going forward, all of them will be able to do so, too. Yes. All of them. ARM and Intel-based.

This isn’t surprising. Chrome OS, after all, is built on Linux. Chrome OS started as a spin off of Ubuntu Linux. It then migrated to Gentoo Linux and evolved into Google’s own take on the vanilla Linux kernel. But its interface remains the Chrome web browser UI — to this day.

Earlier, you could run Debian, Ubuntu and Kali Linux on Chrome OS using the open-source Crouton program in a chroot container. Or, you could run Gallium OS, a third-party, Xubuntu Chromebook-specific Linux variant. But it wasn’t easy.

Now? It’s as simple as simple can be. Just open the Chrome OS app switcher by pressing the Search/Launcher key and then type “Terminal”. This launches the Termina VM, which will start running a Debian 9.0 Stretch Linux container.

Congratulations! You’re now running Debian Linux on your Chromebook.

Source: All Chromebooks will also be Linux laptops going forward | ZDNet

Which means that you’re not really running Linux on the hardware, but in a virtual machine. Which means that Google sees everything you do.

Your Kid’s Echo Dot May Be Storing Data Even After You ‘Delete’ It

When Amazon launched its kid’s version of the Echo Dot smart speaker a year ago, we hoped it would be a technological blessing, rather than a curse. But as further proof that private information is no longer sacred, a complaint filed yesterday with the Federal Trade Commission alleges that the devices are unlawfully storing kids’ data—even after parents attempt to delete it.

Child and privacy advocacy groups—most notably the Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy—submitted a 96-page complaint with the FTC that alleges, in part, that:

  • Amazon’s process for reviewing personal information places undue burden on parents. (Parents cannot search through the information and must instead read or listen to every voice recording of their child’s interaction with the device in order to review.)
  • Amazon’s parental consent mechanism does not provide assurance that the person giving consent is the parent of the child.
  • Amazon does not disclose which “kid skills”—developed by third parties—collect child personal information or what they collect. It tells parents to read the privacy policy of each kid skill, but the vast majority did not provide individual privacy policies.
  • Amazon does not give notice or obtain parental consent before recording the voices of children who do not live in the home (visiting friends, family, etc.) with the owner of the device. They advertise having the technology to create voice profiles for customized user experiences but fail to use it to stop information collection from unrecognized children.
  • Amazon’s website and literature directs parents trying to delete information collected about their child to the voice recording deletion page and fails to disclose that deleting voice recordings does not delete the underlying information.
  • Amazon keeps children’s personal information longer than reasonably necessary. It only deletes information if a parent explicitly requests deletion by contacting customer service; otherwise it is retained forever.

To further prove its point, the CCFC performed a test in which a child told Alexa to “remember” a fake name, social security number, telephone number, address and food allergy. Alexa remembered and repeated the information, despite several attempts by an adult to delete or edit it.

In response to the complaint, an Amazon spokesperson said in an email, “FreeTime on Alexa and Echo Dot Kids Edition are compliant with the Children’s Online Privacy Protection Act (COPPA),” and directed users to more information on its privacy practices here.

Source: Your Kid’s Echo Dot May Be Storing Data Even After You ‘Delete’ It

Manning immediately ordered to appear before new U.S. grand jury as she is freed from jail

Former U.S. Army intelligence analyst Chelsea Manning, who was being detained for refusing to testify before a grand jury, was released on Thursday and immediately summoned to appear before a new grand jury next week, her lawyers said.

[…]

Manning was released after the term expired for the previous grand jury in Virginia that was seeking her testimony in connection with what is believed to be the government’s long-running investigation into WikiLeaks and its founder Julian Assange.

She was simultaneously subpoenaed to appear before a different grand jury on May 16, meaning she could be found in contempt again for refusing to testify and returned to jail, her lawyers said in a statement.

Manning had appeared before the grand jury in early March but declined to answer questions.

She was jailed for 62 days for contempt of court. A U.S. appeals court denied her request to be released on bail and upheld the lower court’s decision to hold her in civil contempt for refusing to testify.

“Chelsea will continue to refuse to answer questions, and will use every available legal defense to prove to District Judge (Anthony) Trenga that she has just cause for her refusal to give testimony,” the statement said.

It is unclear exactly why federal prosecutors want Manning to testify, although her representatives say the questions she was asked concern the release of information she disclosed to the public in 2010 through WikiLeaks.

Source: Manning ordered to appear before new U.S. grand jury as she is freed from jail – Reuters

Nice one, democracy. Not.

China’s Mass Surveillance App Hacked; Code Reveals Specific Criteria For Illegal Oppression of specific minorities

Human Rights Watch got their hands on an app used by Chinese authorities in the western Xinjiang region to surveil, track and categorize the entire local population – particularly the 13 million or so Turkic Muslims subject to heightened scrutiny, of which around one million are thought to live in cultural ‘reeducation’ camps.

By “reverse engineering” the code in the “Integrated Joint Operations Platform” (IJOP) app, HRW was able to identify the exact criteria authorities rely on to ‘maintain social order.’ Of note, IJOP is “central to a larger ecosystem of social monitoring and control in the region,” and similar to systems being deployed throughout the entire country.

The platform targets 36 types of people for data collection, from those who have “collected money or materials for mosques with enthusiasm,” to people who stop using smartphones.

[A]uthorities are collecting massive amounts of personal information—from the color of a person’s car to their height down to the precise centimeter—and feeding it into the IJOP central system, linking that data to the person’s national identification card number. Our analysis also shows that Xinjiang authorities consider many forms of lawful, everyday, non-violent behavior—such as “not socializing with neighbors, often avoiding using the front door”—as suspicious. The app also labels the use of 51 network tools as suspicious, including many Virtual Private Networks (VPNs) and encrypted communication tools, such as WhatsApp and Viber. –Human Rights Watch

Another method of tracking is the “Four Associations”

The IJOP app suggests Xinjiang authorities track people’s personal relationships and consider broad categories of relationship problematic. One category of problematic relationships is called “Four Associations” (四关联), which the source code suggests refers to people who are “linked to the clues of cases” (关联案件线索), people “linked to those on the run” (关联在逃人员), people “linked to those abroad” (关联境外人员), and people “linked to those who are being especially watched” (关联关注人员). –HRW

An extremely detailed look at the data collected and how the app works can be found in the actual report.

[…]

When IJOP detects a deviation from normal parameters, such as when a person uses a phone not registered to them, or when they use more electricity than what would be considered “normal,” or when they travel to an unauthorized area without police permission, the system flags these deviations as “micro-clues”, which authorities use to gauge the level of suspicion a citizen should fall under.

IJOP also monitors personal relationships – some of which are deemed inherently suspicious, such as relatives who have obtained new phone numbers or who maintain foreign links.

Chinese authorities justify the surveillance as a means to fight terrorism. To that end, IJOP checks for terrorist content and “violent audio-visual content” when surveilling phones and software. It also flags “adherents of Wahhabism,” the ultra-conservative form of Islam accused of being a “source of global terrorism”.

[…]

Meanwhile, under the broader “Strike Hard Campaign”, authorities in Xinjiang are also collecting “biometrics, including DNA samples, fingerprints, iris scans, and blood types of all residents in the region ages 12 to 65,” according to the report, which adds that “the authorities require residents to give voice samples when they apply for passports.”

The Strike Hard Campaign has shown complete disregard for the rights of Turkic Muslims to be presumed innocent until proven guilty. In Xinjiang, authorities have created a system that considers individuals suspicious based on broad and dubious criteria, and then generates lists of people to be evaluated by officials for detention. Official documents state that individuals “who ought to be taken, should be taken,” suggesting the goal is to maximize the number of people they find “untrustworthy” in detention. Such people are then subjected to police interrogation without basic procedural protections. They have no right to legal counsel, and some are subjected to torture and mistreatment, for which they have no effective redress, as we have documented in our September 2018 report. The result is Chinese authorities, bolstered by technology, arbitrarily and indefinitely detaining Turkic Muslims in Xinjiang en masse for actions and behavior that are not crimes under Chinese law.

Read the entire report from Human Rights Watch here.

Source: China’s Mass Surveillance App Hacked; Code Reveals Specific Criteria For Illegal Oppression | Zero Hedge

Google gives Chrome 3rd party cookie controls – which still allow it to track you, while stopping rivals from being able to do so

Google I/O: Google, the largest handler of web cookies, plans to change the way its Chrome browser deals with the tokens, ostensibly to promote greater privacy, following similar steps taken by rival browser makers Apple, Brave, and Mozilla.

At Google I/O 2019 on Tuesday, Google’s web platform director Ben Galbraith announced the plan, which has begun to appear as a hidden opt-in feature in Chrome Canary – a version of Chrome for developer testing – and is expected to evolve over the coming months.

When a website creates a cookie on a visitor’s device for its own domain, it’s called a first-party cookie. Websites may also send responses to visitor page requests that refer to resources on a third-party domain, like a one-pixel tracking image hosted by an advertising site. By attempting to load that invisible image, the visitor enables the ad site to set a third-party cookie, if the user’s browser allows it.
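
As a sketch of how the tracking-pixel pattern described above works in practice, here is a toy third-party server in Python/Flask (purely illustrative; the domain, cookie name and behaviour are hypothetical, not any real ad network’s code):

    from uuid import uuid4
    from flask import Flask, request, make_response

    app = Flask(__name__)  # plays the role of the third-party ad/tracking domain

    @app.route("/pixel.gif")
    def pixel():
        # Reuse the visitor's existing ID cookie if present, otherwise mint one.
        uid = request.cookies.get("uid", str(uuid4()))
        # A real tracker would return a 1x1 transparent GIF; an empty 204
        # response is enough for this sketch, the cookie still gets set.
        resp = make_response("", 204)
        # Set on the tracker's domain while the visitor is browsing someone
        # else's site, which is what makes it a third-party cookie.
        resp.set_cookie("uid", uid, secure=True, samesite="None")
        # The Referer header tells the tracker which page embedded the pixel.
        print(uid, request.headers.get("Referer"))
        return resp

    if __name__ == "__main__":
        app.run(port=8000)

Every page that embeds an image pointing at this endpoint then reports the same "uid" back to the tracker, which is how browsing histories get stitched together across sites.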

Third-party cookies can have legitimate uses. They can help maintain state across sessions. For example, they can provide a way to view an embedded YouTube video (the third party on someone else’s website) without forcing a site visitor already logged into YouTube to navigate to YouTube, log in and return.

But they can also be abused, which is why browser makers have implemented countermeasures. Apple uses WebKit’s Intelligent Tracking Protection for example to limit third-party cookies. Brave and Firefox block third party requests and cookies by default.

[…]

Augustine Fou, a cybersecurity and ad fraud researcher who advises companies about online marketing, told The Register that while Google’s cookie changes will benefit consumer privacy, they’ll be devastating for the rest of the ad tech business.

“It’s really great for Google’s own bottom line because all their users are logged in to various Google services anyway, and Google has consent/permission to advertise and personalize ads with the data,” he said.

In a phone interview with The Register, Johnny Ryan, chief policy and industry relations officer at browser maker Brave, expressed disbelief that Google makes it sound as if it’s opposed to tracking.

“Google isn’t just the biggest tracker, it’s the biggest workaround actor of tracking prevention yet,” he said, pointing to the company’s efforts to bypass tracking protection in Apple’s Safari browser.

In 2012, Google agreed to pay $22.5m to settle Federal Trade Commission charges that it “placed advertising tracking cookies on consumers’ computers, in many cases by circumventing the Safari browser’s default cookie-blocking setting.”

Ryan explained that last year Google implemented a forced login system that automatically signs Chrome into the user’s Google account whenever the user signs into a single Google application like Gmail.

“When the browser knows everything you’re doing, you don’t need to track anything else,” he said. “If you’re signed into Chrome, everything goes to Google.”

But other ad companies will know less, which will make them less competitive. “In real-time ad bidding, where Google’s DoubleClick is already by far the biggest player, Google will have a huge advantage because the Google cookie, the only cookie across websites, will have so much more valuable bid responses from advertisers.”

Source: Google puts Chrome on a cookie diet (which just so happens to starve its rivals, cough, cough…) • The Register

EU Votes to Amass a Giant Centralised Database of Biometric Data with 350m people in it

The European Parliament has voted by a significant margin to streamline its systems for managing everything from travel to border security by amassing an enormous information database that will include biometric data and facial images—an issue that has raised significant alarm among privacy advocates.

This system, called the Common Identity Repository (CIR), streamlines a number of functions, including the ability for officials to search a single database rather than multiple ones, with shared biometric data like fingerprints and images of faces, as well as a repository with personally identifying information like date of birth, passport numbers, and more. According to ZDNet, CIR comprises one of the largest tracking databases on the planet.

The CIR will also amass the records of more than 350 million people into a single database containing the identifying information on both citizens and non-citizens of the EU, ZDNet reports. According to Politico Europe, the new system “will grant officials access to a person’s verified identity with a single fingerprint scan.”

This system has received significant criticism from those who argue there are serious privacy rights at stake, with civil liberties advocacy group Statewatch asserting last year that it would lead to the “creation of a Big Brother centralised EU state database.”

The European Parliament has said the system “will make EU information systems used in security, border and migration management interoperable enabling data exchange between the systems.” The idea is that it will also make obtaining information a faster and more effective process, which is either great or nightmarish depending on your trust in government data collection and storage.

[…]

The CIR was approved through two separate votes: one for merging systems used for things related to visas and borders was approved 511 to 123 (with nine abstentions), and the other for streamlining systems used for law enforcement, judicial, migration, and asylum matters, which was approved 510 to 130 (also with nine abstentions). If this sounds like the handiwork of some serious lobbying, you might be correct, as one European Parliament official told Politico Europe.

A European Commission official told the outlet that they didn’t “think anyone understands what they’re voting for.” So that’s reassuring.

Source: EU Votes to Amass a Giant Database of Biometric Data

Because centralised databases are never leaked or hacked. Wait…

Is Alexa Listening? Amazon Employees Can Access Home Addresses, telephone numbers, contacts

An Amazon.com Inc. team auditing Alexa users’ commands has access to location data and can, in some cases, easily find a customer’s home address, according to five employees familiar with the program.

The team, spread across three continents, transcribes, annotates and analyzes a portion of the voice recordings picked up by Alexa. The program, whose existence Bloomberg revealed earlier this month, was set up to help Amazon’s digital voice assistant get better at understanding and responding to commands.

Team members with access to Alexa users’ geographic coordinates can easily type them into third-party mapping software and find home residences, according to the employees, who signed nondisclosure agreements barring them from speaking publicly about the program.

While there’s no indication Amazon employees with access to the data have attempted to track down individual users, two members of the Alexa team expressed concern to Bloomberg that Amazon was granting unnecessarily broad access to customer data that would make it easy to identify a device’s owner.

[…]

Some of the workers charged with analyzing recordings of Alexa customers use an Amazon tool that displays audio clips alongside data about the device that captured the recording. Much of the information stored by the software, including a device ID and customer identification number, can’t be easily linked back to a user.

However, Amazon also collects location data so Alexa can more accurately answer requests, for example suggesting a local restaurant or giving the weather in nearby Ashland, Oregon, instead of distant Ashland, Michigan.

[…]

It’s unclear how many people have access to that system. Two Amazon employees said they believed the vast majority of workers in the Alexa Data Services group were, until recently, able to use the software.

[…]

A second internal Amazon software tool, available to a smaller pool of workers who tag transcripts of voice recordings to help Alexa categorize requests, stores more personal data, according to one of the employees.

After punching in a customer ID number, those workers, called annotators and verifiers, can see the home and work addresses and phone numbers customers entered into the Alexa app when they set up the device, the employee said. If a user has chosen to share their contacts with Alexa, their names, numbers and email addresses also appear in the dashboard.

[…]

Amazon appears to have been restricting the level of access employees have to the system.

One employee said that, as recently as a year ago, an Amazon dashboard detailing a user’s contacts displayed full phone numbers. Now, in that same panel, some digits are obscured.

Amazon further limited access to data after Bloomberg’s April 10 report, two of the employees said. Some data associates, who transcribe, annotate and verify audio recordings, arrived for work to find that they no longer had access to software tools they had previously used in their jobs, these people said. As of press time, their access had not been restored.

Source: Is Alexa Listening? Amazon Employees Can Access Home Addresses – Bloomberg

‘They’re Basically Lying’ – (Mental) Health Apps Caught Secretly Sharing Data

“Free apps marketed to people with depression or who want to quit smoking are hemorrhaging user data to third parties like Facebook and Google — but often don’t admit it in their privacy policies, a new study reports…” writes The Verge.

“You don’t have to be a user of Facebook’s or Google’s services for them to have enough breadcrumbs to ID you,” warns Slashdot schwit1. From the article: By intercepting the data transmissions, they discovered that 92 percent of the 36 apps shared the data with at least one third party — mostly Facebook- and Google-run services that help with marketing, advertising, or data analytics. (Facebook and Google did not immediately respond to requests for comment.) But about half of those apps didn’t disclose that third-party data sharing, for a few different reasons: nine apps didn’t have a privacy policy at all; five apps did but didn’t say the data would be shared this way; and three apps actively said that this kind of data sharing wouldn’t happen. Those last three are the ones that stood out to Steven Chan, a physician at Veterans Affairs Palo Alto Health Care System, who has collaborated with Torous in the past but wasn’t involved in the new study. “They’re basically lying,” he says of the apps.
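
The interception methodology the study relied on is reproducible with off-the-shelf tools; here is a minimal sketch of a mitmproxy addon that logs which third-party hosts an app contacts (illustrative only, not the researchers’ actual tooling, and the tracker list is just an example):

    # Save as third_party_log.py and run with:  mitmproxy -s third_party_log.py
    # The phone's traffic must be routed through the proxy (e.g. via Wi-Fi proxy
    # settings) and the mitmproxy CA certificate installed on the device.
    from mitmproxy import http

    TRACKER_DOMAINS = ("facebook.com", "google-analytics.com",
                       "doubleclick.net", "crashlytics.com")

    def request(flow: http.HTTPFlow) -> None:
        host = flow.request.pretty_host
        if any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS):
            # The destinations alone are revealing; the payloads often carry
            # device IDs or usage events the app's privacy policy never mentions.
            size = len(flow.request.content or b"")
            print(f"{flow.request.method} {host} ({size} bytes)")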

Part of the problem is the business model for free apps, the study authors write: since insurance might not pay for an app that helps users quit smoking, for example, the only ways for a free app developer to stay afloat are to either sell subscriptions or sell data. And if that app is branded as a wellness tool, the developers can skirt laws intended to keep medical information private.
A few apps even shared what The Verge calls “very sensitive information” like self reports about substance use and user names.

Source: ‘They’re Basically Lying’ – Mental Health Apps Caught Secretly Sharing Data – Slashdot

Personal information on sites about faith, illness, sexual orientation, addiction, schools in NL is directly passed on to advertisers without GDPR consent.

Websites with information about sensitive subjects are flouting the privacy law on a massive scale, says the Consumentenbond (the Dutch consumers’ association). Many sites place cookies from advertising networks without consent, handing those networks very personal information about their visitors.

In March and April, researchers at the Consumentenbond searched for subjects in the categories faith, youth, medical and sexual orientation. Via search queries about, among other things, depression, addiction, sexual orientation and cancer, they ended up at 106 websites.

Almost half of those sites placed one or more advertising cookies immediately on visiting, so without the visitor’s consent, almost always from Google. Websites such as CIP.nl, Refoweb.nl and scholieren.com even placed dozens of them. Ouders.nl took it furthest and placed no fewer than 37 cookies.

A sizeable number of mental health care institutions also stood out. Among others, ggzdrenthe.nl, connection-sggz.nl, parnassiagroep.nl and lentis.nl tracked their visitors’ surfing behaviour without asking and passed this information on to Google.

The AVG (GDPR) privacy law has now been in force for a year, but according to the consumer association it is worrying how poorly the law is being complied with.

Source: ‘Persoonlijke informatie niet veilig bij sites over geloof, ziekte en geaardheid’ – Emerce

Apple killing right to repair bill

The bill has been pulled by its sponsor, Susan Talamantes-Eggman: “It became clear that the bill would not have the support it needed today, and manufacturers had sown enough doubt with vague and unbacked claims of privacy and security concerns,” she said. Her full statement has been added at the end of the piece.

In recent weeks, an Apple representative and a lobbyist for CompTIA, a trade organization that represents big tech companies, have been privately meeting with legislators in California to encourage them to kill legislation that would make it easier for consumers to repair their electronics, Motherboard has learned.

According to two sources in the California State Assembly, the lobbyists have met with members of the Privacy and Consumer Protection Committee, which is set to hold a hearing on the bill Tuesday afternoon. The lobbyists brought an iPhone to the meetings and showed lawmakers and their legislative aides the internal components of the phone. The lobbyists said that if improperly disassembled, consumers who are trying to fix their own iPhone could hurt themselves by puncturing the lithium-ion battery, the sources, who Motherboard is not naming because they were not authorized to speak to the media, said.

The argument is similar to one made publicly by Apple executive Lisa Jackson in 2017 at TechCrunch Disrupt, when she said the iPhone is “too complex” for normal people to repair.

[…]

a few weeks after CompTIA and 18 other trade organizations associated with big tech companies—including CTIA and the Entertainment Software Association—sent letters in opposition to the legislation to members of the Assembly’s Privacy and Consumer Protection Committee. One copy of the letter, addressed to committee chairperson Ed Chau and obtained by Motherboard, urges the chairperson “against moving forward with this legislation.” CTIA represents wireless carriers including Verizon, AT&T, and T-Mobile, while the Entertainment Software Association represents Nintendo, Sony, Microsoft, and other video game manufacturers.

“With access to proprietary guides and tools, hackers can more easily circumvent security protections, harming not only the product owner but also everyone who shares their network,” the letter, obtained by Motherboard, stated. “When an electronic product breaks, consumers have a variety of repair options, including using an OEM’s [original equipment manufacturer] authorized repair network.”

Experts, however, say Apple’s and CompTIA’s warnings are far overblown. People with no special training regularly replace the batteries or cracked screens in their iPhones, and there are thousands of small, independent repair companies that regularly fix iPhones without incident. The issue is that many of these companies operate in a grey area because they are forced to purchase replacement parts from third parties in Shenzhen, China, because Apple doesn’t sell them to independent companies unless they become part of the “Apple Authorized Service Provider Program,” which limits the types of repairs they are allowed to do and requires companies to pay Apple a fee to join.

“To suggest that there are safety and security concerns with spare parts and manuals is just patently absurd,” Nathan Proctor, director of consumer rights group US PIRG’s right to repair campaign told Motherboard in a phone call. “We know that all across the country, millions of people are doing this for themselves. Millions more are taking devices to independent repair technicians.”

[…]

“The security of devices is not related to diagnostics and service manuals, they’re related to poor code with vulnerabilities, weak authentication, devices deployed by default to be vulnerable,” Roberts told Motherboard. “We all know there’s no debate. Security for connected devices has nothing to do with repair.”

Source: Apple Is Telling Lawmakers People Will Hurt Themselves if They Try to Fix iPhones – Motherboard

Wow, this is simply ridiculous. Profiteering by the large companies at the expense of smaller companies seems to be something the US government absolutely loves.

Kremlin signs total internet surveillance and censorship system into law, from Nov 1st.

Russia’s internet iron curtain has been formally signed into law by President Putin. The nation’s internet service providers have until 1 November to ensure they comply.

The law will force traffic through government-controlled exchanges and eventually require the creation of a national domain name system.

The bill has been promoted as advancing Russian sovereignty and ensuring Runet, Russia’s domestic internet, remains functioning regardless of what happens elsewhere in the world. The government has claimed “aggressive” US cybersecurity policies justify the move.

Control of exchanges is seen as an easy way for the Russian government to increase its control over what data its citizens can see, and what they can post. The Kremlin wants all data required by the network to be stored within Russian borders.

ISPs will only be allowed to connect to other ISPs, or peer, through approved exchanges. These exchanges will have to include government-supplied boxes which can block data traffic as required.

There have been widespread protests within the country against the law.

Source: Having a bad day? Be thankful you don’t work at a Russian ISP: Kremlin signs off Pootynet restrictions • The Register

Security lapse exposed a Chinese smart city surveillance system

Smart cities are designed to make life easier for their residents: better traffic management by clearing routes, making sure the public transport is running on time and having cameras keeping a watchful eye from above.

But what happens when that data leaks? One such database was open for weeks for anyone to look inside.

Security researcher John Wethington found a smart city database accessible from a web browser without a password. He passed details of the database to TechCrunch in an effort to get the data secured.

[…]

The system monitors the residents around at least two small housing communities in eastern Beijing, the largest of which is Liangmaqiao, known as the city’s embassy district. The system is made up of several data collection points, including cameras designed to collect facial recognition data.

The exposed data contains enough information to pinpoint where people went, when and for how long, allowing anyone with access to the data — including police — to build up a picture of a person’s day-to-day life.

A portion of the database containing facial recognition scans (Image: supplied)

The database processed various facial details, such as if a person’s eyes or mouth are open, if they’re wearing sunglasses, or a mask — common during periods of heavy smog — and if a person is smiling or even has a beard.

The database also contained a subject’s approximate age as well as an “attractive” score, according to the database fields.

But the capabilities of the system have a darker side, particularly given the complicated politics of China.

The system also uses its facial recognition systems to detect ethnicities and labels them — such as “汉族” for Han Chinese, the main ethnic group of China — and also “维族” — or Uyghur Muslims, an ethnic minority under persecution by Beijing.

While ethnicities can help police identify suspects in an area even if they don’t have a name to match, the data can also be used for abuse.

The Chinese government has detained more than a million Uyghurs in internment camps in the past year, according to a United Nations human rights committee. It’s part of a massive crackdown by Beijing on the ethnic minority group. Just this week, details emerged of an app used by police to track Uyghur Muslims.

We also found that the customer’s system pulls in data from the police and uses that information to detect people of interest or criminal suspects, suggesting it may be a government customer.

Facial recognition scans would match against police records in real time (Image: supplied)

Each time a person is detected, the database would trigger a “warning” noting the date, time, location and a corresponding note. Several records seen by TechCrunch include suspects’ names and their national identification card number.

Source: Security lapse exposed a Chinese smart city surveillance system – TechCrunch

Facebook uploaded the contacts of 1.5m people without permission

On Thursday, at just about the same time as the most highly anticipated government document of the decade was released in Washington D.C., Facebook updated a month-old blog post to note that actually a security incident impacted “millions” of Instagram users and not “tens of thousands” as they said at first.

Last month, Facebook announced that hundreds of millions of Facebook and Facebook Lite account passwords were stored in plaintext in a database exposed to over 20,000 employees.

Source: https://www.theregister.co.uk/2019/04/18/facebook_hoovered_up_15m_address_books_without_permission/

Pregnancy and parenting club Bounty fined £400,000 for shady data sharing practices of more than 14 million people

The Information Commissioner’s Office has fined commercial pregnancy and parenting club Bounty some £400,000 for illegally sharing personal details of more than 14 million people.

The organisation, which dishes out advice to expectant and inexperienced parents, has faced criticism over the tactics it uses to sign up new members and was the subject of a campaign to boot its reps from maternity wards.

[…]

the business had also worked as a data brokering service until April last year, distributing data to third parties to then pester unsuspecting folk with electronic direct marketing. By sharing this information and not being transparent about its uses while it was extracting the stuff, Bounty broke the Data Protection Act 1998.

Bounty shared roughly 34.4 million records from June 2017 to April 2018 with credit reference and marketing agencies. Acxiom, Equifax, Indicia and Sky were the four biggest of the 39 companies that Bounty told the ICO it sold stuff to.

This data included details of new mothers and mothers-to-be, but also very young children’s birth dates and their gender.

Source: Pregnancy and parenting club Bounty fined £400,000 for shady data sharing practices • The Register

Chinese stock photo pusher tries to claim copyright on Event Horizon pic, Chinese Flag

China’s largest stock photo flinger has been forced to backtrack after it tried to put its own price tags on images of the first black hole and the Chinese flag.

Visual China Group reportedly tried to hawk the first-ever image of a supermassive black hole and its shadow, which was the painstaking work of boffins running the Event Horizon Telescope.

The website is reported to have tried to suck users into payment, describing the picture, on which it affixed its logo, as an “editorial image” and directed users to dial a customer rep to discuss commercial use.

According to Reuters, the firm said it had obtained a non-exclusive editing licence for the project for media use – but it was widely understood the images were released under a Creative Commons licence, specifically CC BY 4.0.

The pic pushers were also said to have drawn criticism for asking for payment for images such as China’s flag and logos of companies including Baidu.

After the Tianjin city branch of China’s internet overseer stepped in, Visual China apologised and said that it would “learn from these lessons” and “seriously rectify” the problem.

Source: Hole lotta crud: Chinese stock photo pusher tries to claim copyright on Event Horizon pic • The Register

Copyright is such a brilliant system!