Phone makers and carriers receive your location data, friends and more, pulled straight from your phone by Facebook

A confidential Facebook document reviewed by The Intercept shows that the social network courts carriers, along with phone makers — some 100 different companies in 50 countries — by offering the use of even more surveillance data, pulled straight from your smartphone by Facebook itself.

Offered to select Facebook partners, the data includes not just technical information about Facebook members’ devices and use of Wi-Fi and cellular networks, but also their past locations, interests, and even their social groups. This data is sourced not just from the company’s main iOS and Android apps, but from Instagram and Messenger as well. The data has been used by Facebook partners to assess their standing against competitors, including customers lost to and won from them, but also for more controversial uses like racially targeted ads.

[…]

Facebook’s cellphone partnerships are particularly worrisome because of the extensive surveillance powers already enjoyed by carriers like AT&T and T-Mobile: Just as your internet service provider is capable of watching the data that bounces between your home and the wider world, telecommunications companies have a privileged vantage point from which they can glean a great deal of information about how, when, and where you’re using your phone. AT&T, for example, states plainly in its privacy policy that it collects and stores information “about the websites you visit and the mobile applications you use on our networks.” Paired with carriers’ calling and texting oversight, that accounts for just about everything you’d do on your smartphone.

[…]

the Facebook mobile app harvests and packages eight different categories of information […] These categories include use of video, demographics, location, use of Wi-Fi and cellular networks, personal interests, device information, and friend homophily, an academic term of art. A 2017 article on social media friendship from the Journal of the Society of Multivariate Experimental Psychology defined “homophily” in this context as “the tendency of nodes to form relations with those who are similar to themselves.” In other words, Facebook is using your phone not only to provide behavioral data about you to cellphone carriers, but data about your friends as well.
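To make the term concrete, here is a minimal, purely illustrative sketch (not from The Intercept’s reporting) that computes a simple homophily ratio for a toy friendship graph: the share of friendship ties connecting people who share an attribute.

```python
# Minimal, illustrative sketch of "homophily": the fraction of friendship
# edges that connect people who share an attribute. Toy data only; this is
# not Facebook's actual metric or data.
friendships = [("ann", "bob"), ("ann", "cem"), ("bob", "dee"), ("cem", "dee")]
interests = {"ann": "hiking", "bob": "hiking", "cem": "gaming", "dee": "hiking"}

def homophily_ratio(edges, attribute):
    """Share of edges whose two endpoints have the same attribute value."""
    same = sum(1 for a, b in edges if attribute[a] == attribute[b])
    return same / len(edges)

print(homophily_ratio(friendships, interests))  # 0.5: 2 of the 4 ties match
```

A high ratio means your friends tend to resemble you, which is exactly why data about you is also informative about them.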

Source: Facebook’s Work With Phone Carriers Alarms Legal Experts

Bits of Freedom calls for a halt to the shocking amounts of personal data sent out to everyone via Real Time Bidding advertising

During RTB, personal data such as what you read online, what you watch, your location, your sexual orientation, etc. is sent to a whole slew of advertisers so they can select you as a target for their adverts. This, together with other profiling information sent, can be used to build up a long-term profile of you and to identify you. There is no control over what happens to this data once it has been sent. This is clearly contrary to the spirit of the AVG / GDPR. The two standard RTB frameworks – Google’s Authorized Buyers and IAB’s OpenRTB – both refuse to accept any responsibility for personal information, whilst both encourage and facilitate the trade in it.
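To illustrate the mechanism (a simplified, hypothetical example, not taken from the Bits of Freedom complaint), a bid request broadcast to bidders in an OpenRTB-style auction typically carries fields along these lines; every company bidding on the impression receives a copy:

```python
# Simplified, hypothetical OpenRTB-style bid request, shown as a Python dict.
# Field names loosely follow the OpenRTB 2.x spec; all values are invented.
bid_request = {
    "id": "f9a1c2",                                   # auction ID
    "site": {
        "page": "https://example.org/living-with-depression",  # page being read
        "cat": ["IAB7"],                              # content category (health)
    },
    "device": {
        "ua": "Mozilla/5.0 (Linux; Android 9)",       # browser / device details
        "ip": "203.0.113.42",                         # IP address (approximate location)
        "geo": {"lat": 52.37, "lon": 4.89},           # latitude / longitude if available
    },
    "user": {
        "id": "9d2b6a0c",                             # exchange's long-lived user ID
        "buyeruid": "bidder-cookie-id",               # bidder's own cookie-matched ID
    },
}
```

Once hundreds of bidders have received records like this, the sender has no technical means of controlling what they store or with whom they share it, which is the core of the complaint.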

Source: Bits of Freedom: stop met grootschalig lekken van persoonsgegevens bij real time bidding – Emerce

Google tracks purchase history through Gmail, puts it on https://myaccount.google.com/purchases

Google tracks a lot of what you buy, even if you purchased it elsewhere, like in a store or from Amazon.

Last week, CEO Sundar Pichai wrote a New York Times op-ed that said “privacy cannot be a luxury good.” But behind the scenes, Google is still collecting a lot of personal information from the services you use, such as Gmail, and some of it can’t be easily deleted.

A page called “Purchases” shows an accurate list of many — though not all — of the things I’ve bought dating back to at least 2012. I made these purchases using online services or apps such as Amazon, DoorDash or Seamless, or in stores such as Macy’s, but never directly through Google.

But because the digital receipts went to my Gmail account, Google has a list of info about my buying habits.

[…]

But there isn’t an easy way to remove all of this. You can delete all the receipts in your Gmail inbox and archived messages. But, if you’re like me, you might save receipts in Gmail in case you need them later for returns. There is no way to delete them from Purchases without also deleting them from Gmail — when you click on the “Delete” option in Purchases, it simply guides you back to the Gmail message.

[…]

Google’s privacy page says that only you can view your purchases. But it says “Information about your orders may also be saved with your activity in other Google services” and that you can see and delete this information on a separate “My Activity” page.

Except you can’t. Google’s activity controls page doesn’t give you any ability to manage the data it stores on Purchases.

Google told CNBC you can turn off the tracking entirely, but you have to go to another page for search setting preferences. However, when CNBC tried this, it didn’t work — there was no such option to fully turn off the tracking. It’s weird this isn’t front and center on Google’s new privacy pages or even in Google’s privacy checkup feature.

Google says it doesn’t use your Gmail to show you ads and promises it “does not sell your personal information, which includes your Gmail and Google Account information,” and does “not share your personal information with advertisers, unless you have asked us to.”

But, for reasons that still aren’t clear, it’s pulling that information out of your Gmail and dumping it into a “Purchases” page most people don’t seem to know exists.

Source: Google Gmail tracks purchase history — how to delete it

Freed whistleblower Chelsea Manning back in jail for refusing to testify before secret grand jury

After seven days of freedom, US Army whistleblower Chelsea Manning is back behind bars for refusing to testify before a secret federal grand jury investigating WikiLeaks.

District Court Judge Anthony Trenga ordered Manning back to prison, and said she will, in addition, be fined $500 a day for the first 30 days in the clink, and $1,000 a day after that, until she testifies. Manning previously served 63 days in the cooler for refusing to talk, 28 of which were in solitary confinement.

“We are of course disappointed with the outcome of today’s hearing, but I anticipate it will be exactly as coercive as the previous sanction — which is to say not at all,” her attorney Moira Meltzer-Cohen said in a statement on Thursday.

“In 2010 Chelsea made a principled decision to let the world see the true nature of modern asymmetric warfare. It is telling that the United States has always been more concerned with the disclosure of those documents than with the damning substance of the disclosures.”

The grand jury, which was kept secret until a typo revealed its existence, is investigating the 2010 WikiLeaks publication of US State Department cables and the Collateral Murder video showing two journalists being killed in Iraq by US forces, as well as other documents relating to the ongoing wars in Iraq and Afghanistan.

[…]

After nearly seven years behind bars, Manning had her sentence commuted by President Obama, and was a free woman, for a while. Her refusal to testify in front of a secret grand jury on the grounds that they are undemocratic means she has now been taken into custody again until she changes her mind.

“Facing jail again, potentially today, doesn’t change my stance,” she said before today’s hearing.

“The prosecutors are deliberately placing me in an impossible position: go to jail and face the prospect of being held in contempt again or forgoing my principles and the strong positions that I hold dear. The latter is a far worse jail than the government can produce.”

Source: Freed whistleblower Chelsea Manning back in jail for refusing to testify before secret grand jury • The Register

‘Seasteader’ Now on the Run For His Life from Thai Authorities who seized his seastead

An American bitcoin trader and his girlfriend became the first couple to actually live on a “seastead” — a 20-meter octagon floating in international waters a full 12 nautical miles from Thailand.

Long-time Slashdot reader SonicSpike shared this article from the libertarian Foundation for Economic Education describing what happened next: [W]hile they got to experience true sovereignty for a handful of weeks, their experiment was cut short after the Thai government declared that their seastead was a threat to its national sovereignty… Asserting that [their seastead] “Exly” was still within Thailand’s 200-mile exclusive economic zone, the government made plans to charge the couple with threatening Thailand’s national sovereignty, a crime punishable by death. However, before the Thai Navy could come detain the couple, they were tipped off and managed to escape. They are now on the run, fleeing for their lives.
Venture capitalist and PayPal co-founder Peter Thiel has donated over $1 million to the Seasteading Institute — though news about this first experiment must be discouraging. “We lived on a floating house boat for a few weeks and now Thailand wants us killed,” one of the seasteaders posted on his Facebook feed.

Last week the Arizona Republic reported that since the Thai government dismantled his ocean home, he’s been “on the run” for over two weeks.

Source: Bitcoin-Trading ‘Seasteader’ Now on the Run For His Life – Slashdot

All Chromebooks will also be Linux laptops going forward – the catch: Linux runs on top of Chrome OS in a VM container. So not really a Linux laptop then.

At Google I/O in Mountain View, Google quietly let slip that “all devices [Chromebook] launched this year will be Linux-ready right out of the box.” Wait. What?

In case you’ve missed it, last year, Google started making it possible to run desktop Linux on Chrome OS. Since then, more Chromebook devices are able to run Linux. Going forward, all of them will be able to do so, too. Yes. All of them. ARM and Intel-based.

This isn’t surprising. Chrome OS, after all, is built on Linux. Chrome OS started as a spin-off of Ubuntu Linux. It then migrated to Gentoo Linux and evolved into Google’s own take on the vanilla Linux kernel. But its interface remains the Chrome web browser UI — to this day.

Earlier, you could run Debian, Ubuntu and Kali Linux on Chrome OS using the open-source Crouton program in a chroot container. Or, you could run Gallium OS, a third-party, Xubuntu Chromebook-specific Linux variant. But it wasn’t easy.

Now? It’s as simple as simple can be. Just open the Chrome OS app switcher by pressing the Search/Launcher key and then type “Terminal”. This launches the Termina VM, which will start running a Debian 9.0 Stretch Linux container.

Congratulations! You’re now running Debian Linux on your Chromebook.

Source: All Chromebooks will also be Linux laptops going forward | ZDNet

Which means that you’re not really running Linux on the hardware, but in a virtual machine. Which means that Google sees everything you do.
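For what it’s worth, you can see where that shell actually lives from inside it. A small sketch (assuming the stock Crostini setup described above, where the default container is named “penguin”):

```python
# Minimal sketch: run inside the Chromebook's Linux terminal to confirm you
# are in a Debian guest, not on the Chrome OS host itself.
import platform
import socket

print("kernel:  ", platform.release())        # kernel of the Termina VM guest
print("hostname:", socket.gethostname())      # stock Crostini container: "penguin"

with open("/etc/os-release") as f:            # identifies the distro in the container
    for line in f:
        if line.startswith("PRETTY_NAME"):
            print(line.strip())                # e.g. PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
```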

Your Kid’s Echo Dot May Be Storing Data Even After You ‘Delete’ It

When Amazon launched its kids’ version of the Echo Dot smart speaker a year ago, we hoped it would be a technological blessing, rather than a curse. But as further proof that private information is no longer sacred, a complaint filed yesterday with the Federal Trade Commission alleges that the devices are unlawfully storing kids’ data—even after parents attempt to delete it.

Child and privacy advocacy groups—most notably the Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy—submitted a 96-page complaint with the FTC that alleges, in part, that:

  • Amazon’s process for reviewing personal information places undue burden on parents. (Parents cannot search through the information and must instead read or listen to every voice recording of their child’s interaction with the device in order to review.)
  • Amazon’s parental consent mechanism does not provide assurance that the person giving consent is the parent of the child.
  • Amazon does not disclose which “kid skills”—developed by third parties—collect child personal information or what they collect. It tells parents to read the privacy policy of each kid skill, but the vast majority did not provide individual privacy policies.
  • Amazon does not give notice or obtain parental consent before recording the voices of children who do not live in the home (visiting friends, family, etc.) with the owner of the device. They advertise having the technology to create voice profiles for customized user experiences but fail to use it to stop information collection from unrecognized children.
  • Amazon’s website and literature directs parents trying to delete information collected about their child to the voice recording deletion page and fails to disclose that deleting voice recordings does not delete the underlying information.
  • Amazon keeps children’s personal information longer than reasonably necessary. It only deletes information if a parent explicitly requests deletion by contacting customer service; otherwise it is retained forever.

To further prove its point, the CCFC performed a test in which a child told Alexa to “remember” a fake name, social security number, telephone number, address and food allergy. Alexa remembered and repeated the information, despite several attempts by an adult to delete or edit it.

In response to the complaint, an Amazon spokesperson said in an email, “FreeTime on Alexa and Echo Dot Kids Edition are compliant with the Children’s Online Privacy Protection Act (COPPA),” and directed users to more information on its privacy practices here.

Source: Your Kid’s Echo Dot May Be Storing Data Even After You ‘Delete’ It

Manning immediately ordered to appear before new U.S. grand jury as she is freed from jail

Former U.S. Army intelligence analyst Chelsea Manning, who was being detained for refusing to testify before a grand jury, was released on Thursday and immediately summoned to appear before a new grand jury next week, her lawyers said.

[…]

Manning was released after the term expired for the previous grand jury in Virginia that was seeking her testimony in connection with what is believed to be the government’s long-running investigation into WikiLeaks and its founder Julian Assange.

She was simultaneously subpoenaed to appear before a different grand jury on May 16, meaning she could be found in contempt again for refusing to testify and returned to jail, her lawyers said in a statement.

Manning had appeared before the grand jury in early March but declined to answer questions.

She was jailed for 62 days for contempt of court. A U.S. appeals court denied her request to be released on bail and upheld the lower court’s decision to hold her in civil contempt for refusing to testify.

“Chelsea will continue to refuse to answer questions, and will use every available legal defense to prove to District Judge (Anthony) Trenga that she has just cause for her refusal to give testimony,” the statement said.

It is unclear exactly why federal prosecutors want Manning to testify, although her representatives say the questions she was asked concern the release of information she disclosed to the public in 2010 through WikiLeaks.

Source: Manning ordered to appear before new U.S. grand jury as she is freed from jail – Reuters

Nice one, democracy. Not.

China’s Mass Surveillance App Hacked; Code Reveals Specific Criteria For Illegal Oppression of specific minorities

Human Rights Watch got their hands on an app used by Chinese authorities in the western Xinjiang region to surveil, track and categorize the entire local population – particularly the 13 million or so Turkic Muslims subject to heightened scrutiny, of which around one million are thought to live in cultural ‘reeducation’ camps.

By “reverse engineering” the code in the “Integrated Joint Operations Platform” (IJOP) app, HRW was able to identify the exact criteria authorities rely on to ‘maintain social order.’ Of note, IJOP is “central to a larger ecosystem of social monitoring and control in the region,” and similar to systems being deployed throughout the entire country.

The platform targets 36 types of people for data collection, from those who have “collected money or materials for mosques with enthusiasm,” to people who stop using smartphones.

[A]uthorities are collecting massive amounts of personal information—from the color of a person’s car to their height down to the precise centimeter—and feeding it into the IJOP central system, linking that data to the person’s national identification card number. Our analysis also shows that Xinjiang authorities consider many forms of lawful, everyday, non-violent behavior—such as “not socializing with neighbors, often avoiding using the front door”—as suspicious. The app also labels the use of 51 network tools as suspicious, including many Virtual Private Networks (VPNs) and encrypted communication tools, such as WhatsApp and Viber. –Human Rights Watch

Another method of tracking is the “Four Associations”:

The IJOP app suggests Xinjiang authorities track people’s personal relationships and consider broad categories of relationship problematic. One category of problematic relationships is called “Four Associations” (四关联), which the source code suggests refers to people who are “linked to the clues of cases” (关联案件线索), people “linked to those on the run” (关联在逃人员), people “linked to those abroad” (关联境外人员), and people “linked to those who are being especially watched” (关联关注人员). –HRW

An extremely detailed look at the data collected and how the app works can be found in the actual report.

[…]

When IJOP detects a deviation from normal parameters, such as when a person uses a phone not registered to them, or when they use more electricity than what would be considered “normal,” or when they travel to an unauthorized area without police permission, the system flags these deviations as “micro-clues” which authorities use to gauge the level of suspicion a citizen should fall under.
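Purely as an illustration of the kind of rule-based flagging the report describes (the rules and field names below are invented for this sketch, not taken from the reverse-engineered IJOP code), such a system amounts to checking each record against a list of “suspicious” conditions:

```python
# Purely illustrative rule-based flagging of the sort HRW describes.
# Rules and field names are invented; they are not from the IJOP source code.
FLAGGED_APPS = {"whatsapp", "viber", "vpn"}   # report: 51 network tools flagged

def micro_clues(person: dict) -> list:
    """Return the 'micro-clues' a record would trigger under these toy rules."""
    clues = []
    if person["phone_owner_id"] != person["national_id"]:
        clues.append("uses a phone not registered to them")
    if person["monthly_kwh"] > 2 * person["neighbourhood_avg_kwh"]:
        clues.append("abnormal electricity use")
    if FLAGGED_APPS & set(person["installed_apps"]):
        clues.append("uses flagged network tools")
    if person["left_registered_area"] and not person["travel_permit"]:
        clues.append("travelled without police permission")
    return clues

example = {
    "national_id": "650100...", "phone_owner_id": "650923...",
    "monthly_kwh": 310, "neighbourhood_avg_kwh": 120,
    "installed_apps": ["wechat", "whatsapp"],
    "left_registered_area": True, "travel_permit": False,
}
print(micro_clues(example))   # all four toy rules fire for this record
```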

IJOP also monitors personal relationships – some of which are deemed inherently suspicious, such as relatives who have obtained new phone numbers or who maintain foreign links.

Chinese authorities justify the surveillance as a means to fight terrorism. To that end, IJOP checks for terrorist content and “violent audio-visual content” when surveilling phones and software. It also flags “adherents of Wahhabism,” the ultra-conservative form of Islam accused of being a “source of global terrorism.”

[…]

Meanwhile, under the broader “Strike Hard Campaign,” authorities in Xinjiang are also collecting “biometrics, including DNA samples, fingerprints, iris scans, and blood types of all residents in the region ages 12 to 65,” according to the report, which adds that “the authorities require residents to give voice samples when they apply for passports.”

The Strike Hard Campaign has shown complete disregard for the rights of Turkic Muslims to be presumed innocent until proven guilty. In Xinjiang, authorities have created a system that considers individuals suspicious based on broad and dubious criteria, and then generates lists of people to be evaluated by officials for detention. Official documents state that individuals “who ought to be taken, should be taken,” suggesting the goal is to maximize the number of people they find “untrustworthy” in detention. Such people are then subjected to police interrogation without basic procedural protections. They have no right to legal counsel, and some are subjected to torture and mistreatment, for which they have no effective redress, as we have documented in our September 2018 report. The result is Chinese authorities, bolstered by technology, arbitrarily and indefinitely detaining Turkic Muslims in Xinjiang en masse for actions and behavior that are not crimes under Chinese law.

Read the entire report from Human Rights Watch here.

Source: China’s Mass Surveillance App Hacked; Code Reveals Specific Criteria For Illegal Oppression | Zero Hedge

Google gives Chrome 3rd party cookie control – which allows it to keep tracking you while preventing its rivals from doing so

Google I/O Google, the largest handler of web cookies, plans to change the way its Chrome browser deals with the tokens, ostensibly to promote greater privacy, following similar steps taken by rival browser makers Apple, Brave, and Mozilla.

At Google I/O 2019 on Tuesday, Google’s web platform director Ben Galbraith announced the plan, which has begun to appear as a hidden opt-in feature in Chrome Canary – a version of Chrome for developer testing – and is expected to evolve over the coming months.

When a website creates a cookie on a visitor’s device for its own domain, it’s called a first-party cookie. Websites may also send responses to visitor page requests that refer to resources on a third-party domain, like a one-pixel tracking image hosted by an advertising site. By attempting to load that invisible image, the visitor enables the ad site to set a third-party cookie, if the user’s browser allows it.
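As a rough sketch of that mechanism (illustrative only, built on Python’s standard library rather than any real ad-tech stack), a third-party host can serve a 1×1 GIF and attach a cookie to the response; every page that embeds the image lets that host recognise the same browser again:

```python
# Rough, illustrative sketch of a third-party tracking pixel.
# Any site embedding <img src="http://tracker.example:8000/pixel.gif"> lets
# this server set and later re-read its own ("third-party") cookie.
from http.server import BaseHTTPRequestHandler, HTTPServer
import uuid

# Smallest valid transparent 1x1 GIF (43 bytes).
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff!"
         b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
         b"\x00\x02\x02D\x01\x00;")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reuse the browser's existing tracking ID, or mint a new one.
        cookie = self.headers.get("Cookie", "")
        uid = cookie.split("uid=")[-1].split(";")[0] if "uid=" in cookie else uuid.uuid4().hex
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        # SameSite=None; Secure is what lets a cookie travel in third-party contexts.
        self.send_header("Set-Cookie", f"uid={uid}; SameSite=None; Secure")
        self.end_headers()
        self.wfile.write(PIXEL)
        # The Referer header reveals which embedding site the browser was on.
        print("visitor", uid, "seen on", self.headers.get("Referer"))

if __name__ == "__main__":
    HTTPServer(("", 8000), PixelHandler).serve_forever()
```

Blocking or partitioning exactly this kind of cookie is what the browser countermeasures described below are aimed at.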

Third-party cookies can have legitimate uses. They can help maintain state across sessions. For example, they can provide a way to view an embedded YouTube video (the third party in someone else’s website) without forcing a site visitor already logged into YouTube to navigate to YouTube, log in and return.

But they can also be abused, which is why browser makers have implemented countermeasures. Apple uses WebKit’s Intelligent Tracking Protection for example to limit third-party cookies. Brave and Firefox block third party requests and cookies by default.

[…]

Augustine Fou, a cybersecurity and ad fraud researcher who advises companies about online marketing, told The Register that while Google’s cookie changes will benefit consumer privacy, they’ll be devastating for the rest of the ad tech business.

“It’s really great for Google’s own bottom line because all their users are logged in to various Google services anyway, and Google has consent/permission to advertise and personalize ads with the data,” he said.

In a phone interview with The Register, Johnny Ryan, chief policy and industry relations officer at browser maker Brave, expressed disbelief that Google makes it sound as if it’s opposed to tracking.

“Google isn’t just the biggest tracker, it’s the biggest workaround actor of tracking prevention yet,” he said, pointing to the company’s efforts to bypass tracking protection in Apple’s Safari browser.

In 2012, Google agreed to pay $22.5m to settle Federal Trade Commission charges that it “placed advertising tracking cookies on consumers’ computers, in many cases by circumventing the Safari browser’s default cookie-blocking setting.”

Ryan explained that last year Google implemented a forced login system that automatically logs Chrome into the user’s Google account whenever the user signs into a single Google application like Gmail.

“When the browser knows everything you’re doing, you don’t need to track anything else,” he said. “If you’re signed into Chrome, everything goes to Google.”

But other ad companies will know less, which will make them less competitive. “In real-time ad bidding, where Google’s DoubleClick is already by far the biggest player, Google will have a huge advantage because the Google cookie, the only cookie across websites, will have so much more valuable bid responses from advertisers.”

Source: Google puts Chrome on a cookie diet (which just so happens to starve its rivals, cough, cough…) • The Register

EU Votes to Amass a Giant Centralised Database of Biometric Data with 350m people in it

The European Parliament has voted by a significant margin to streamline its systems for managing everything from travel to border security by amassing an enormous information database that will include biometric data and facial images—an issue that has raised significant alarm among privacy advocates.

This system, called the Common Identity Repository (CIR), streamlines a number of functions, including the ability for officials to search a single database rather than multiple ones, with shared biometric data like fingerprints and images of faces, as well as a repository with personally identifying information like date of birth, passport numbers, and more. According to ZDNet, CIR comprises one of the largest tracking databases on the planet.

The CIR will also amass the records of more than 350 million people into a single database containing the identifying information on both citizens and non-citizens of the EU, ZDNet reports. According to Politico Europe, the new system “will grant officials access to a person’s verified identity with a single fingerprint scan.”

This system has received significant criticism from those who argue there are serious privacy rights at stake, with civil liberties advocacy group Statewatch asserting last year that it would lead to the “creation of a Big Brother centralised EU state database.”

The European Parliament has said the system “will make EU information systems used in security, border and migration management interoperable enabling data exchange between the systems.” The idea is that it will also make obtaining information a faster and more effective process, which is either great or nightmarish depending on your trust in government data collection and storage.

[…]

The CIR was approved through two separate votes: one for merging systems used for things related to visas and borders was approved 511 to 123 (with nine abstentions), and the other for streamlining systems used for law enforcement, judicial, migration, and asylum matters, which was approved 510 to 130 (also with nine abstentions). If this sounds like the handiwork of some serious lobbying, you might be correct, as one European Parliament official told Politico Europe.

A European Commission official told the outlet that they didn’t “think anyone understands what they’re voting for.” So that’s reassuring.

Source: EU Votes to Amass a Giant Database of Biometric Data

Because centralised databases are never leaked or hacked. Wait…

Is Alexa Listening? Amazon Employees Can Access Home Addresses, telephone numbers, contacts

An Amazon.com Inc. team auditing Alexa users’ commands has access to location data and can, in some cases, easily find a customer’s home address, according to five employees familiar with the program.

The team, spread across three continents, transcribes, annotates and analyzes a portion of the voice recordings picked up by Alexa. The program, whose existence Bloomberg revealed earlier this month, was set up to help Amazon’s digital voice assistant get better at understanding and responding to commands.

Team members with access to Alexa users’ geographic coordinates can easily type them into third-party mapping software and find home residences, according to the employees, who signed nondisclosure agreements barring them from speaking publicly about the program.

While there’s no indication Amazon employees with access to the data have attempted to track down individual users, two members of the Alexa team expressed concern to Bloomberg that Amazon was granting unnecessarily broad access to customer data that would make it easy to identify a device’s owner.

[…]

Some of the workers charged with analyzing recordings of Alexa customers use an Amazon tool that displays audio clips alongside data about the device that captured the recording. Much of the information stored by the software, including a device ID and customer identification number, can’t be easily linked back to a user.

However, Amazon also collects location data so Alexa can more accurately answer requests, for example suggesting a local restaurant or giving the weather in nearby Ashland, Oregon, instead of distant Ashland, Michigan.

[…]

It’s unclear how many people have access to that system. Two Amazon employees said they believed the vast majority of workers in the Alexa Data Services group were, until recently, able to use the software.

[…]

A second internal Amazon software tool, available to a smaller pool of workers who tag transcripts of voice recordings to help Alexa categorize requests, stores more personal data, according to one of the employees.

After punching in a customer ID number, those workers, called annotators and verifiers, can see the home and work addresses and phone numbers customers entered into the Alexa app when they set up the device, the employee said. If a user has chosen to share their contacts with Alexa, their names, numbers and email addresses also appear in the dashboard.

[…]

Amazon appears to have been restricting the level of access employees have to the system.

One employee said that, as recently as a year ago, an Amazon dashboard detailing a user’s contacts displayed full phone numbers. Now, in that same panel, some digits are obscured.

Amazon further limited access to data after Bloomberg’s April 10 report, two of the employees said. Some data associates, who transcribe, annotate and verify audio recordings, arrived for work to find that they no longer had access to software tools they had previously used in their jobs, these people said. As of press time, their access had not been restored.

Source: Is Alexa Listening? Amazon Employees Can Access Home Addresses – Bloomberg

‘They’re Basically Lying’ – (Mental) Health Apps Caught Secretly Sharing Data

“Free apps marketed to people with depression or who want to quit smoking are hemorrhaging user data to third parties like Facebook and Google — but often don’t admit it in their privacy policies, a new study reports…” writes The Verge.

“You don’t have to be a user of Facebook’s or Google’s services for them to have enough breadcrumbs to ID you,” warns Slashdot schwit1. From the article: By intercepting the data transmissions, they discovered that 92 percent of the 36 apps shared the data with at least one third party — mostly Facebook- and Google-run services that help with marketing, advertising, or data analytics. (Facebook and Google did not immediately respond to requests for comment.) But about half of those apps didn’t disclose that third-party data sharing, for a few different reasons: nine apps didn’t have a privacy policy at all; five apps did but didn’t say the data would be shared this way; and three apps actively said that this kind of data sharing wouldn’t happen. Those last three are the ones that stood out to Steven Chan, a physician at Veterans Affairs Palo Alto Health Care System, who has collaborated with Torous in the past but wasn’t involved in the new study. “They’re basically lying,” he says of the apps.

Part of the problem is the business model for free apps, the study authors write: since insurance might not pay for an app that helps users quit smoking, for example, the only way for a free app developer to stay afloat is to either sell subscriptions or sell data. And if that app is branded as a wellness tool, the developers can skirt laws intended to keep medical information private.
A few apps even shared what The Verge calls “very sensitive information” like self-reports about substance use and user names.

Source: ‘They’re Basically Lying’ – Mental Health Apps Caught Secretly Sharing Data – Slashdot

Personal information on Dutch sites about faith, illness, sexual orientation, addiction and schools is passed directly on to advertisers without GDPR consent.

Websites with information about sensitive subjects are flouting the privacy law on a massive scale, says the Consumentenbond (the Dutch consumers’ association). Many sites place cookies from advertising networks without consent, giving those networks access to highly personal information about their visitors.

Researchers from the Consumentenbond searched in March and April for topics in the categories of faith, youth, medical issues and sexual orientation. Via search queries about, among other things, depression, addiction, sexual orientation and cancer, they ended up on 106 websites.

Almost half of those sites placed one or more advertising cookies immediately upon a visit, i.e. without the visitor’s consent, almost always from Google. Websites such as CIP.nl, Refoweb.nl and scholieren.com even placed dozens of them. Ouders.nl went furthest of all and placed no fewer than 37 cookies.

A considerable number of mental health care institutions also stood out. Among others, ggzdrenthe.nl, connection-sggz.nl, parnassiagroep.nl and lentis.nl tracked their visitors’ browsing behaviour without being asked and passed this information on to Google.

The AVG (GDPR) privacy law has now been in force for a year, but according to the Consumentenbond it is worrying how poorly the law is being complied with.

Source: ‘Persoonlijke informatie niet veilig bij sites over geloof, ziekte en geaardheid’ – Emerce

Apple killing right to repair bill

The bill has been pulled by its sponsor, Susan Talamantes-Eggman: “It became clear that the bill would not have the support it needed today, and manufacturers had sown enough doubt with vague and unbacked claims of privacy and security concerns,” she said. Her full statement has been added at the end of the piece.

In recent weeks, an Apple representative and a lobbyist for CompTIA, a trade organization that represents big tech companies, have been privately meeting with legislators in California to encourage them to kill legislation that would make it easier for consumers to repair their electronics, Motherboard has learned.

According to two sources in the California State Assembly, the lobbyists have met with members of the Privacy and Consumer Protection Committee, which is set to hold a hearing on the bill Tuesday afternoon. The lobbyists brought an iPhone to the meetings and showed lawmakers and their legislative aides the internal components of the phone. The lobbyists said that if improperly disassembled, consumers who are trying to fix their own iPhone could hurt themselves by puncturing the lithium-ion battery, the sources, who Motherboard is not naming because they were not authorized to speak to the media, said.

The argument is similar to one made publicly by Apple executive Lisa Jackson in 2017 at TechCrunch Disrupt, when she said the iPhone is “too complex” for normal people to repair.

[…]

a few weeks after CompTIA and 18 other trade organizations associated with big tech companies—including CTIA and the Entertainment Software Association—sent letters in opposition of the legislation to members of the Assembly’s Privacy and Consumer Protection Committee. One copy of the letter, addressed to committee chairperson Ed Chau and obtained by Motherboard, urges the chairperson “against moving forward with this legislation.” CTIA represents wireless carriers including Verizon, AT&T, and T-Mobile, while the Entertainment Software Association represents Nintendo, Sony, Microsoft, and other video game manufacturers.

“With access to proprietary guides and tools, hackers can more easily circumvent security protections, harming not only the product owner but also everyone who shares their network,” the letter, obtained by Motherboard, stated. “When an electronic product breaks, consumers have a variety of repair options, including using an OEM’s [original equipment manufacturer] authorized repair network.”

Experts, however, say Apple’s and CompTIA’s warnings are far overblown. People with no special training regularly replace the batteries or cracked screens in their iPhones, and there are thousands of small, independent repair companies that regularly fix iPhones without incident. The issue is that many of these companies operate in a grey area because they are forced to purchase replacement parts from third parties in Shenzhen, China, because Apple doesn’t sell them to independent companies unless they become part of the “Apple Authorized Service Provider Program,” which limits the types of repairs they are allowed to do and requires companies to pay Apple a fee to join.

“To suggest that there are safety and security concerns with spare parts and manuals is just patently absurd,” Nathan Proctor, director of consumer rights group US PIRG’s right to repair campaign told Motherboard in a phone call. “We know that all across the country, millions of people are doing this for themselves. Millions more are taking devices to independent repair technicians.”

[…]

“The security of devices is not related to diagnostics and service manuals, they’re related to poor code with vulnerabilities, weak authentication, devices deployed by default to be vulnerable,” Roberts told Motherboard. “We all know there’s no debate. Security for connected devices has nothing to do with repair.”

Source: Apple Is Telling Lawmakers People Will Hurt Themselves if They Try to Fix iPhones – Motherboard

Wow, this is simply ridiculous. Profiteering by the large companies at the expense of smaller companies seems to be something the US government absolutely loves.

Kremlin signs total internet surveillance and censorship system into law, from Nov 1st.

Russia’s internet iron curtain has been formally signed into law by President Putin. The nation’s internet service providers have until 1 November to ensure they comply.

The law will force traffic through government-controlled exchanges and eventually require the creation of a national domain name system.

The bill has been promoted as advancing Russian sovereignty and ensuring Runet, Russia’s domestic internet, remains functioning regardless of what happens elsewhere in the world. The government has claimed “aggressive” US cybersecurity policies justify the move.

Control of exchanges is seen as an easy way for the Russian government to increase its control over what data its citizens can see, and what they can post. The Kremlin wants all data required by the network to be stored within Russian borders.

ISPs will only be allowed to connect to other ISPs, or peer, through approved exchanges. These exchanges will have to include government-supplied boxes which can block data traffic as required.

There have been widespread protests within the country against the law.

Source: Having a bad day? Be thankful you don’t work at a Russian ISP: Kremlin signs off Pootynet restrictions • The Register

Security lapse exposed a Chinese smart city surveillance system

Smart cities are designed to make life easier for their residents: better traffic management by clearing routes, making sure the public transport is running on time and having cameras keeping a watchful eye from above.

But what happens when that data leaks? One such database was open for weeks for anyone to look inside.

Security researcher John Wethington found a smart city database accessible from a web browser without a password. He passed details of the database to TechCrunch in an effort to get the data secured.

[…]

The system monitors the residents around at least two small housing communities in eastern Beijing, the largest of which is Liangmaqiao, known as the city’s embassy district. The system is made up of several data collection points, including cameras designed to collect facial recognition data.

The exposed data contains enough information to pinpoint where people went, when and for how long, allowing anyone with access to the data — including police — to build up a picture of a person’s day-to-day life.

A portion of the database containing facial recognition scans (Image: supplied)

The database processed various facial details, such as if a person’s eyes or mouth are open, if they’re wearing sunglasses, or a mask — common during periods of heavy smog — and if a person is smiling or even has a beard.

The database also contained a subject’s approximate age as well as an “attractive” score, according to the database fields.

But the capabilities of the system have a darker side, particularly given the complicated politics of China.

The system also uses its facial recognition systems to detect ethnicities and labels them — such as “汉族” for Han Chinese, the main ethnic group of China — and also “维族” — or Uyghur Muslims, an ethnic minority under persecution by Beijing.

While ethnicity labels can help police identify suspects in an area even when they don’t have a name to match, the same data can be used for abuse.

The Chinese government has detained more than a million Uyghurs in internment camps in the past year, according to a United Nations human rights committee. It’s part of a massive crackdown by Beijing on the ethnic minority group. Just this week, details emerged of an app used by police to track Uyghur Muslims.

We also found that the customer’s system pulls in data from the police and uses that information to detect people of interest or criminal suspects, suggesting it may be a government customer.

Facial recognition scans would match against police records in real time (Image: supplied)

Each time a person is detected, the database would trigger a “warning” noting the date, time, location and a corresponding note. Several records seen by TechCrunch include suspects’ names and their national identification card number.

Source: Security lapse exposed a Chinese smart city surveillance system – TechCrunch

Facebook uploaded the contacts of 1.5m people without permission

On Thursday, at just about the same time as the most highly anticipated government document of the decade was released in Washington D.C., Facebook updated a month-old blog post to note that actually a security incident impacted “millions” of Instagram users and not “tens of thousands” as they said at first.

Last month, Facebook announced that hundreds of millions of Facebook and Facebook Lite account passwords were stored in plaintext in a database exposed to over 20,000 employees.

Source: https://www.theregister.co.uk/2019/04/18/facebook_hoovered_up_15m_address_books_without_permission/

Pregnancy and parenting club Bounty fined £400,000 for shady data sharing practices of more than 14 million people

The Information Commissioner’s Office has fined commercial pregnancy and parenting club Bounty some £400,000 for illegally sharing personal details of more than 14 million people.

The organisation, which dishes out advice to expectant and inexperienced parents, has faced criticism over the tactics it uses to sign up new members and was the subject of a campaign to boot its reps from maternity wards.

[…]

the business had also worked as a data brokering service until April last year, distributing data to third parties to then pester unsuspecting folk with electronic direct marketing. By sharing this information and not being transparent about its uses while it was extracting the stuff, Bounty broke the Data Protection Act 1998.

Bounty shared roughly 34.4 million records from June 2017 to April 2018 with credit reference and marketing agencies. Acxiom, Equifax, Indicia and Sky were the four biggest of the 39 companies that Bounty told the ICO it sold stuff to.

This data included details of new mothers and mothers-to-be, but also very young children’s birth dates and their gender.

Source: Pregnancy and parenting club Bounty fined £400,000 for shady data sharing practices • The Register

Chinese stock photo pusher tries to claim copyright on Event Horizon pic, Chinese Flag

China’s largest stock photo flinger has been forced to backtrack after it tried to put its own price tags on images of the first black hole and the Chinese flag.

Visual China Group reportedly tried to hawk the first-ever image of a supermassive black hole and its shadow, which was the painstaking work of boffins running the Event Horizon Telescope.

The website is reported to have tried to suck users into payment, describing the picture, on which it affixed its logo, as an “editorial image” and directed users to dial a customer rep to discuss commercial use.

According to Reuters, the firm said it had obtained a non-exclusive editing licence for the project for media use – but it was widely understood the images were released under a Creative Commons licence, specifically CC BY 4.0.

The pic pushers were also said to have drawn criticism for asking for payment for images such as China’s flag and logos of companies including Baidu.

After the Tianjin city branch of China’s internet overseer stepped in, Visual China apologised and said that it would “learn from these lessons” and “seriously rectify” the problem.

Source: Hole lotta crud: Chinese stock photo pusher tries to claim copyright on Event Horizon pic • The Register

Copyright is such a brilliant system!

Sonos finally blasted in complaint to UK privacy watchdog – let’s hope they do something with it

Sonos stands accused of seeking to obtain “excessive” amounts of personal data without valid consent in a complaint filed with the UK’s data watchdog.

The complaint, lodged by tech lawyer George Gardiner in a personal capacity, challenges the Sonos privacy policy’s compliance with the General Data Protection Regulation and the UK’s implementation of that law.

It argues that Sonos had not obtained valid consent from users who were asked to agree to a new privacy policy and had failed to meet privacy-by-design requirements.

The company changed its terms in summer 2017 to allow it to collect more data from its users – ostensibly because it was launching voice services. Sonos said that anyone who didn’t accept the fresh Ts&Cs would no longer be able to download future software updates.

Sonos denied at the time that this was effectively bricking the system, but whichever way you cut it, the move would deprecate the kit of users that didn’t accept the terms. The app controlling the system would also eventually become non-functional.

Gardiner pointed out, however, that security risks and an interest in properly maintaining an expensive system meant there was little practical alternative other than to update the software.

This resulted in a mandatory acceptance of the terms of the privacy policy, rendering any semblance of consent void.

“I have no option but to consent to its privacy policy otherwise I will have over £3,000 worth of useless devices,” he said in a complaint sent to the ICO and shared with The Register.

Users setting up accounts are told: “By clicking on ‘Submit’ you agree to Sonos’ Terms and Conditions and Privacy Policy.” This all-or-nothing approach is contrary to data protection law, he argued.

Sonos collects personal data in the form of name, email address, IP addresses and “information provided by cookies or similar technology”.

The system also collects data on room names assigned by users, the controller device, the operating system of the device a person uses and content source.

Sonos said that collecting and processing this data – a slurp that users cannot opt out of – is necessary for the “ongoing functionality and performance of the product and its ability to interact with various services”.

But Gardiner questioned whether it was really necessary for Sonos to collect this much data, noting that his system worked without it prior to August 2017. He added that he does not own a product that requires voice recognition.

Source: Turn me up some: Smart speaker outfit Sonos blasted in complaint to UK privacy watchdog • The Register

I am in the exact same position – suddenly I had to accept an invasive change of privacy policy and earlier in March I also had to log in with a Sonos account in order to get the kit working (it wouldn’t update without logging in and the app only showed the login and update page). This is not what I signed up for when I bought the (expensive!) products.

EU Tells Internet Archive That Much Of Its Site Is ‘Terrorist Content’, shows how it will censor the internet with no recourse

We’ve been trying to explain for the past few months just how absolutely insane the new EU Terrorist Content Regulation will be for the internet. Among many other bad provisions, the big one is that it would require content removal within one hour as long as any “competent authority” within the EU sends a notice of content being designated as “terrorist” content. The law is set for a vote in the EU Parliament just next week.

And as if they were attempting to show just how absolutely insane the law would be for the internet, multiple European agencies (we can debate if they’re “competent”) decided to send over 500 totally bogus takedown demands to the Internet Archive last week, claiming it was hosting terrorist propaganda content.

In the past week, the Internet Archive has received a series of email notices from Europol’s European Union Internet Referral Unit (EU IRU) falsely identifying hundreds of URLs on archive.org as “terrorist propaganda”. At least one of these mistaken URLs was also identified as terrorist content in a separate take down notice from the French government’s L’Office Central de Lutte contre la Criminalité liée aux Technologies de l’Information et de la Communication (OCLCTIC).

And just in case you think that maybe the requests are somehow legit, they are so obviously bogus that anyone with a browser would know they are bogus. Included in the list of takedown demands are a bunch of the Archive’s “collection pages” including the entire Project Gutenberg page of public domain texts, its collection of over 15 million freely downloadable texts, the famed Prelinger Archive of public domain films and the Archive’s massive Grateful Dead collection. Oh yeah, also a page of CSPAN recordings. So much terrorist content!

And, as the Archive explains, there’s simply no way that (1) the site could have complied with the Terrorist Content Regulation had it been law last week when they received the notices, and (2) that they should have blocked all that obviously non-terrorist content.

The Internet Archive has a few staff members that process takedown notices from law enforcement who operate in the Pacific time zone. Most of the falsely identified URLs mentioned here (including the report from the French government) were sent to us in the middle of the night – between midnight and 3am Pacific – and all of the reports were sent outside of the business hours of the Internet Archive.

The one-hour requirement essentially means that we would need to take reported URLs down automatically and do our best to review them after the fact.

It would be bad enough if the mistaken URLs in these examples were for a set of relatively obscure items on our site, but the EU IRU’s lists include some of the most visited pages on archive.org and materials that obviously have high scholarly and research value.

Those are the requests from Europol, who unfortunately likely qualify as a “competent” authority under the law. The Archive also points out the request from both Europol and the French computer crimes unit targeting a page providing commentary on the Quran as being terrorist content. The French agency told the Archive it needed to take down that content within 24 hours or the Archive may get blocked in France.

Source: EU Tells Internet Archive That Much Of Its Site Is ‘Terrorist Content’ | Techdirt

A Team At Amazon Is Listening To Recordings Captured By Alexa

Seven people, described as having worked in Amazon’s voice review program, told Bloomberg that they sometimes listen to as many as 1,000 recordings per shift, and that the recordings are associated with the customer’s first name, their device’s serial number, and an account number. Among other clips, these employees and contractors said they’ve reviewed recordings of what seemed to be a woman singing in the shower, a child screaming, and a sexual assault. Sometimes, when recordings were difficult to understand — or when they were amusing — team members shared them in an internal chat room, according to Bloomberg.

In an emailed statement to BuzzFeed News, an Amazon spokesperson wrote that “an extremely small sample of Alexa voice recordings” is annotated, and reviewing the audio “helps us train our speech recognition and natural language understanding systems, so Alexa can better understand your requests, and ensure the service works well for everyone.”

[…]

Amazon’s privacy policy says that Alexa’s software provides a variety of data to the company (including your use of Alexa, your Alexa Interactions, and other Alexa-enabled products), but doesn’t explicitly state how employees themselves interact with the data.

Apple and Google, which make two other popular voice-enabled assistants, also employ humans who review audio commands spoken to their devices; both companies say that they anonymize the recordings and don’t associate them with customers’ accounts. Apple’s Siri sends a limited subset of encrypted, anonymous recordings to graders, who label the quality of Siri’s responses. The process is outlined on page 69 of the company’s security white paper. Google also saves and reviews anonymized audio snippets captured by Google Home or Assistant, and distorts the audio.

On an FAQ page, Amazon states that Alexa is not recording all your conversations. Amazon’s Echo smart speakers and the dozens of other Alexa-enabled devices are designed to capture and process audio, but only when a “wake word” — such as “Alexa,” “Amazon,” “Computer,” or “Echo” — is uttered. However, Alexa devices do occasionally capture audio inadvertently and send that audio to Amazon servers or respond to it with triggered actions. In May 2018, an Echo unintentionally sent audio recordings of a woman’s private conversation to one of her husband’s employees.

Source: A Team At Amazon Is Listening To Recordings Captured By Alexa

Does Google meet its users’ expectations around consumer privacy? This news industry research says no

While the ethics around data collection and consumer privacy have been questioned for years, it wasn’t until Facebook’s Cambridge Analytica scandal that people began to realize how frequently their personal data is shared, transferred, and monetized without their permission.

Cambridge Analytica was by no means an isolated case. Last summer, an AP investigation found that Google’s location tracking remains on even if you turn it off in Google Maps, Search, and other apps. Research from Vanderbilt professor Douglas Schmidt found that Google engages in “passive” data collection, often without the user’s knowledge. His research also showed that Google utilizes data collected from other sources to de-anonymize existing user data.

That’s why we at Digital Content Next, the trade association of online publishers I lead, wrote this Washington Post op-ed, “It isn’t just about Facebook, it’s about Google, too” when Facebook first faced Capitol Hill. It’s also why the descriptor surveillance advertising is increasingly being used to describe Google and Facebook’s advertising businesses, which use personal data to tailor and micro-target ads.

[…]

The results of the study are consistent with our Facebook study: People don’t want surveillance advertising. A majority of consumers indicated they don’t expect to be tracked across Google’s services, let alone be tracked across the web in order to make ads more targeted.

Do you expect Google to collect data about a person’s activities on Google platforms (e.g. Android and Chrome) and apps (e.g. Search, YouTube, Maps, Waze)?
Yes: 48% / No: 52%

Do you expect Google to track a person’s browsing across the web in order to make ads more targeted?
Yes: 43% / No: 57%

Nearly two out of three consumers don’t expect Google to track them across non-Google apps, offline activities from data brokers, or via their location history.

Do you expect Google to collect data about a person’s locations when a person is not using a Google platform or app?
Yes: 34% / No: 66%

Do you expect Google to track a person’s usage of non-Google apps in order to make ads more targeted?
Yes: 36% / No: 64%

Do you expect Google to buy personal information from data companies and merge it with a person’s online usage in order to make ads more targeted?
Yes: 33% / No: 67%

There was only one question where a small majority of respondents felt that Google was acting according to their expectations. That was about Google merging data from search queries with other data it collects on its own services. They also don’t expect Google to connect the data back to the user’s personal account, but only by a small majority. Google began doing both of these in 2016 after previously promising it wouldn’t.

Do you expect Google to collect and merge data about a person’s search activities with activities on its other applications?
Yes: 57% / No: 43%

Do you expect Google to connect a variety of user data from Google apps, non-Google apps, and across the web with that user’s personal Google account?
Yes: 48% / No: 52%

Google’s personal data collection practices affect the more than 2 billion people who use devices running their Android operating software and hundreds of millions more iPhone users who rely on Google for browsing, maps, or search. Most of them expect Google to collect some data about them in exchange for use of services. However, as our research shows, a significant majority of consumers do not expect Google to track their activities across their lives, their locations, on other sites, and on other platforms. And as the AP discovered, Google continues to do some of this even after consumers explicitly turn off tracking.

Source: Does Google meet its users’ expectations around consumer privacy? This news industry research says no » Nieman Journalism Lab

Toyota to give royalty-free access to hybrid-vehicle patents

The pledge by one of the world’s biggest automakers to share its closely guarded patents, the second time it has opened up a technology, is aimed at driving industry uptake of hybrids and fending off the challenge of all-battery electric vehicles (EVs).

Toyota said it would grant licenses on nearly 24,000 patents on technologies used in its Prius, the world’s first mass-produced “green” car, and offer to supply competitors with components including motors, power converters and batteries used in its lower-emissions vehicles.

“We want to look beyond producing finished vehicles,” Toyota Executive Vice President Shigeki Terashi told reporters.

“We want to contribute to an increase in take up (of electric cars) by offering not just our technology but our existing parts and systems to other vehicle makers.”

The Nikkei Asian Review first reported Toyota’s plans to give royalty-free access to hybrid-vehicle patents.

Terashi said that the access excluded patents on its lithium-ion battery technology.

[…]

Toyota is also betting on hydrogen fuel cell vehicles (FCVs) as the ultimate zero-emissions vehicle, and as a result, has lagged many of its rivals in marketing all-battery EVs.

In 2015, it said it would allow access to its FCV-related patents through 2020.

Source: Toyota to give royalty-free access to hybrid-vehicle patents – Reuters