Palantir’s Top-Secret User Manual for Cops shows how easily they can find scary amounts of information on you and your friends

Through a public records request, Motherboard has obtained a user manual that gives unprecedented insight into Palantir Gotham (Palantir’s other service, Palantir Foundry, is an enterprise data platform), which is used by law enforcement agencies like the Northern California Regional Intelligence Center. The NCRIC serves around 300 communities in northern California and is what is known as a “fusion center,” a Department of Homeland Security intelligence center that aggregates and investigates information from state, local, and federal agencies, as well as some private entities, into large databases that can be searched using software like Palantir.

Fusion centers have become a target of civil liberties groups in part because they collect and aggregate data from so many different public and private entities. The US Department of Justice’s Fusion Center Guidelines list the following as collection targets:

Data via US Department of Justice. Chart via Electronic Privacy Information Center.
A flow chart that explains how cops can begin to search for records relating to a single person.

The guide doesn’t just show how Gotham works. It also shows how police are instructed to use the software. The guide appears to have been made by Palantir specifically for California law enforcement, because it includes examples specific to California. We don’t know exactly what information is excluded, or what changes have been made since the document was first created. The first eight pages we received in response to our request are undated, but the remaining twenty-one pages were copyrighted in 2016. (Palantir did not respond to multiple requests for comment.)

The Palantir user guide shows that police can start with almost no information about a person of interest and instantly know extremely intimate details about their lives. The capabilities are staggering, according to the guide:

  • If police have a name that’s associated with a license plate, they can use automated license plate reader data to find out where that person has been, and when. This can give a complete account of where someone has driven over any time period.
  • With a name, police can also find a person’s email address, phone numbers, current and previous addresses, bank accounts, social security number(s), business relationships, family relationships, and license information like height, weight, and eye color, as long as it’s in the agency’s database.
  • The software can map out a suspect’s family members and business associates, and, theoretically, find the above information about them, too.
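The kind of pivoting described above, from one name to associates and from associates to their records, amounts to a breadth-first expansion of a linked-entity graph. A minimal sketch, with invented data and schema (an illustration of the concept, not Palantir’s actual data model):

```python
from collections import deque

# A toy entity graph: records aggregated from many sources link a
# person to plates, addresses, and other people. (Invented data and
# schema, purely illustrative.)
entity_graph = {
    "person:J. Doe":   ["plate:7ABC123", "person:R. Doe", "biz:Acme LLC"],
    "plate:7ABC123":   ["alpr:2019-06-01 Oak St", "alpr:2019-06-03 Main St"],
    "person:R. Doe":   ["addr:12 Elm St"],
    "biz:Acme LLC":    ["person:K. Smith"],
    "person:K. Smith": [],
}

def expand(start: str, hops: int) -> set:
    """Breadth-first expansion: every entity reachable from `start`
    within `hops` links, so one search fans out to associates too."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for neighbor in entity_graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return seen

# Two hops from one name already pulls in a relative's address, a
# business partner, and the car's sighting history.
print(sorted(expand("person:J. Doe", 2)))
```

This is why investigating one suspect sweeps in data about everyone linked to them: each hop of the traversal turns an associate into a new starting point.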

All of this information is aggregated and synthesized in a way that gives law enforcement nearly omniscient knowledge of any suspect they decide to surveil.

[…]

In order for Palantir to work, it has to be fed data. This can mean public records like business registries, birth certificates, and marriage records, or police records like warrants and parole sheets. Palantir would need other data sources to give police access to information like emails and bank account numbers.

“Palantir Law Enforcement supports existing case management systems, evidence management systems, arrest records, warrant data, subpoenaed data, RMS or other crime-reporting data, Computer Aided Dispatch (CAD) data, federal repositories, gang intelligence, suspicious activity reports, Automated License Plate Reader (ALPR) data, and unstructured data such as document repositories and emails,” Palantir’s website says.

Some data sources—like marriage, divorce, birth, and business records—also implicate other people associated with a suspect personally or through family. So when police are investigating a person, they’re not just collecting a dragnet of emails, phone numbers, business relationships, travel histories, etc. about one suspect. They’re also collecting information about people who are associated with that suspect.

Source: Revealed: This Is Palantir’s Top-Secret User Manual for Cops – VICE

Google admits leaked private voice conversations, decides to clamp down on whistleblowers, not improve privacy

Google admitted on Thursday that more than 1,000 sound recordings of customer conversations with the Google Assistant were leaked by some of its partners to a Belgian news site.

[…]

“We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data,” Google product manager of search David Monsees said in a blog post. “Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.”

Monsees said Google’s partners listen to only “around 0.2 percent of all audio snippets” and said the snippets are “not associated with user accounts,” even though VRT was able to figure out who was speaking in some of the clips.

Source: Google admits leaked private voice conversations

NB: the CNBC article states that you can delete old conversations, but we know that’s not the case for transcribed Alexa conversations, and we know that if you delete your shopping emails from Gmail, Google keeps your shopping history.

Google contractors are secretly listening to your Assistant and Home recordings

Not only is your Google Home device listening to you, a new report suggests there might be a Google contractor listening as well. Even if you didn’t ask your device any questions, it’s still sending what you say to the company, which allows an actual person to collect data from it.

[…]

VRT, with the help of a whistleblower, was able to listen to some of these clips and subsequently heard enough to discern the addresses of several Dutch and Belgian people using Google Home — in spite of the fact some hadn’t even uttered the words “Hey Google,” which are supposed to be the device’s listening trigger.

The person who leaked the recordings was working as a subcontractor to Google, transcribing the audio files for subsequent use in improving its speech recognition. They got in touch with VRT after reading about Amazon Alexa keeping recordings indefinitely.

According to the whistleblower, the recordings presented to them are meant to be carefully annotated, with notes included about the speaker’s presumed identity and age. From the sound of the report, these transcribers have heard just about everything. Personal information? Bedroom activities? Domestic violence? Yes, yes, and yes.

While VRT only listened to recordings from Dutch and Belgian users, the platform the whistleblower showed them had recordings from all over the world – which means there are probably thousands of other contractors listening to Assistant recordings.

The VRT report states that the Google Home Terms of Service don’t mention that recordings might be listened to by other humans.

The report did say the company tries to anonymize the recordings before sending them to contractors, identifying them by numbers rather than user names. But again, VRT was able to pick up enough data from the recordings to find the addresses of the users in question, and even confront some of the users in the recordings – to their great dismay.

Google’s defense to VRT was that the company only transcribes and uses “about 0.2% of all audio clips,” to improve their voice recognition technology.

Source: Google contractors are secretly listening to your Assistant recordings

UK data regulator threatens British Airways with 747-sized fine for massive personal data blurt

The UK Information Commissioner’s Office has warned BA it faces a whopping £183.39m fine following the theft of around half a million customer records from its website and mobile app servers.

The record-breaking fine – more or less the lower end of the price of one of the 747-400s in BA’s fleet – under the European General Data Protection Regulation (GDPR) represents 1.5 per cent of BA’s worldwide revenue in 2017.

Information Commissioner Elizabeth Denham said: “People’s personal data is just that – personal. When an organisation fails to protect it from loss, damage or theft it is more than an inconvenience. That’s why the law is clear – when you are entrusted with personal data you must look after it. Those that don’t will face scrutiny from my office to check they have taken appropriate steps to protect fundamental privacy rights.”

The breach hit almost 500,000 people. The ICO statement reveals the breach is believed to have started in June 2018; previous statements from BA said it began in late August. The data watchdog described the attack as diverting user traffic from BA’s site to a fraudulent site.

ICO investigators found a variety of information was compromised including log-in details, card numbers, names, addresses and travel information.

Sophisticated card skimming group Magecart, which also hit Ticketmaster, was blamed for the data slurp. The group is believed to have exploited third party scripts, possibly modified JavaScript, running on BA’s site to gain access to the airline’s payment system.

Such scripts are often used to support marketing and data tracking functions or running external ads.

The Reg revealed that BA parent company IAG was in talks with staff to outsource cyber security to IBM just before the hack was carried out.

Source: UK data regulator threatens British Airways with 747-sized fine for massive personal data blurt • The Register

Internet group brands Mozilla ‘internet villain’ for supporting DNS privacy feature that could give UK users access to porn and make it hard for the great filter there to see where everyone is surfing

An industry group of internet service providers has branded Firefox browser maker Mozilla an “internet villain” for supporting a DNS security standard.

The U.K.’s Internet Services Providers’ Association (ISPA), the trade group for U.K. internet service providers, nominated the browser maker for its proposed effort to roll out the security feature, which they say will allow users to “bypass UK filtering obligations and parental controls, undermining internet safety standards in the UK.”

Mozilla said late last year it was planning to test DNS-over-HTTPS with a small number of users.

Whenever you visit a website — even if it’s HTTPS enabled — the DNS query that converts the web address into an IP address that computers can read is usually unencrypted. DNS-over-HTTPS is implemented at the app level, making Firefox the first browser to use it. By encrypting the DNS query, it also protects the request against man-in-the-middle attacks, which would otherwise allow attackers to hijack the request and point victims to a malicious page instead.

DNS-over-HTTPS also improves performance, making DNS queries — and the overall browsing experience — faster.
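Concretely, RFC 8484 specifies that a DoH client serializes an ordinary DNS query into its wire format and ships it inside an HTTPS request, for example base64url-encoded (without padding) in the `dns` parameter of a GET. A minimal sketch of the client-side encoding, using only the Python standard library (the hostnames and resolver path are illustrative):

```python
import base64
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query message (RFC 1035 wire format).
    qtype 1 = A record. The ID is fixed at 0, as RFC 8484 suggests
    for cache-friendly DoH requests."""
    # Header: ID=0, flags=0x0100 (recursion desired), QDCOUNT=1.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: each label length-prefixed, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

def doh_get_path(hostname: str) -> str:
    """Encode the query for an RFC 8484 DoH GET request: the wire-format
    message goes in the `dns` parameter, base64url without padding."""
    msg = build_dns_query(hostname)
    b64 = base64.urlsafe_b64encode(msg).rstrip(b"=").decode("ascii")
    return f"/dns-query?dns={b64}"

# This path would be fetched over HTTPS from a DoH resolver, so an ISP
# sees only encrypted traffic to the resolver, not the name looked up.
print(doh_get_path("example.com"))
```

Because the resulting request is ordinary HTTPS, it is indistinguishable on the wire from any other web traffic, which is precisely what makes ISP-level filtering of it difficult.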

But the ISPA doesn’t think DNS-over-HTTPS is compatible with the U.K.’s current website blocking regime.

Under U.K. law, websites can be blocked for facilitating the infringement of copyrighted or trademarked material or if they are deemed to contain terrorist material or child abuse imagery. In encrypting DNS queries, it’s claimed that it will make it more difficult for internet providers to filter their subscribers’ internet access.

The ISPA isn’t alone. U.K. spy agency GCHQ and the Internet Watch Foundation, which maintains the U.K.’s internet blocklist, have criticized the move to roll out encrypted DNS features to the browser.

The ISPA’s nomination quickly drew ire from the security community. Amid a backlash on social media, the ISPA doubled down on its position, saying that bringing in DNS-over-HTTPS by default “would be harmful for online safety, cybersecurity and consumer choice,” but that it encourages “further debate.”

One internet provider, Andrews & Arnold, donated £2,940 — around $3,670 — to Mozilla in support of the nonprofit. “The amount was chosen because that is what our fee for ISPA membership would have been, were we a member,” said a tweet from the company.

Mozilla spokesperson Justin O’Kelly told TechCrunch: “We’re surprised and disappointed that an industry association for ISPs decided to misrepresent an improvement to decades old internet infrastructure.”

“Despite claims to the contrary, a more private DNS would not prevent the use of content filtering or parental controls in the UK. DNS-over-HTTPS (DoH) would offer real security benefits to UK citizens. Our goal is to build a more secure internet, and we continue to have a serious, constructive conversation with credible stakeholders in the UK about how to do that,” he said.

“We have no current plans to enable DNS-over-HTTPS by default in the U.K. However, we are currently exploring potential DNS-over-HTTPS partners in Europe to bring this important security feature to other Europeans more broadly,” he added.

Mozilla isn’t the first to roll out DNS-over-HTTPS. Last year Cloudflare released a mobile version of its 1.1.1.1 privacy-focused DNS service that includes DNS-over-HTTPS. Months earlier, Google-owned Jigsaw released its censorship-busting app Intra, which aims to prevent DNS manipulation.

Mozilla has yet to set a date for the full release of DNS-over-HTTPS in Firefox.

Source: Internet group brands Mozilla ‘internet villain’ for supporting DNS privacy feature | TechCrunch

Privacy-first browsers look to take the shine off Google’s Chrome

Before Google, Facebook and Amazon, tech dominance was known by a single name: Microsoft.

And no product was more dominant than Microsoft’s web browser, Internet Explorer. The company’s browser was the gateway to the internet for about 95 percent of users in the early 2000s, which helped land Microsoft at the center of a major government effort to break up the company.

Almost two decades later, Google’s Chrome now reigns as the biggest browser on the block, and the company is facing challenges similar to Microsoft’s from competitors, as well as government scrutiny.

But Google faces a new wrinkle — a growing realization among consumers that their every digital move is tracked.

“I think Cambridge Analytica acted as a catalyst to get people aware that their data could be used in ways they didn’t expect,” said Peter Dolanjski, the product lead for Mozilla’s Firefox web browser, referring to the scandal in which a political consulting firm obtained data on millions of Facebook users and their friends.

[…]

Web browsers, being the primary way the vast majority of people experience the internet, are a crucial choke point in the digital ecosystem. While the browsers are free to users, the companies that operate them can have an outsized impact on how the internet works — especially if they gain a dominant market position. For a company like Google, which makes most of its money from online advertising, that has meant being able to liberally collect user data. For a nonprofit like Mozilla, more users means the chance to convince developers and other tech companies to adopt their privacy-focused standards.

[…]

Chrome, with more than 60 percent market share worldwide, is yet another source of complaints about Google’s power, after its search engine and advertisement businesses. Last year, Chrome changed the system for logging in to the browser, a move that one researcher said could allow Google to collect data much more easily.

Mozilla trails Google in corporate size and influence, but it is pressing other browsers on privacy and playing up its status as a nonprofit. Last month, Mozilla changed Firefox’s initial settings for new users so that third-party tracking “cookies,” such as those used for ad purposes, are blocked — meaning the default is no tracking.

[…]

A technology columnist at The Washington Post wrote in a scathing review last month that he was switching from Chrome to Firefox, calling Google’s product “a lot like surveillance software.” In a week of desktop web surfing, the columnist, Geoffrey Fowler, wrote that he discovered 11,189 requests for tracker cookies that were blocked by Firefox but would have been allowed by Chrome.

[…]

The browser fight has become heated enough to worry the advertising and media industries. Advertisers have become used to filling up websites with sometimes dozens of “cookies” and other forms of online tracking, and they fear a wider backlash against personalized, data-driven ads.

[…]

For now, there are few signs that Google’s browser dominance will end anytime soon, but the tech industry is riddled with examples of companies that appeared to be invincible just before their fall, including with web browsers.

Source: Privacy-first browsers look to take the shine off Google’s Chrome

Google Gmail purchase history can’t be deleted

Google and other tech companies have been under fire recently for a variety of issues, including failing to protect user data, failing to disclose how data is collected and used and failing to police the content posted to their services.

[…]

In May, I wrote up something weird I spotted on Google’s account management page. I noticed that Google uses Gmail to store a list of everything you’ve purchased, if you used Gmail or your Gmail address in any part of the transaction.

If you have a confirmation for a prescription you picked up at a pharmacy that went into your Gmail account, Google logs it. If you have a receipt from Macy’s, Google keeps it. If you bought food for delivery and the receipt went to your Gmail, Google stores that, too.

You get the idea, and you can see your own purchase history by going to Google’s Purchases page.

Google says it does this so you can use Google Assistant to track packages or reorder things, even though that’s not an option for some purchases that aren’t mailed or wouldn’t be reordered, like something you bought at a store.

At the time of my original story, Google said users could delete everything by tapping into a purchase and removing the associated Gmail message. It seemed to work if you did this for each purchase, one by one. This isn’t easy — for years’ worth of purchases, it would take hours or even days.

So, since Google doesn’t let you bulk-delete this purchases list, I decided to delete everything in my Gmail inbox. That meant removing every last message I’ve sent or received since I opened my Gmail account more than a decade ago.

Despite Google’s assurances, it didn’t work.

Like a horror movie villain that just won’t die

On Friday, three weeks after I deleted every Gmail, I checked my purchases list.

I still see receipts for things I bought years ago. Prescriptions, food deliveries, books I bought on Amazon, music I purchased from iTunes, a subscription to Xbox Live I bought from Microsoft — it’s all there.

A list of my purchases Google pulled in from Gmail. (Todd Haselton | CNBC)

Google continues to show me purchases I’ve made recently, too.

I can’t delete anything and I can’t turn it off.

Source: Google Gmail purchase history can’t be deleted

Top VPNs secretly owned by Chinese firms

Almost a third (30%) of the world’s top virtual private network (VPN) providers are secretly owned by six Chinese companies, according to a study by privacy and security research firm VPNpro.

The study shows that the top 97 VPNs are run by just 23 parent companies, many of which are based in countries with lax privacy laws.

Six of these companies are based in China and collectively offer 29 VPN services, but in many cases, information on the parent company is hidden to consumers.

Researchers at VPNpro have pieced together ownership information through company listings, geolocation data, the CVs of employees and other documentation.

In some instances, ownership of different VPNs is split amongst a number of subsidiaries. For example, Chinese company Innovative Connecting owns three separate businesses that produce VPN apps: Autumn Breeze 2018, Lemon Cove and All Connected. In total, Innovative Connecting produces 10 seemingly unconnected VPN products, the study shows.

Although the ownership of a number of VPN services by one company is not unusual, VPNpro is concerned that so many are based in countries with lax or non-existent privacy laws.

For example, seven of the top VPN services are owned by Gaditek, based in Pakistan. This means the Pakistani government can legally access any data without a warrant and data can also be freely handed over to foreign institutions, according to VPNpro.

The ability to access the data held by VPN providers, the researchers said, could enable governments or other organisations to identify users and their activity online. This potentially puts human rights activists, privacy advocates, investigative journalists and whistleblowers in jeopardy.

This lack of privacy, the study notes, extends to ordinary consumers, who are also coming under greater government surveillance.

“We’re not accusing any of these companies of doing anything underhand. However, we are concerned that so many VPN providers are not fully transparent about who owns them and where they are based,” said Laura Kornelija Inamedinova, research analyst at VPNpro.

Source: Top VPNs secretly owned by Chinese firms

What if All Your Slack Chats Were Leaked?

Slack is one of many Silicon Valley unicorns going public this year, but it’s the only one that has admitted it is at risk for nation-state attacks. In the S-1 forms filed with the Securities and Exchange Commission, Uber, Lyft, Pinterest and Snapchat addressed threats that could lower the price of their stock — including malware, phishing, disgruntled employees and denial-of-service attacks — but only Slack explicitly highlighted “nation-states” as a potential threat.

According to Slack’s S-1 form, the company faces threats from “sophisticated organized crime, nation-state, and nation-state supported actors.” The company acknowledges that its security measures “may not be sufficient to protect Slack and our internal systems and networks against certain attacks,” and correctly assesses that it is “virtually impossible” for the company to completely eliminate the risk of a nation-state attack.

But it is possible for Slack to minimize that risk. Or it would be, if Slack gave all its users the ability to decide which information Slack should keep and which information it should delete.

Right now, Slack stores everything you do on its platform by default — your username and password, every message you’ve sent, every lunch you’ve planned and every confidential decision you’ve made. That data is not end-to-end encrypted, which means Slack can read it, law enforcement can request it, and hackers — including the nation-state actors highlighted in Slack’s S-1 — can break in and steal it.

Slack is widely marketed for and used in business settings, so the company’s servers hold a treasure trove of valuable, proprietary information. Slack’s paying enterprise customers do have a way to mitigate their security risk — they can change their settings to set shorter retention periods and automatically delete old messages — but it’s not just big companies that are at risk.

Slack’s users include community organizers, political organizations, journalists and unions. At the Electronic Frontier Foundation, where I work, we collaborate with activists, reporters and others on their digital privacy and security, and we’ve noticed these users increasingly gravitating toward Slack’s free product.

And that’s what makes the company’s warning to investors particularly alarming: Free customer accounts don’t allow for any changes to data retention. Instead, Slack retains all of your messages but makes only the most recent 10,000 visible to you. Everything beyond that 10,000-message limit remains on Slack’s servers. So while those messages might seem out of sight and out of mind, they are all still indefinitely available to Slack, law enforcement and third-party hackers.

Source: Opinion | What if All Your Slack Chats Were Leaked? – The New York Times

UChicago and Google Sued in Federal Class Action Suit for Patient Data Sharing between 2009 – 2016

A former patient at the University of Chicago Medical Center is suing UChicago, the medical center, and Google, accusing them of violating the privacy rights of patients at UChicago Medicine through the sharing of patient records containing identifiable information.

The class action lawsuit, filed by Matt Dinerstein in the Northern District of Illinois on Wednesday, claims that UChicago violated federal law protecting patient privacy in its partnership with Google to share records of patients from 2009 to 2016. It also claims that Google will be able to use the patient data to develop highly lucrative health-care technologies.

The suit charges that the University breached contracts between UChicago and its patients by allegedly falsely claiming to patients that it would protect their medical records. It also charges that UChicago violated an Illinois law dictating that companies cannot engage in deceptive practices with clients.

UChicago spokesperson Jeremy Manier said in a statement e-mailed to The Maroon, “The claims in this lawsuit are without merit. The University of Chicago Medical Center has complied with the laws and regulations applicable to patient privacy.”

“The Medical Center entered into a research partnership with Google as part of the Medical Center’s continuing efforts to improve the lives of its patients,” the statement continues. “That research partnership was appropriate and legal and the claims asserted in this case are baseless and a disservice to the Medical Center’s fundamental mission of improving the lives of its patients. The University and the Medical Center will vigorously defend this action in court.”

A Google spokesperson said in a statement e-mailed to The Maroon, “We believe our healthcare research could help save lives in the future, which is why we take privacy seriously and follow all relevant rules and regulations in our handling of health data.”

UChicago announced in 2017 that it would begin sharing electronic medical records with Google in a partnership to develop machine-learning techniques that could improve the quality of health services. At the time, UChicago said that Google would ensure that “patient data is kept private and secure,” and would be “strictly following HIPAA privacy rule.”

HIPAA, the Health Insurance Portability and Accountability Act, is a federal law mandating that shared patient information must be “de-identified”—stripped of any identifying information such as addresses and photos—to protect patients’ privacy.

The complaint accuses UChicago of making insufficient efforts to scrub patient-identifying data before handing over documents.

Though UChicago and Google claim to have de-identified patients, UChicago’s inclusion of timestamps indicating when patients checked in and out of the medical center makes the records identifiable and thereby violates HIPAA, the suit alleges. It cites an article published last year by Google and researchers from collaborating universities that says, “All EHRs [medical records] were de-identified, except that dates of service were maintained in the UCM [UChicago Medicine] dataset.”

Google’s potential capability to “re-identify” patients with its advanced data mining technologies indicates that “these records were not sufficiently anonymized and put the patients’ privacy at grave risk,” the complaint claims. It notes Google’s possession of geolocation information that can “pinpoint and match exactly when certain people entered and exited the University’s hospital.”
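The re-identification risk the complaint describes comes down to record linkage: if a “de-identified” record keeps exact service timestamps, anyone holding a second dataset that ties those same moments to real identities can join the two. A minimal sketch with invented data (not the actual records at issue):

```python
# "De-identified" hospital records: names stripped, but exact
# check-in timestamps survive. (Invented data, for illustration.)
ehr_records = [
    {"patient_id": "anon-001", "checked_in": "2015-03-14T09:02"},
    {"patient_id": "anon-002", "checked_in": "2015-03-14T11:47"},
]

# A second dataset, e.g. geolocation pings near the hospital, that
# links the same moments to real identities.
location_pings = [
    {"name": "Alice Example", "arrived": "2015-03-14T09:02"},
    {"name": "Bob Example", "arrived": "2015-03-14T11:47"},
]

# An exact-timestamp join re-identifies every record: the anonymous
# ID inherits the name attached to the matching moment in time.
reidentified = {
    ehr["patient_id"]: ping["name"]
    for ehr in ehr_records
    for ping in location_pings
    if ehr["checked_in"] == ping["arrived"]
}
print(reidentified)
# {'anon-001': 'Alice Example', 'anon-002': 'Bob Example'}
```

This is why HIPAA’s Safe Harbor approach treats precise dates as identifiers: stripping names is not enough when a quasi-identifier like a timestamp uniquely singles out a record.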

UChicago is not the only university to share health records with Google; other universities with similar partnerships include Stanford University and the University of California, San Francisco, according to the article published by Google and collaborating researchers. Wednesday’s lawsuit rests on the fact that UChicago’s records, as obtained by Google, include timestamps of patient records.

The suit also argues that Google’s 2014 acquisition of the British startup DeepMind has given it machine-learning technology robust enough to connect medical records to Google users’ data.

DeepMind and Google obtained health records from Britain’s Royal Free Hospital in 2015. A British watchdog organization accused the project of not complying with data protection law, the suit claims.

Source: UChicago and Google Sued in Federal Class Action Suit for Data Sharing

Hong Kong Protests Show Dangers of a Cashless Society

Allowing cash to die would be a grave mistake. A cashless society is a surveillance society. The recent round of protests in Hong Kong highlights exactly what we have to lose.

The current unrest concerns a proposed change to Hong Kong’s extradition laws that would allow fugitives on the island to be transferred to Taiwan, Macau, and mainland China. The proposal sparked mass outrage, as many Hongkongers saw it as little more than a new way for the People’s Republic of China to erode the legal sovereignty of Hong Kong.

[…]

So tens of thousands of Hongkongers took to the streets to protest what they saw as creeping tyranny from a powerful threat. But they did it in a very particular way.

In Hong Kong, most people use a contactless smart card called an “Octopus card” to pay for everything from transit, to parking, and even retail purchases. It’s pretty handy: Just wave your tentacular card over the sensor and make your way to the platform.

But no one used their Octopus card to get around Hong Kong during the protests. The risk was that a government could view the central database of Octopus transactions to unmask these democratic ne’er-do-wells. Traveling downtown during the height of the protests? You could get put on a list, even if you just happened to be in the area.

So the savvy subversives turned to cash instead. Normally, the lines for the single-ticket machines that accept cash are populated only by a few confused tourists, while locals whiz through the turnstiles with their fintech wizardry.

But on protest days, the queues teemed with young activists clutching old school paper notes. As one protestor told Quartz: “We’re afraid of having our data tracked.”

Using cash to purchase single tickets meant that governments couldn’t connect activists’ activities with their Octopus accounts. It was instant anonymity. Sure, it was less convenient. And one-off physical tickets cost a little more than the Octopus equivalent. But the trade-off of avoiding persecution and jail time was well worth it.

What could protestors do in a cashless world? Maybe they would have to grit their teeth and hope for the best. But relying on the benevolence or incompetence of a motivated entity like China is not a great plan. Or perhaps public transit would be off-limits altogether. This could limit the protests to fit people within walking or biking distance, or people who have access to a private car—a rarity in expensive dense cities.

If some of our eggheads had their way, the protestors would have had no choice. A chorus of commentators call for an end to cash, whether because it frustrates central bank schemes, fuels black and grey markets, or is simply inefficient. We have plenty of newfangled payment options, they say. Why should modern first world economies hew to such primordial human institutions?

The answer is that there is simply no substitute for the privacy that cash, including digitized versions like cryptocurrencies, provide. Even if all of the alleged downsides that critics bemoan were true, cash would still be worth defending and celebrating for its core privacy-preserving functions. As Jerry Brito of Coin Center points out, cash protects our autonomy and indeed our human dignity.

[…]

Coin Center’s Peter Van Valkenburgh calls apps like WeChat Pay “tools for totalitarianism” for good reason: Each transaction is linked to your identity for possible viewing by Communist Party zealots. No wonder less than 8 percent of Hongkongers bother with the hyper-popular WeChat Pay.

Of course, Western offerings like Apple Pay and Venmo also maintain user databases that can be mined. Users may feel protected by the legal limits that countries like the United States place on what consumer data the government can extract from private business. But as research by Van Valkenburgh points out, US anti-money laundering laws afford less Fourth Amendment protection than you might expect. Besides, we still need to trust government and businesses to do the right thing. As the Edward Snowden revelations proved, this trust can be misplaced.

Hong Kong is about as first world as you can get. Yet even in such a developed economy, power’s jealous hold is but an ill-worded reform away. We should not allow today’s relative freedom to obscure the threat that a cashless world poses to our sovereignty. Not only can “it happen here,” for some of your fellow citizens, it might already have.

Source: Hong Kong Protests Show Dangers of a Cashless Society – Reason.com

Amazon Confirms It Keeps Alexa Transcripts You Can’t Delete

Next time you use Amazon Alexa to message a friend or order a pizza, know that the record could be stored indefinitely, even if you ask to delete it.

In May, Delaware Senator Chris Coons sent Amazon CEO Jeff Bezos a letter asking why Amazon keeps transcripts of voices captured by Echo devices, citing privacy concerns over the practice. He was prompted by reports that Amazon stores the text.

“Unfortunately, recent reporting suggests that Amazon’s customers may not have as much control over their privacy as Amazon had indicated,” Coons wrote in the letter. “While I am encouraged that Amazon allows users to delete audio recordings linked to their accounts, I am very concerned by reports that suggest that text transcriptions of these audio records are preserved indefinitely on Amazon’s servers, and users are not given the option to delete these text transcripts.”

CNET first reported that Amazon’s vice president of public policy, Brian Huseman, responded to the senator on June 28, informing him that Amazon keeps the transcripts until users manually delete the information. The letter states that Amazon works “to ensure those transcripts do not remain in any of Alexa’s other storage systems.”

However, there are some Alexa-captured conversations that Amazon retains, regardless of customers’ requests to delete the recordings and transcripts, according to the letter.

As an example of records that Amazon may choose to keep despite deletion requests, Huseman mentioned instances when customers use Alexa to subscribe to Amazon’s music or delivery service, request a rideshare, order pizza, buy media, set alarms, schedule calendar events, or message friends. Huseman writes that it keeps these recordings because “customers would not want or expect deletion of the voice recording to delete the underlying data or prevent Alexa from performing the requested task.”

The letter says Amazon generally stores recordings and transcripts so users can understand what Alexa “thought it heard” and to train its machine learning systems to better understand the variations of speech “based on region, dialect, context, environment, and the individual speaker, including their age.” Such transcripts are not anonymized, according to the letter, though Huseman told Coons in his letter, “When a customer deletes a voice recording, we delete the transcripts associated with the customer’s account of both of the customer’s request and Alexa’s response.”

Amazon declined to provide a comment to Gizmodo beyond what was included in Huseman’s letter.

In his public response to the letter, Coons expressed concern that it shed light on the ways Amazon is keeping some recordings.

“Amazon’s response leaves open the possibility that transcripts of user voice interactions with Alexa are not deleted from all of Amazon’s servers, even after a user has deleted a recording of his or her voice,” Coons said. “What’s more, the extent to which this data is shared with third parties, and how those third parties use and control that information, is still unclear.”

Source: Amazon Confirms It Keeps Alexa Transcripts You Can’t Delete

Dutch ING Bank wants to use customer payment data for direct marketing, privacy watchdog says NO! whilst Dutch Gov wants more banking data sharing with everyone!

The Dutch Data Protection Authority (Autoriteit Persoonsgegevens) has reprimanded ING Bank over plans to use payment data for advertising, and has told other banks to examine their own direct-marketing policies. ING recently changed its privacy statement to say the bank will use payment data for direct-marketing offers; as an example, it cited being able to push specific product offers after child-support payments come in. Many ING customers noticed the change and angrily emailed and called the authority.

This is the second time ING has tried this: in 2014 it attempted the same thing, then also planning to share the payment data with third parties.

Source: AP: Banken mogen betaalgegevens niet zomaar gebruiken voor reclame – Emerce

In the meantime, the Dutch government is trying to find a way to prohibit cash payments of over EUR 3,000 and, insidiously, in the same law to allow banks and government to share client banking data more easily.

source: Kabinet gaat contante betaling boven de 3000 euro verbieden

Silicon Valley’s Hottest Email App Superhuman sends emails that track you and your location without your knowledge

Superhuman is one of the most talked-about new apps in Silicon Valley. Why? The product, a $30-per-month email app for power users hoping for greater productivity, is a good alternative to many popular but stale email apps; nearly everyone who has used it says so. Even better is the company’s publicity strategy: the service is invite-only, and posting on social media is the quickest way to get in the door. So it got some local buzz, a $33 million investment, bigger blog write-ups, and then a New York Times article to top it all off last month.

After a peak, a roller coaster hits a downward slope.

Superhuman was criticized sharply on Tuesday when a blog post by Mike Davidson, previously the VP of design at Twitter, spread widely across social media. The post goes into detail about how one of Superhuman’s powerful features was actually just a run-of-the-mill privacy-violating tracking pixel, without an option to turn it off or any notification for the recipient on the other end. If you use Superhuman, you’ll be able to see when someone opened your email, how many times they did it, what device they were using, and what location they’re in.

Here’s Davidson:

It is disappointing then that one of the most hyped new email clients, Superhuman, has decided to embed hidden tracking pixels inside of the emails its customers send out. Superhuman calls this feature “Read Receipts” and turns it on by default for its customers, without the consent of its recipients.

Tracking pixels are not new. If you get an email newsletter, for instance, it’s probably got a tracking pixel feeding this kind of data back to advertisers, senders, and a whole host of other trackers interested in collecting everything they can about you.

Let me put it this way: I send an email to your mother. She opens it. Now I know a ton of information about her, including her whereabouts, without her ever being informed of or consenting to this tracking. What does this kind of behavior mean for nosy advertisers? What about abusive spouses? A stalker? Pushy salespeople? Intrusive co-workers and bosses?

Davidson sums it up in his blog:

They’ve identified a feature that provides value to some of their customers (i.e. seeing if someone has opened your email yet) and they’ve trampled the privacy of every single person they send email to in order to achieve that. Superhuman never asks the person on the other end if they are OK with sending a read receipt (complete with timestamp and geolocation). Superhuman never offers a way to opt out. Just as troublingly, Superhuman teaches its user to surveil by default. I imagine many users sign up for this, see the feature, and say to themselves “Cool! Read receipts! I guess that’s one of the things my $30 a month buys me.”

Tracking emails is a tried-and-true tactic used by a ton of companies. That doesn’t make it ethical or irreversible. There has been plenty of criticism of the strategy — and there is a technical workaround that we’ll talk about momentarily — but since the tech has been, until now, mainly visible to businesses, the conversation has paled in comparison to some of the other big privacy issues arising in recent years.

Superhuman is a consumer app. It’s targeted at power users, yes, but the potential audience is big and the buzz is real. Combined with the growing public distaste for privacy violations made in the name of building a more powerful app, Twitter was awash this week, and especially on Tuesday, with criticism of Superhuman: Why does it need to take so much information without an option or a notification?

We emailed Superhuman but did not get a response.

A tracking pixel works by embedding a small and hidden image in an email. The image is able to report back information including when the email is opened and where the reader is located. It’s hidden for a reason: The spy is not trying to ask permission.
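To make the mechanism concrete, here is a minimal Python sketch of how a tracking pixel is wired up. The tracker hostname and query parameters are hypothetical (real services obfuscate the URL), but the principle is the same: the "image" is really a per-recipient beacon.

```python
from urllib.parse import urlencode, urlparse, parse_qs

TRACKER_HOST = "https://tracker.example.com/pixel.gif"  # hypothetical endpoint

def embed_pixel(html_body, message_id, recipient):
    """Append an invisible 1x1 image whose URL identifies message and recipient."""
    query = urlencode({"m": message_id, "r": recipient})
    pixel = (f'<img src="{TRACKER_HOST}?{query}" '
             f'width="1" height="1" alt="" style="display:none">')
    return html_body + pixel

def decode_hit(request_url):
    """What the tracker learns the instant the mail client fetches the image.
    (The server also sees the reader's IP address, hence rough location,
    the User-Agent, hence device, and a timestamp for every open.)"""
    params = parse_qs(urlparse(request_url).query)
    return {"message_id": params["m"][0], "recipient": params["r"][0]}

html = embed_pixel("<p>Hi!</p>", "msg-42", "mom@example.com")
pixel_url = html.split('src="')[1].split('"')[0]
hit = decode_hit(pixel_url)  # {'message_id': 'msg-42', 'recipient': 'mom@example.com'}
```

The recipient never has to click anything; merely rendering the email triggers the request, which is why disabling remote images (covered below) is the standard defense.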

If you’re willing to put in a little work, you can spot who among your contacts is using Superhuman by following these instructions.

The workaround is to disable images by default in email. The method varies in different email apps but will typically be located somewhere in the settings.

Apps like Gmail have tried for years to scrub tracking pixels. Marketers and other users sending these tracking tools out have been battling, sometimes successfully, to continue to track Gmail’s billion users without their permission.

In that case, disabling images by default is the only sure-fire way to go. When you do allow images in an email, know that you may be instantly giving up a small fortune of information to the sender — and whoever they’re working with — without even realizing it.

Source: Silicon Valley’s Hottest Email App Raises Ethical Questions About the Future of Email

We are shocked to learn that China, an oppressive surveillance state, injects spyware into visitors’ phones

The New York Times reported today that guards working the border with Kyrgyzstan in the Xinjiang region have insisted on putting an app called Fengcai on the Android devices of visitors – including tourists, journalists, and other foreigners.

The Android app is said to harvest details from the handset ranging from text messages and call records to contacts and calendar entries. It also apparently checks whether the device contains any of 73,000 proscribed documents, including missives from terrorist groups such as ISIS recruitment fliers and bomb-making instructions. China being China, it also looks for information on the Dalai Lama and – bizarrely – mentions of a Japanese grindcore band.

Visitors using iPhones had their mobes connected to a different, hardware-based device that is believed to install similar spyware.

This is not the first report of Chinese authorities using spyware to keep tabs on people in the Xinjiang region, though it is the first time tourists are believed to have been the primary target. The app doesn’t appear to be used at any other border crossings into the Middle Kingdom.

In May, researchers with German security company Cure53 described a similar app, known as BXAQ, that was not only collecting data from Android phones but also sending that harvested information over an insecure HTTP connection, putting visitors in even more danger from third parties who might be eavesdropping.

The remote region in northwest China has for decades seen conflict between the government and local Muslim and ethnic Uighur communities, with reports of massive reeducation camps being set up in the area. Beijing has also become increasingly reliant on digital surveillance tools to maintain control over its population, and use of intrusive software in Xinjiang to monitor the locals has become more common.

Human Rights Watch also reported that those living in the region sometimes had their phones spied on by a police-installed app called IJOP, while in 2018 word emerged that a mandatory spyware tool called Jing Wang was being pushed to citizens in the region.

Source: We are shocked to learn that China, an oppressive surveillance state, injects spyware into visitors’ phones • The Register

The Americans just force you to unlock the phone for them…

Google’s new reCaptcha forces page admins to put it on EVERY page so Google can track you everywhere

According to tech statistics website BuiltWith, more than 650,000 websites are already using reCaptcha v3; overall, at least 4.5 million websites use reCaptcha, including 25% of the top 10,000 sites. Google is also now testing an enterprise version of reCaptcha v3, where Google creates a customized reCaptcha for enterprises that are looking for more granular data about users’ risk levels to protect their site algorithms from malicious users and bots.

But this new, risk-score based system comes with a serious trade-off: users’ privacy.

According to two security researchers who’ve studied reCaptcha, one of the ways that Google determines whether you’re a malicious user or not is whether you already have a Google cookie installed on your browser. It’s the same cookie that allows you to open new tabs in your browser and not have to re-log in to your Google account every time. But according to Mohamed Akrout, a computer science PhD student at the University of Toronto who has studied reCaptcha, it appears that Google is also using its cookies to determine whether someone is a human in reCaptcha v3 tests. Akrout wrote in an April paper that reCaptcha v3 simulations run on a browser with a connected Google account received lower risk scores than those run on browsers without one. “If you have a Google account it’s more likely you are human,” he says. Google did not respond to questions about the role that Google cookies play in reCaptcha.

With reCaptcha v3, technology consultant Marcos Perona and Akrout’s tests both found that their reCaptcha scores were always low risk when they visited a test website on a browser where they were already logged into a Google account. Conversely, if they went to the test website from a private browser like Tor or a VPN, their scores were high risk.

To make this risk-score system work accurately, website administrators are supposed to embed reCaptcha v3 code on all of the pages of their website, not just on forms or log-in pages. Then, reCaptcha learns over time how their website’s users typically act, helping the machine learning algorithm underlying it to generate more accurate risk scores. Because reCaptcha v3 is likely to be on every page of a website, if you’re signed into your Google account there’s a chance Google is getting data about every single webpage you go to that is embedded with reCaptcha v3, and there may be no visual indication on the site that it’s happening, beyond a small reCaptcha logo hidden in the corner.
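On the server side, a site embedding reCaptcha v3 forwards each token its pages collect to Google's siteverify endpoint and gets back a risk score between 0.0 and 1.0 to compare against its own threshold. The Python sketch below shows that shape; the secret, token, and stubbed score are made up, and the HTTP call is injectable so the demo can run offline:

```python
import json
from urllib import request, parse

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def check_token(secret, token, fetch=None, threshold=0.5):
    """POST the client-side token to Google's siteverify endpoint and
    compare the returned risk score against a site-chosen threshold."""
    if fetch is None:  # default: a real HTTP POST to Google
        def fetch(url, data):
            with request.urlopen(url, data=data) as resp:
                return resp.read()
    body = parse.urlencode({"secret": secret, "response": token}).encode()
    reply = json.loads(fetch(VERIFY_URL, body))
    return reply.get("success", False) and reply.get("score", 0.0) >= threshold

# Offline demo with a stubbed reply, like the low score a Tor/VPN session
# without a Google cookie might receive:
stub = lambda url, data: json.dumps({"success": True, "score": 0.1}).encode()
allowed = check_token("my-secret", "client-token", fetch=stub)  # False: 0.1 < 0.5
```

The threshold is entirely the site's choice, which is exactly why Google encourages embedding the script everywhere: more page views mean more behavioral signal per visitor.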

Source: Google’s new reCaptcha has a dark side

Mozilla Has a New Tool for Tricking Advertisers Into Believing You’re Filthy Rich

If you notice the ads being served to you are eerily similar to stuff you were just browsing online, it’s not all in your head, and it’s the insidious truth of existing online without installing a bunch of browser extensions. But there’s now a tool that, while comically absurd in execution, can stick it to the man (advertisers) by effectively disguising your true interests. Hope you like tabs.

The tool, called Track THIS, was developed by the Mozilla Firefox folks and lets you pick one of four profiles—Hypebeast, Filthy Rich, Doomsday, or Influencer. You’ll then allow the tool to open 100 tabs based on the associated profile type. Data brokers and advertisers build a profile on you based on how you navigate the internet, which includes the webpages you visit. So whichever one of these personalities you choose will, theoretically, be how advertisers view you, which in turn will influence the type of ads you see.

I tried out both the Filthy Rich and Doomsday Prepper profiles. It took a few minutes for all 100 tabs to open up for each on Chrome. (If you’re on a computer that doesn’t have much RAM, just know that you might have to restart after everything freezes.) For the former, there were a lot of yacht sites, luxury designers, stock market sites, expensive watches, some equestrian real estate brokers, a page to sign up for a Mastercard Gold Card, and a page to book a room at the MGM Grand. For the latter, links to survival supplies and checklists, tents, mylar blankets, doomsday movies, and a lot (a lot) of conspiracy theories. I’m about to get served some ads for some luxury-ass Hazmat suits.


As Mozilla noted in a blog post announcing the tool, it’ll likely only work as intended for a few days and then will revert back to showing you ads more in tune with your actual preferences. “This will show you ads for products you might not be interested in at all, so it’s really just throwing off brands who want to advertise to a very specific type of person,” the company wrote. “You’ll still be seeing ads. And eventually, if you just use the internet as you typically would day to day, you’ll start seeing ads again that align more closely to your normal browsing habits.”

Of course, you’re probably not going to fire up 100 tabs routinely to trick advertisers—the tool is more of a brilliantly ridiculous nod to the lengths we have to go to only temporarily be just a little less intimately targeted.

Source: Mozilla Has a New Tool for Tricking Advertisers Into Believing You’re Filthy Rich

Chrome is the biggest snoop of all on your computer or cell phone – so switch browser before there is no alternative any more

You open your browser to look at the Web. Do you know who is looking back at you?

Over a recent week of Web surfing, I peered under the hood of Google Chrome and found it brought along a few thousand friends. Shopping, news and even government sites quietly tagged my browser to let ad and data companies ride shotgun while I clicked around the Web.

This was made possible by the Web’s biggest snoop of all: Google. Seen from the inside, its Chrome browser looks a lot like surveillance software.

Lately I’ve been investigating the secret life of my data, running experiments to see what technology really gets up to under the cover of privacy policies that nobody reads. It turns out, having the world’s biggest advertising company make the most popular Web browser was about as smart as letting kids run a candy shop.

It made me decide to ditch Chrome for a new version of nonprofit Mozilla’s Firefox, which has default privacy protections. Switching involved less inconvenience than you might imagine.

My tests of Chrome vs. Firefox unearthed a personal data caper of absurd proportions. In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker “cookies” that Chrome would have ushered right onto my computer but were automatically blocked by Firefox. These little files are the hooks that data firms, including Google itself, use to follow what websites you visit so they can build profiles of your interests, income and personality.

Chrome welcomed trackers even at websites you would think would be private. I watched Aetna and the Federal Student Aid website set cookies for Facebook and Google. They surreptitiously told the data giants every time I pulled up the insurance and loan services’ log-in pages.

And that’s not the half of it.

Look in the upper right corner of your Chrome browser. See a picture or a name in the circle? If so, you’re logged in to the browser, and Google might be tapping into your Web activity to target ads. Don’t recall signing in? I didn’t, either. Chrome recently started doing that automatically when you use Gmail.

Chrome is even sneakier on your phone. If you use Android, Chrome sends Google your location every time you conduct a search. (If you turn off location sharing it still sends your coordinates out, just with less accuracy.)

Firefox isn’t perfect — it still defaults searches to Google and permits some other tracking. But it doesn’t share browsing data with Mozilla, which isn’t in the data-collection business.

At a minimum, Web snooping can be annoying. Cookies are how a pair of pants you look at in one site end up following you around in ads elsewhere. More fundamentally, your Web history — like the color of your underpants — ain’t nobody’s business but your own. Letting anyone collect that data leaves it ripe for abuse by bullies, spies and hackers.

[…]

Choosing a browser is no longer just about speed and convenience — it’s also about data defaults.

It’s true that Google usually obtains consent before gathering data, and offers a lot of knobs you can adjust to opt out of tracking and targeted advertising. But its controls often feel like a shell game that results in us sharing more personal data.

I felt hoodwinked when Google quietly began signing Gmail users into Chrome last fall. Google says the Chrome shift didn’t cause anybody’s browsing history to be “synced” unless they specifically opted in — but I found mine was being sent to Google and don’t recall ever asking for extra surveillance. (You can turn off the Gmail auto-login by searching “Gmail” in Chrome settings and switching off “Allow Chrome sign-in.”)

After the sign-in shift, Johns Hopkins associate professor Matthew Green made waves in the computer science world when he blogged he was done with Chrome. “I lost faith,” he told me. “It only takes a few tiny changes to make it very privacy unfriendly.”

When you use Chrome, signing into Gmail automatically logs in the browser to your Google account. When “sync” is also on, Google receives your browsing history.

There are ways to defang Chrome, which is much more complicated than just using “Incognito Mode.” But it’s much easier to switch to a browser not owned by an advertising company.

Like Green, I’ve chosen Firefox, which works across phones, tablets, PCs and Macs. Apple’s Safari is also a good option on Macs, iPhones and iPads, and the niche Brave browser goes even further in trying to jam the ad-tech industry.

What does switching to Firefox cost you? It’s free, and downloading a different browser is much simpler than changing phones.

[…]

And as a nonprofit, it earns money when people make searches in the browser and click on ads — which means its biggest source of income is Google. Mozilla’s chief executive says the company is exploring new paid privacy services to diversify its income.

Its biggest risk is that Firefox might someday run out of steam in its battle with the Chrome behemoth. Even though it’s the No. 2 desktop browser, with about 10 percent of the market, major sites could decide to drop support, leaving Firefox scrambling.

If you care about privacy, let’s hope for another David and Goliath outcome.

Source: Google is the biggest snoop of all on your computer or cell phone

FYI: Your Venmo transfers with those edgy emojis aren’t private by default. And someone’s put 7m of them into a public DB

Graduate student Dan Salmon has released online seven million Venmo transfers, scraped from the social payment biz in recent months, to call attention to the privacy risks of public transaction data.

Venmo, for the uninitiated, is an app that allows friends to pay each other money for stuff. El Reg‘s Bay Area vultures primarily use it for settling restaurant and bar bills that we have no hope of expensing; one person pays on their personal credit card, and their pals transfer their share via Venmo. It makes picking up the check a lot easier.

Because it’s the 2010s, by default, Venmo makes those transactions public along with attached messages and emojis, sorta like Twitter but for payments, allowing people to pry into strangers’ spending and interactions. Who went out with whom for drinks, who owed someone a sizable debt, who went on vacation, and so on.

“I am releasing this dataset in order to bring attention to Venmo users that all of this data is publicly available for anyone to grab without even an API key,” said Salmon in a post to GitHub. “There is some very valuable data here for any attacker conducting [open-source intelligence] research.”

[…]

Despite past criticism from privacy advocates and a settlement with the US Federal Trade Commission, Venmo has kept person-to-person purchases public by default.

[…]

Last July, Berlin-based researcher Hang Do Thi Duc explored some 200m Venmo transactions from 2017 and set up a website, PublicByDefault.fyi, to peruse the e-commerce data. Her stated goal was to change people’s attitudes about sharing data unnecessarily.

When The Register asked about transaction privacy last year, after a developer created a bot that tweeted Venmo purchases mentioning drugs, a company spokesperson said, “Like on other social networks, Venmo users can choose what they want to share on the Venmo public feed. There are a number of different settings that users can customize when it comes to sharing payments on Venmo.”

The current message from the company is not much different: “Venmo was designed for sharing experiences with your friends in today’s social world, and the newsfeed has always been a big part of this,” a Venmo spokesperson told The Register in an email. “Our users trust us with their money and personal information, and we take this responsibility very seriously.”

“I think Venmo is resisting calls to make their data private because it would go against the entire pitch of the app,” said Salmon. “Venmo is designed to be a ‘social’ app and the more open and social you make things, the more you open yourself to problems.”

Venmo’s privacy policy details all the ways in which customer data is not private.

Source: FYI: Your Venmo transfers with those edgy emojis aren’t private by default. And someone’s put 7m of them into a public DB • The Register

Readability of privacy policies for big tech companies visualised

For The New York Times, Kevin Litman-Navarro plotted the length and readability of privacy policies for large companies:

To see exactly how inscrutable they have become, I analyzed the length and readability of privacy policies from nearly 150 popular websites and apps. Facebook’s privacy policy, for example, takes around 18 minutes to read in its entirety – slightly above average for the policies I tested.

The comparison is between websites with a focus on Facebook and Google, but the main takeaway I think is that almost all privacy policies are complex, because they’re not there for the users.
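As a rough illustration of the kind of scoring such an analysis involves, the classic Flesch reading-ease formula (not necessarily the exact metric the Times used) can be computed in a few lines of Python with a crude syllable heuristic:

```python
import re

def flesch_reading_ease(text):
    """Flesch reading ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Higher scores mean easier text; dense legalese scores low or even negative."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    def syllables(word):  # crude heuristic: count groups of vowels
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))
    n_words = max(1, len(words))
    n_syll = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syll / n_words)

easy = flesch_reading_ease("The cat sat. The dog ran.")
hard = flesch_reading_ease(
    "Notwithstanding the aforementioned contractual obligations, "
    "personally identifiable information may be disseminated to affiliates."
)
```

Run the two samples through it and the toy children's sentences score far above the privacy-policy pastiche, which is the whole point of the visualization: these documents are written to be technically disclosed, not actually read.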

Source: Readability of privacy policies for big tech companies | FlowingData

Popular Soccer App Spied on Fans Through Phone Microphone to Catch Bars Pirating Game Streams

Spain’s data protection agency has fined La Liga, the nation’s top professional soccer league, 250,000 euros ($283,000 USD) for using the league’s phone app to spy on its fans. With millions of downloads, the app was reportedly being used to surveil bars in an effort to catch establishments playing matches on television without a license.

The La Liga app provides users with schedules, player rankings, statistics, and league news. It also knows when they’re watching games and where.

According to Spanish newspaper El País, the league told authorities that when its apps detected users were in bars the apps would record audio through phone microphones. The apps would then use the recording to determine if the user was watching a soccer game, using technology that’s similar to the Shazam app. If a game was playing in the vicinity, officials would then be able to determine if that bar location had a license to play the game.
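Shazam-style matching works by reducing audio to a compact fingerprint of spectrogram peaks and comparing fingerprints rather than raw sound. The toy Python sketch below is purely illustrative, not La Liga's actual code, and captures only the core idea (one dominant frequency bin per frame; real systems hash constellations of peak pairs and are robust to noise):

```python
import cmath
import math

def dominant_bins(samples, frame=64):
    """Toy acoustic fingerprint: the loudest DFT bin of each frame.
    Real Shazam-style systems hash pairs of spectrogram peaks instead."""
    fingerprint = []
    for start in range(0, len(samples) - frame + 1, frame):
        chunk = samples[start:start + frame]
        # Naive DFT magnitude for bins 1 .. frame/2 - 1 (skipping DC)
        mags = [abs(sum(chunk[n] * cmath.exp(-2j * cmath.pi * k * n / frame)
                        for n in range(frame)))
                for k in range(1, frame // 2)]
        fingerprint.append(mags.index(max(mags)) + 1)
    return fingerprint

def match_score(fp_a, fp_b):
    """Fraction of frames whose dominant bin agrees."""
    same = sum(a == b for a, b in zip(fp_a, fp_b))
    return same / max(1, min(len(fp_a), len(fp_b)))

# A pure tone at bin 5 stands in for the broadcast; bin 13 for unrelated audio.
broadcast = [math.sin(2 * math.pi * 5 * n / 64) for n in range(256)]
bar_audio = [math.sin(2 * math.pi * 5 * n / 64) for n in range(256)]
unrelated = [math.sin(2 * math.pi * 13 * n / 64) for n in range(256)]
```

The key privacy point is that the app never needs to upload raw audio; a short fingerprint computed on the phone is enough to tell the server which match is playing nearby.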

So not only was the app spying on fans, but it was also turning those fans into unwitting narcs. El Diario reports that the app has been downloaded 10 million times.

Source: Popular Soccer App Spied on Fans Through Phone Microphone to Catch Bars Pirating Game Streams

The fine is insanely low, especially considering it’s the Spanish billionaires club that has to pay it.

The Russian Government Now Requires Tinder to Hand Over People’s Sexts

Tinder users in Russia may now have to decide whether the perks of dating apps outweigh a disconcerting invasion of privacy. Russian authorities are now requiring that the dating app hand over a wealth of intimate user data, including private messages, if and when it asks for them.

Tinder is the fourth dating app in the nation to be forced to comply with the Russian government’s request for user data, Moscow Times reports, and it’s among 175 services that have already consented to share information with the nation’s Federal Security Service, according to a registry online.

Tinder was added to the list of services that have to comply with the Russian data requests last Friday, May 31. The data Tinder must collect and provide to Russia upon request includes user data and all communications including audio and video. According to Tinder’s privacy policy, it does collect all your basic profile details, such as your date of birth and gender as well as the content you publish and your chats with other users, among other information. Which means the Russian government could get its hands on your sexts, your selfies, and even details on where you’ve been or where you might be going if it wants to.

It’s unclear if the possible data requests will apply to just Tinder users within Russia or any users of the dating app, regardless of where they are. If it’s the latter, it points to an unsettling reality in which one nation is able to extend its reach into the intimate data of people all over the world by simply making the request to any complying service that happens to also operate in Russia.

We have reached out to Tinder about which users this applies to, whether it will comply with this request, and what type of data it will share with the Russian authorities. We will update when we hear back. According to the Associated Press, Russia’s communications regulator confirmed on Monday that the company had shared information with it.

The Russian government is not only targeting Tinder. As the lengthy registry online indicates, a large and diverse range of services are already on the list and have been for years. This includes Snap, Wechat, Vimeo, and Badoo, another popular dating app in Russia.

Telegram famously objected to the Russian authorities’ request for its encryption keys last year, which resulted in the government banning the encrypted messaging app. It was an embarrassing mess for Russian internet service providers, which in their attempt to block workarounds for the messaging app, disrupted a litany of services online.

Source: The Russian Government Now Requires Tinder to Hand Over People’s Sexts

EU countries and car manufacturers, navigation systems will share information between everyone

Advanced Driver Assistance Systems (ADAS) in cars, such as automatic braking, road-condition detection, blind-spot monitoring, and navigation systems, will share their data with European countries, car manufacturers, and presumably insurers, under the cloak of making driving safer. I’m sure it will, but I still don’t feel comfortable having the government know where I am at all times and what my driving style is like.

The link below is in Dutch.

Source: EU-landen en autofabrikanten delen informatie voor meer verkeersveiligheid – Emerce

Apple’s privacy schtick is just an act, say folks suing the iGiant: iTunes ‘purchase histories sold’ to highest bidders

Apple has been hit with a class-action complaint in the US accusing the iGiant of playing fast and loose with the privacy of its customers.

The lawsuit [PDF], filed this month in a northern California federal district court, claims the Cupertino music giant gathers data from iTunes – including people’s music purchase history and personal information – then hands that info over to marketers in order to turn a quick buck.

“To supplement its revenues and enhance the formidability of its brand in the eyes of mobile application developers, Apple sells, rents, transmits, and/or otherwise discloses, to various third parties, information reflecting the music that its customers purchase from the iTunes Store application that comes pre-installed on their iPhones,” the filing alleged.

“The data Apple discloses includes the full names and home addresses of its customers, together with the genres and, in some cases, the specific titles of the digitally-recorded music that its customers have purchased via the iTunes Store and then stored in their devices’ Apple Music libraries.”

What’s more, the lawsuit goes on to claim that the data Apple sells is then combined by the marketers with information purchased from other sources to create detailed profiles on individuals that allow for even more targeted advertising.

Additionally, the lawsuit alleges the Music APIs Apple includes in its developer kit can allow third-party devs to harvest similarly detailed logs of user activity for their own use, further violating the privacy of iTunes customers.

The end result, the complaint states, is that Cook and Co are complicit in the illegal harvesting and reselling of personal data, all while pitching iOS and iTunes as bastions of personal privacy and data security.

“Apple’s disclosures of the personal listening information of plaintiffs and the other unnamed Class members were not only unlawful, they were also dangerous because such disclosures allow for the targeting of particularly vulnerable members of society,” the complaint reads.

“For example, any person or entity could rent a list with the names and addresses of all unmarried, college-educated women over the age of 70 with a household income of over $80,000 who purchased country music from Apple via its iTunes Store mobile application. Such a list is available for sale for approximately $136 per thousand customers listed.”

Source: Apple’s privacy schtick is just an act, say folks suing the iGiant: iTunes ‘purchase histories sold’ to highest bidders • The Register

Newly Released Amazon Patent Shows Just How Much Creepier Alexa Can Get

A newly revealed patent application filed by Amazon is raising privacy concerns over an envisaged upgrade to the company’s smart speaker systems. This change would mean that, by default, the devices end up listening to and recording everything you say in their presence.

Alexa, Amazon’s virtual assistant system that runs on the company’s Echo series of smart speakers, works by listening out for a ‘wakeword’ that tells the device to turn on its extended speech recognition systems in order to respond to spoken commands.

[…]

In theory, Alexa-enabled devices will only record what you say directly after the wakeword, which is then uploaded to Amazon, where remote servers use speech recognition to deduce your meaning, then relay commands back to your local speaker.

But one issue in this flow of events, as Amazon’s recently revealed patent application argues, is that it means anything you say before the wakeword isn’t actually heard.

“A user may not always structure a spoken command in the form of a wakeword followed by a command (eg. ‘Alexa, play some music’),” the Amazon authors explain in their patent application, which was filed back in January, but only became public last week.

“Instead, a user may include the command before the wakeword (eg. ‘Play some music, Alexa’) or even insert the wakeword in the middle of a command (eg. ‘Play some music, Alexa, the Beatles please’). While such phrasings may be natural for a user, current speech processing systems are not configured to handle commands that are not preceded by a wakeword.”

To overcome this barrier, Amazon is proposing an effective workaround: simply record everything the user says all the time, and figure it out later.

Rather than only record what is said after the wakeword is spoken, the system described in the patent application would effectively continuously record all speech, then look for instances of commands issued by a person.
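The mechanism the patent describes can be sketched as a rolling buffer: the device keeps a short window of recent speech at all times, so that when the wakeword is finally spotted, the words spoken *before* it can be recovered retroactively. Below is a minimal toy model in Python; it operates on already-recognized words rather than raw audio, and the `capture_command` helper and its parameters are invented for illustration, not taken from the patent.

```python
from collections import deque

WAKEWORD = "alexa"

def capture_command(utterance, pre_buffer=8):
    """Toy model of retroactive wakeword capture.

    'utterance' is a list of recognized words (a stand-in for audio).
    Everything is buffered continuously; when the wakeword appears,
    the buffered pre-wakeword speech is kept and later words are
    appended. Returns the command with the wakeword stripped, or
    None if the wakeword never occurs.
    """
    rolling = deque(maxlen=pre_buffer)   # continuously "recording"
    heard_wakeword = False
    command = []
    for word in utterance:
        token = word.lower().strip(",")
        if token == WAKEWORD:
            heard_wakeword = True
            command = list(rolling)      # retroactively keep what came before
        elif heard_wakeword:
            command.append(token)        # normal post-wakeword capture
        else:
            rolling.append(token)        # buffered even with no wakeword yet
    return command if heard_wakeword else None

# The patent's motivating example, with the wakeword mid-sentence:
print(capture_command("Play some music, Alexa, the Beatles please".split()))
# → ['play', 'some', 'music', 'the', 'beatles', 'please']
```

The privacy concern falls out of the design directly: for the `rolling` buffer to ever be useful, the device must be capturing speech before it has any reason to believe a command is coming.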

Source: Newly Released Amazon Patent Shows Just How Much Creepier Alexa Can Get

wow – a continuous spy in your home