Whisper App Exposes Entire History of Chat Logs, personal details and location

Whisper, the anonymous messaging app beloved by teens and tweens the world over, has a problem: it’s not as anonymous as we’d thought. The platform is only the latest to brand itself as private by design while leaking sensitive user data into the open, according to a damning Washington Post report out earlier today. According to the sleuths who uncovered the leak, “anonymous” posts on the platform—which tackle everything from closeted homosexuality, to domestic abuse, to unwanted pregnancies—could easily be tied to the original poster.

As is often the case, the culprit was a leaky bucket that housed the platform’s entire posting history since it first came onto the scene in 2012. And because this app has historically courted a ton of teens, a lot of this data can get really unsavory, really fast. The Post describes pulling a search for users who listed their age as fifteen and getting more than a million results in return, which included not only their posts but any identifying information they gave the platform, like age, ethnicity, gender, and the groups they were a part of—including groups centered on delicate topics like sexual assault.

Whisper told the Post that it shut down the leak once contacted—a point that Gizmodo independently confirmed. Still, the company has yet to crack down on its less-than-satisfying policies around location data. In 2014, Whisper was caught sharing this data with federal researchers as part of research on personnel stationed at military bases. In the years since, it looks like a lot of this data is still up for grabs. Law enforcement officials might have some claim to it, but Gizmodo’s own analysis found multiple targeted advertising partners scooping up user location data as recently as this afternoon.

Source: Whisper App Exposes Entire History of Chat Logs: Report

Utah has given all its camera feeds to an AI, turning the state into a Surveillance Panopticon

The state of Utah has given an artificial intelligence company real-time access to state traffic cameras, CCTV and “public safety” cameras, 911 emergency systems, location data for state-owned vehicles, and other sensitive data.

The company, called Banjo, says that it’s combining this data with information collected from social media, satellites, and other apps, and claims its algorithms “detect anomalies” in the real world.

The lofty goal of Banjo’s system is to alert law enforcement to crimes as they happen. It claims it does this while somehow stripping all personal data from the system, allowing it to help cops without putting anyone’s privacy at risk. As with other algorithmic crime systems, there is little public oversight or information about how, exactly, the system determines what is worth alerting cops to.

Source: This Small Company Is Turning Utah Into a Surveillance Panopticon – VICE

Clearview AI: We Are ‘Working to Acquire All U.S. Mugshots’ From Past 15 Years

Clearview AI worked to build a national database of every mug shot taken in the United States during the past 15 years, according to an email obtained by OneZero through a public records request.

The email, sent by a representative for Clearview AI in August 2019, was in response to an inquiry from the Green Bay Police Department in Wisconsin, which had asked if there was a way to upload its own mug shots to Clearview AI’s app.

“We are… working to acquire all U.S. mugshots nationally from the last 15 years, so once we have that integrated in a few months’ time it might just be superfluous anyway,” wrote the Clearview AI employee, whose name was redacted.

Clearview AI is best known for scraping the public internet, including social media, for billions of images to power its facial recognition app, which was first reported on by the New York Times. Some of those images are pulled from online repositories of mug shots, like Rapsheets.org and Arrests.org, according to other emails obtained by OneZero. Acquiring a national mug shot database would make Clearview AI an even more powerful tool for police departments, which would be able to easily match a photograph of an individual against their criminal history.

Clearview AI did not immediately respond to a request for comment from OneZero. It is unclear whether the company ultimately succeeded in acquiring such a database.

Source: Clearview AI: We Are ‘Working to Acquire All U.S. Mugshots’ From Past 15 Years

Clearview AI Let Celebs, Investors Use Facial Recognition App for fun

Creepy facial recognition firm Clearview AI—which claims to have built an extensive database from billions of photos scraped from the public web—allowed the rich and powerful to use its app as a personal plaything and spy tool, according to reporting from the New York Times on Thursday.

Clearview and its founder, Hoan Ton-That, claim that the database is only supposed to be used by law enforcement and “select security professionals” in the course of investigations. Prior reports from the Times revealed that hundreds of law enforcement agencies, including the Department of Justice and Immigration and Customs Enforcement, had used Clearview’s biometric tools, which is alarming enough, given the total lack of any U.S. laws regulating how face recognition can be used and its proven potential for mass surveillance of everyone from minorities to political targets. Clearview also pitched itself and its tools to white supremacist Paul Nehlen, then a candidate for Congress, saying it could provide “unconventional databases” for “extreme opposition research.”

But the Times has now found that Clearview’s app was “freely used in the wild by the company’s investors, clients and friends” in situations ranging from showing off at parties to, in the case of billionaire Gristedes founder John Catsimatidis, correctly identifying a man his daughter was on a date with. More alarmingly, Catsimatidis launched a trial run of Clearview’s potential as a surveillance tool at his chain of grocery stores.

Catsimatidis told the Times that a Gristedes in Manhattan had used Clearview to screen for “shoplifters or people who had held up other stores,” adding, “People were stealing our Häagen-Dazs. It was a big problem.” That dovetails with other reporting by BuzzFeed that found Clearview is developing security cameras designed to work with its face recognition tools and that corporations including Kohl’s, Macy’s, and the NBA had tested it.

Source: Clearview AI Let Celebs, Investors Use Facial Recognition App

DuckDuckGo Made a List of Jerks Tracking You Online

DuckDuckGo, a privacy-focused tech company, today launched something called Tracker Radar—an open-source, automatically generated and continually updated list that currently contains more than 5,000 domains that more than 1,700 companies use to track people online.

The idea behind Tracker Radar, first reported by CNET, is to share the data DuckDuckGo has collected to create a better set of tracker blockers. DuckDuckGo says that most existing tracker data falls into two types: block lists and in-browser tracker identification. The issue is that the former relies on crowd-sourcing and manual maintenance, while the latter is difficult to scale and can potentially be abused because it generates a list based on your actual browsing habits. Tracker Radar supposedly gets around some of these issues by looking at the most common cross-site trackers and including a host of information about their behavior—prevalence, fingerprinting, cookies, and privacy policies, among other considerations.

This can be weedsy, especially if the particulars of adtech make your eyeballs roll out of their sockets. The gist is: that creepy feeling you get when you see ads on social media for the product you googled the other day? All that is powered by the types of hidden trackers DuckDuckGo is trying to block. On top of shopping data, these trackers can also glean your search history and location data, along with a number of other metrics. That can then be used to infer things like age, ethnicity, and gender to create a profile that gets shared with other companies looking to profit off you without your explicit consent.

As for how people can actually take advantage of it, it’s a little more roundabout. The average joe mostly benefits by using… DuckDuckGo’s mobile browser apps for iOS and Android, or its desktop browser extensions for Chrome, Firefox, and Safari.

As for developers, DuckDuckGo is encouraging them to create their own tracker block lists. The company is also suggesting researchers use Tracker Radar to help them study online tracking. You can find the data set here.
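
As a rough illustration of what a developer could do with it, here is a minimal sketch that turns a local copy of the Tracker Radar data into a crude block list. The directory layout and the "prevalence" and "fingerprinting" fields are assumptions based on the behaviors described above, not a documented schema; check them against the actual data set before relying on this.

import json
import pathlib

# Minimal sketch: walk a local copy of the Tracker Radar data set and build a
# crude block list from domains that are both widespread and observed
# fingerprinting. Directory layout and field names are assumptions based on the
# behaviors described in the article; verify them against the real files.

DATA_DIR = pathlib.Path("tracker-radar/domains")  # hypothetical local checkout

def build_blocklist(min_prevalence=0.01, min_fingerprinting=1):
    blocklist = []
    for path in DATA_DIR.glob("**/*.json"):
        record = json.loads(path.read_text())
        widespread = record.get("prevalence", 0) >= min_prevalence
        fingerprints = record.get("fingerprinting", 0) >= min_fingerprinting
        if widespread and fingerprints:
            blocklist.append(record.get("domain", path.stem))
    return sorted(set(blocklist))

if __name__ == "__main__":
    for domain in build_blocklist():
        print(domain)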

Source: DuckDuckGo Made a List of Jerks Tracking You Online

After blowing $100m to snoop on Americans’ phone call logs for four years, what did the NSA get? Just one lead

The controversial surveillance program that gave the NSA access to the phone call records of millions of Americans has cost US taxpayers $100m – and resulted in just one useful lead over four years.

That’s the upshot of a report [PDF] from the US government’s freshly revived Privacy and Civil Liberties Oversight Board (PCLOB). The panel dug into the super-snoops’ so-called Section 215 program, which is due to be renewed next month.

Those findings reflect concerns expressed by lawmakers back in November when, at a Congressional hearing, the NSA was unable to give a single example of how the spy program had been useful in the fight against terrorism. At the time, Senator Dianne Feinstein (D-CA) stated bluntly: “If you can’t give us any indication of specific value, there is no reason for us to reauthorize it.”

That value appears to have been, in total, 15 intelligence reports at an overall cost of $100m between 2015 and 2019. Of the 15 reports that mentioned what the PCLOB now calls the “call detail records (CDR) program,” just two of them provided “unique information.” In other words, for the other 13 reports, use of the program reinforced what Uncle Sam’s g-men already knew. In 2018 alone, the government collected more than 434 million records covering 19 million different phone numbers.

What of those two reports? According to the PCLOB overview: “Based on one report, FBI vetted an individual, but, after vetting, determined that no further action was warranted. The second report provided unique information about a telephone number, previously known to US authorities, which led to the opening of a foreign intelligence investigation.”

Source: After blowing $100m to snoop on Americans’ phone call logs for four years, what did the NSA get? Just one lead • The Register

Facebook’s privacy tools are riddled with missing data

Facebook wants you to think it’s consistently increasing transparency about how the company stores and uses your data. But the company still isn’t revealing everything to its users, according to an investigation by Privacy International.

The obvious holes in Facebook’s privacy data exports paint a picture of a company that aims to placate users’ concerns without actually doing anything to change its practices.

Data lists are incomplete — The most pressing issue with Facebook’s downloadable privacy data is that it’s incomplete. Privacy International’s investigation tested the “Ads and Business” section on Facebook’s “Download Your Information” page, which purports to tell users which advertisers have been targeting them with ads.

The investigation found that the list of advertisers actually changes over time, seemingly at random. This essentially makes it impossible for users to develop a full understanding of which advertisers are using their data. In this sense, Facebook’s claims of transparency are inaccurate and misleading.

‘Off-Facebook’ data is misleading — Facebook’s most recent act of “transparency” is its “Off-Facebook Activity” tool, which allows users to “see and control the data that other apps and websites share with Facebook.” But the reports generated by this tool offer extremely limited detail. Some data is marked with a cryptic “CUSTOM” label, while even the best-labeled data gives no context surrounding the reason it’s included in the list.

Nothing to see here — Facebook’s supposed attempts at increased transparency do very little to actually help users understand what the company is doing with their personal data. These tools come off as nothing more than a ploy to take pressure off the company. Meanwhile, the company continues to quietly pay off massive lawsuits over actual user privacy issues.

Facebook doesn’t care about your privacy — it cares about making money. Users would do well to remember that.

Source: Report: Facebook’s privacy tools are riddled with missing data

US Gov wants to spy on all drones all the time: they must be constantly connected to the internet to give Feds real-time location data

Drone enthusiasts are up in arms over rules proposed by the US Federal Aviation Administration (FAA) that would require their flying gizmos to provide real-time location data to the government via an internet connection.

The requirement, for drones weighing 0.55lb (0.25kg) or more, would ground an estimated 80 per cent of gadgets in the United States, and many would never be able to fly again because they couldn’t be retrofitted with the necessary equipment, say drone owners. Those who did buy new drones would also need a monthly data plan for their flying machines: something that would likely cost $35 or more a month, given extortionate US mobile rates.

There are also the additional costs of running what would need to be new drone-location databases, which the FAA expects will be run by private companies but which don’t yet exist, and which drone owners would have to pay for through subscriptions. The cost of all this is prohibitive, for little real benefit, they argue.

If a device loses internet connectivity while flying, and can’t send its real-time info, it must land. It may be possible to pair a drone control unit with, say, a smartphone or a gateway with fixed-line internet connectivity, so that the drone can relay its data to the Feds via these nodes. However, that’s not much use if you’re out in the middle of nowhere, or if you wander into a wireless not-spot.

Nearly 35,000 public comments have been received by the FAA, with the comment period closing later today. The vast majority of the comments are critical and most make the same broad point: that the rules are too strict, too costly and are unnecessary.

The world’s largest drone maker, DJI, is among those fighting the rule change, unsurprisingly enough. The manufacturer argues that while it agrees that every drone should have its own unique ID, the FAA proposal is “complex, expensive and intrusive.”

It would also undermine the industry’s own remote ID solution, which doesn’t require a real-time data connection but instead uses the same radio signals that control drones to broadcast ID information. DJI also flags that the proposed solution has privacy implications: people would be able to track months of someone’s previous drone usage.

Source: Drones must be constantly connected to the internet to give Feds real-time location data – new US govt proposal • The Register

Project Svalbard, Have I Been Pwned will not be sold after all

This is going to be a lengthy blog post so let me use this opening paragraph as a summary of where Project Svalbard is at: Have I Been Pwned is no longer being sold and I will continue running it independently. After 11 months of a very intensive process culminating in many months of exclusivity with a party I believed would ultimately be the purchaser of the service, unexpected changes to their business model made the deal infeasible. It wasn’t something I could have seen coming nor was it anything to do with HIBP itself, but it introduced a range of new and insurmountable barriers. So that’s the tl;dr, let me now share as much as I can about what’s been happening since April 2019 and how the service will operate in the future.

Source: Troy Hunt: Project Svalbard, Have I Been Pwned and its Ongoing Independence

Ring doorbells to change privacy settings after study showed it shared personal information with Facebook and Google

Ring, the Amazon-owned maker of smart-home doorbells and web-enabled security cameras, is changing its privacy settings two weeks after a study showed the company shares customers’ personal information with Facebook, Google and other parties without users’ consent.

The change will let Ring users block the company from sharing most, but not all, of their data. A company spokesperson said people will be able to opt out of those sharing agreements “where applicable.” The spokesperson declined to clarify what “where applicable” might mean.

Ring will announce and start rolling out the opt-out feature soon, the spokesperson told CBS MoneyWatch.

Source: Ring to change privacy settings after study showed it shared personal information with Facebook and Google – CBS News

Facebook Cuts Off Some Mobile Tracking Ad Data With Advertising Partners, should have done this long, long ago

Facebook is tightening its rules around the raw, device-level ad-measurement data that it shares with an elite group of advertising technology partners.

As first spotted by AdAge, the company recently tweaked the terms of service that apply to its “advanced mobile measurement partner” program, which advertisers tap into to track the performance of their ads on Facebook. Those mobile measurement partners (MMPs) were, until now, free to share the raw data they accessed from Facebook with advertisers. These metrics drilled down to the individual device level, which advertisers could then reportedly connect to any device IDs they might already have on tap.

Facebook reportedly began notifying affected partners on February 5, and all advertising partners must agree to the updated terms of the program before April 22.

While Facebook didn’t deliver the device IDs themselves, passing granular insights like the way a given consumer shops or browses the web—and then giving an advertiser free rein to link that data to, well, just about anyone—smacks hard of something that could easily turn Cambridge Analytica-y if the wrong actors got their hands on the data. As AdAge put it:

The program had safeguards that bound advertisers to act responsibly, but there were always concerns that advertisers could misuse the data, according to people familiar with the program. Facebook says that it did not uncover any wrongdoing on the part of advertisers when it decided to update the measurement program. However, the program under its older configuration came with clear risks, according to marketing partners.
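
To make that risk concrete, here is a hypothetical sketch of the kind of join described above: an advertiser that already holds device IDs tied to real customers could line raw device-level measurement rows back up with named people. Every field name and record below is invented for the example; it is not Facebook's or any measurement partner's actual schema.

# Hypothetical illustration of the concern described above: an advertiser that
# already holds device IDs tied to real customers can join raw device-level
# measurement rows straight back to named people. Every field name and record
# here is invented for the example.

measurement_rows = [
    {"device_id": "A1-B2", "ad_clicked": "spring_sale", "action": "purchase"},
    {"device_id": "C3-D4", "ad_clicked": "spring_sale", "action": "browse"},
]

crm_devices = {  # the advertiser's own customer records, keyed by device ID
    "A1-B2": {"email": "jane@example.com", "loyalty_tier": "gold"},
}

for row in measurement_rows:
    customer = crm_devices.get(row["device_id"])
    if customer:
        # Device-level ad behaviour is now tied to a named individual.
        print(customer["email"], "clicked", row["ad_clicked"], "then did:", row["action"])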

Source: Facebook Cuts Off Some Ad Data With Advertising Partners

Apple has blocked Clearview AI’s iPhone app for violating its rules

An iPhone app built by controversial facial recognition startup Clearview AI has been blocked by Apple, effectively banning the app from use.

Apple confirmed to TechCrunch that the startup “violated” the terms of its enterprise developer program.

The app allows its users — who, the company claims, are only law enforcement officers — to use their phone camera or upload a photo to search its database of 3 billion photos. But BuzzFeed News revealed that Clearview’s client list also includes many private-sector users, including Macy’s, Walmart and Wells Fargo.

Clearview AI has been at the center of a media — and legal — storm since its public debut in The New York Times last month. The company scrapes public photos from social media sites, drawing ire from the big tech giants that claim Clearview AI misused their services. But it’s also gained attention from hackers. On Wednesday, Clearview AI confirmed a data breach in which its client list was stolen.

Source: Apple has blocked Clearview AI’s iPhone app for violating its rules | TechCrunch

Clearview AI, Creepy Facial Recognition Company That Stole Your Pictures from Social Media, Says Entire Client List Was Stolen by Hackers

A facial-recognition company that contracts with powerful law-enforcement agencies just reported that an intruder stole its entire client list, according to a notification the company sent to its customers.

In the notification, which The Daily Beast reviewed, the startup Clearview AI disclosed to its customers that an intruder “gained unauthorized access” to its list of customers, to the number of user accounts those customers had set up, and to the number of searches its customers have conducted. The notification said the company’s servers were not breached and that there was “no compromise of Clearview’s systems or network.” The company also said it fixed the vulnerability and that the intruder did not obtain any law-enforcement agencies’ search histories.

Source: Clearview AI, Facial Recognition Company That Works With Law Enforcement, Says Entire Client List Was Stolen

Your car records a lot of things you don’t know about – including you.

Tesla chief executive Elon Musk calls this function Sentry Mode. I also call it Chaperone Mode and Snitch Mode. I’ve been writing recently about how we don’t drive cars, we drive computers. But this experience opened my eyes.

I love that my car recorded a hit-and-run on my behalf. Yet I’m scared we’re not ready for the ways cameras pointed inside and outside vehicles will change the open road — just like the cameras we’re adding to doorbells are changing our neighborhoods.

It’s not just crashes that will be different. Once governments, companies and parents get their hands on car video, it could become evidence, an insurance liability and even a form of control. Just imagine how it will change teenage romance. It could be the end of the idea that cars are private spaces to peace out and get away — an American symbol of independence.

“You are not alone in your car anymore,” says Andrew Guthrie Ferguson, a visiting professor at the American University Washington College of Law and the author of “The Rise of Big Data Policing.”

The moment my car was struck, it sent an alert to my phone and the car speakers began blaring ghoulish classical music, a touch of Musk’s famous bravado. The car saved four videos of the incident, each from a different angle, to a memory stick I installed near the cup holder. (Sentry Mode is an opt-in feature.) You can watch my car lurch when the bus strikes it, spot the ID number on the bus and see its driver’s face passing by moments before.

This isn’t just a Tesla phenomenon. Since 2016, some Cadillacs have let you store recordings from four outward-facing cameras, both as the car is moving and when it’s parked. Chevrolet offers a so-called Valet Mode to record potentially naughty parking attendants; on Corvettes, it bills this camera feature as a “baby monitor for your baby.”

Now there are even face-monitoring cameras in certain General Motors, BMW and Volvo vehicles to make sure you’re not drowsy, drunk or distracted. Most keep a running log of where you’re looking.

Your older car’s camera may not be saving hours of footage, but chances are it keeps at least a few seconds of camera, speed, steering and other data on a hidden “black box” that activates in a crash. And I’m pretty sure your next car would make even 007 jealous; I’ve already seen automakers brag about adding 16 cameras and sensors to 2020 models.

The benefits of this technology are clear. The video clips from my car made a pretty compelling case for the city to pay for my repairs without even getting my insurance involved. Lots of Tesla owners proudly share crazy footage on YouTube. It’s been successfully used to put criminals behind bars.

But it’s not just the bad guys my car records. I’ve got clips of countless people’s behinds scooching by in tight parking lots, because Sentry Mode activates any time something gets close. It’s also recording my family: With another function called Dash Cam that records the road, Tesla has saved hours and hours of my travels — the good driving and the not-so-good alike.

We’ve been down this road before with connected cameras. Amazon’s Ring doorbells and Nest cams also seemed like a good idea, until hackers, stalkers and police tried to get their hands on the video feed. (Amazon founder and chief executive Jeff Bezos owns The Washington Post.) Applied to a car, the questions multiply: Can you just peer in on your teen driver — or spouse? Do I have to share my footage with the authorities? Should my car be allowed to kick me off the road if it thinks I’m sleepy? How long until insurance companies offer “discounts” for direct video access? And is any of this actually making cars safer or less expensive to own?

Your data can and will be used against you. Can we do anything to make our cars remain private spaces?

[…]

design choices may well determine our future privacy. It’s important to remember: Automakers can change how their cameras work with as little as a software update. Sentry Mode arrived out of thin air last year on cars made as early as 2017.

We can learn from smart doorbells and home security devices where surveillance goes wrong.

The problems start with what gets recorded. Home security cameras have so normalized surveillance that they let people watch and listen in on family and neighbors. Today, Tesla’s Sentry Mode and Dash Cam only record video, not audio. The cars have microphones inside, but right now they seem to just be used for voice commands and other car functions — avoiding eavesdropping on potentially intimate car conversations.

Tesla also hasn’t activated a potentially invasive source of video: a camera pointed inside the car, right next to the rear view mirror. But, again, it’s not entirely clear why. CEO Musk tweeted it’s there to be used as part of a future ride-sharing program, implying it’s not used in other ways. Already some Tesla owners are champing at the bit to have it activated for Sentry Mode to see, for example, what a burglar is stealing. I could imagine others demanding live access for a “teen driving” mode.

(Tesla has shied away from perhaps the most sensible use for that inner camera: activating it to monitor whether drivers are paying attention while using its Autopilot driver assistance system, something GM does with its so-called SuperCruise system.)

In other ways, Tesla is already recording gobs. Living in a dense city, my Sentry Mode starts recording between five and seven times per day — capturing lots of people, the vast majority of whom are not committing any crime. (This actually drains the car’s precious battery; some owners estimate it sips about a mile’s worth of the car’s 322-mile potential range for every hour it runs.) Same with the Dash Cam that runs while I’m on the road: it’s recording not just my driving but all the other cars and people on the road, too.

The recordings stick around on a memory card until you delete them or the card fills up and starts writing over the old footage.

[…]

Chevrolet potentially ran afoul of eavesdropping laws when it debuted Valet Mode in 2015, because it was recording audio inside the cabin of the car without disclosure. (Now they’ve cut the audio and added a warning message to the infotainment system.) When it’s on, Tesla’s Sentry Mode activates a warning sign on its large dashboard screen with a pulsing version of the red circle some might remember from the evil HAL-9000 computer in “2001: A Space Odyssey.”

My biggest concern is who can access all that video footage. Connected security cameras let anybody with your password peer in from afar, through an app or the Web.

[…]

Teslas, like most new cars, come with their own independent cellular connections. And Tesla, by default, uploads clips from its customer cars’ external cameras. A privacy control in the car menus says Tesla uses the footage “to learn how to recognize things like lane lines, street signs and traffic light positions.”

[…]

Video from security cameras is already routine in criminal prosecutions. In the case of Ring cameras, the police can make a request of a homeowner, who is free to say no. But courts have also issued warrants to Amazon to hand over the video data it stores on its computers, and it had to comply.

It’s an open question whether police could just seize the video recordings saved on a drive in your car, says Ferguson, the law professor.

“They could probably go through a judge and get a probable cause warrant, if they believe there was a crime,” he says. “It’s a barrier, but is not that high of a barrier. Your car is going to snitch on you.”

Source: My car was in a hit-and-run. Then I learned it recorded the whole thing.

Google users in UK to lose EU data protection, get US non-protection

The shift, prompted by Britain’s exit from the EU, will leave the sensitive personal information of tens of millions with less protection and within easier reach of British law enforcement.

The change was described to Reuters by three people familiar with its plans. Google intends to require its British users to acknowledge new terms of service including the new jurisdiction.

Ireland, where Google and other U.S. tech companies have their European headquarters, is staying in the EU, which has one of the world’s most aggressive data protection rules, the General Data Protection Regulation.

Google has decided to move its British users out of Irish jurisdiction because it is unclear whether Britain will follow GDPR or adopt other rules that could affect the handling of user data, the people said.

If British Google users have their data kept in Ireland, it would be more difficult for British authorities to recover it in criminal investigations.

The recent Cloud Act in the United States, however, is expected to make it easier for British authorities to obtain data from U.S. companies. Britain and the United States are also on track to negotiate a broader trade agreement.

Beyond that, the United States has among the weakest privacy protections of any major economy, with no broad law despite years of advocacy by consumer protection groups.

A Google spokesman declined to comment ahead of a public announcement.

Source: Exclusive: Google users in UK to lose EU data protection – sources – Reuters

Firm Tracking Purchase, Transaction Histories of Millions Not Really Anonymizing Them

The nation’s largest financial data broker, Yodlee, holds extensive and supposedly anonymized banking and credit card transaction histories on millions of Americans. Internal documents obtained by Motherboard, however, appear to indicate that Yodlee clients could potentially de-anonymize those records by simply downloading a giant text file and poking around in it for a while.

According to Motherboard, the 2019 document explains how Yodlee obtains transaction data from partners like banks and credit card companies and what data is collected. That includes a unique identifier associated with the bank or credit card holder, amounts of transactions, dates of sale, which business the transaction was processed at, and bits of metadata, Motherboard wrote; it also includes data relating to purchases involving multiple retailers, such as a restaurant order through a delivery app. The document states that Yodlee is giving clients access to this data in the form of a large text file rather than a Yodlee-run interface.

The document also shows how Yodlee performs “data cleaning” on that text file, which means obfuscating things like account numbers, phone numbers, and SSNs by redacting them with the letters “XXX,” Motherboard wrote. It also scrubs some payroll and financial transfer data, as well as the names of the banking and credit card companies involved.

But this process leaves the unique identifiers, which are shared across each entry associated with a particular account, intact. Research has repeatedly shown that taking supposedly anonymized data and reverse-engineering it to identify the individuals within can be a trivial undertaking, even when no unique identifier is shared across records.

Experts told Motherboard that anyone with malicious intent would just need to verify a purchase was made by a specific individual and they might gain access to all other transactions using the same identifier.

With location and time data on just three to four purchases, an “attacker can unmask the person with a very high probability,” Rutgers University associate professor Vivek Singh told the site. “With this unmasking, the attacker would have access to all the other transactions made by that individual.”

Imperial College London assistant professor Yves-Alexandre de Montjoye, who worked with Singh on a 2015 study that identified shoppers from metadata, wrote to Motherboard that this process appeared to leave the data only “pseudonymized” and that “someone with access to the dataset and some information about you, e.g. shops you’ve been buying from and when, might be able to identify you.”
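
To see why the experts are worried, here is a toy sketch of the linkage attack they describe: a persistent pseudonymous identifier plus knowledge of a handful of purchases (merchant and date) is enough to recover someone's entire transaction history. All of the data and field names below are invented for illustration and have nothing to do with Yodlee's actual file format.

# Sketch of the linkage attack the researchers describe: a persistent
# pseudonymous ID plus a few known purchases is enough to pull someone's full
# transaction history. All data here is made up for the example.

transactions = [
    {"uid": "u_829", "merchant": "Corner Cafe",   "date": "2020-01-03", "amount": 4.50},
    {"uid": "u_829", "merchant": "Metro Transit", "date": "2020-01-03", "amount": 2.75},
    {"uid": "u_829", "merchant": "BookBarn",      "date": "2020-01-07", "amount": 18.00},
    {"uid": "u_144", "merchant": "Corner Cafe",   "date": "2020-01-05", "amount": 3.25},
]

# What the attacker already knows about the target (e.g. a couple of receipts).
known_purchases = {("Corner Cafe", "2020-01-03"), ("BookBarn", "2020-01-07")}

def candidate_uids(rows, known):
    """Return the pseudonymous IDs whose history contains every known purchase."""
    by_uid = {}
    for row in rows:
        by_uid.setdefault(row["uid"], set()).add((row["merchant"], row["date"]))
    return [uid for uid, seen in by_uid.items() if known <= seen]

uids = candidate_uids(transactions, known_purchases)
if len(uids) == 1:
    target = uids[0]
    history = [r for r in transactions if r["uid"] == target]
    print(f"Target unmasked as {target}; full history:", history)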

Yodlee and its owner, Envestnet, are facing serious heat from Congress. Democratic Senators Ron Wyden and Sherrod Brown, as well as Representative Anna Eshoo, recently sent a letter to the Federal Trade Commission asking it to investigate whether the sale of this kind of financial data violates federal law.

“Envestnet claims that consumers’ privacy is protected because it anonymizes their personal financial data,” the congresspeople wrote. “But for years researchers have been able to re-identify the individuals to whom the purportedly anonymized data belongs with just three or four pieces of information.”

Source: Report: Firm Tracking Purchase, Transaction Histories of Millions Maybe Not Really Anonymizing Them

It’s very hard to get anonymity right.

Forcing us to get consent before selling browser histories violates our free speech, US ISPs claim

The US state of Maine is violating internet broadband providers’ free speech by forcing them to ask for their customers’ permission to sell their browser history, according to a new lawsuit.

The case was brought this month by four telco industry groups in response to a new state-level law aimed at providing Maine residents with privacy protections killed at the federal level by the FCC just days before they were due to take effect.

ACA Connects, CTIA, NCTA and USTelecom are collectively suing [PDF] Maine’s attorney general Aaron Frey, and the chair and commissioners of Maine’s Public Utilities Commission claiming that the statute, passed in June 2019, “imposes unprecedented and unduly burdensome restrictions on ISPs’, and only ISPs’, protected speech.”

How so? Because it includes “restrictions on how ISPs communicate with their own customers that are not remotely tailored to protecting consumer privacy.” The lawsuit even explains that there is a “proper way to protect consumer privacy” – and that’s the way the FCC does it, through “technology-neutral, uniform regulation.” Although that regulation is actually the lack of regulation.

If you’re still having a hard time understanding how requiring companies to get their customers’ permission before they sell their personal data infringes the First Amendment, the lawsuit has more details.

It “(1) requires ISPs to secure ‘opt-in’ consent from their customers before using information that is not sensitive in nature or even personally identifying; (2) imposes an opt-out consent obligation on using data that are by definition not customer personal information; (3) limits ISPs from advertising or marketing non-communications-related services to their customers; and (4) prohibits ISPs from offering price discounts, rewards in loyalty programs, or other cost saving benefits in exchange for a customer’s consent to use their personal information.”

All of this results in an “excessive burden” on ISPs, they claim, especially because not everyone else had to do the same. The new statute includes “no restrictions at all on the use, disclosure, or sale of customer personal information, whether sensitive or not, by the many other entities in the Internet ecosystem or traditional brick-and-mortar retailers,” the lawsuit complains.

Discrimination!

This is discrimination, they argue. “Maine cannot discriminate against a subset of companies that collect and use consumer data by attempting to regulate just that subset and not others, especially given the absence of any legislative findings or other evidentiary support that would justify targeting ISPs alone.”

We’ll leave aside, for now, the idea that customers suffer by not receiving marketing materials from the companies ISPs sell their data to, and focus on the core issue: that if Google and Facebook are allowed to sell their users’ personal data, then ISPs feel they should be allowed to as well.

Which is a fair point, although profoundly depressing in a broader context. The basic argument appears to be that we should only provide the minimum protections that are available. Nothing above minimum is legal.

If you look at what the statute actually does, it was clearly written in users’ own interests. It prevents companies from refusing to serve customers who do not agree to let them collect and sell their personal data, and it requires ISPs to take “reasonable measures” to protect that data. Those companies are still allowed to use the data to market their own products; just not to sell it to others to sell theirs.

But because the ISPs successfully managed to get the FCC to kill off its own rules on similar protections, they argue that the scrapping of those rules is the legal precedent here. “The Statute is preempted by federal law because it directly conflicts with and deliberately thwarts federal determinations about the proper way to protect consumer privacy,” the lawsuit argues.

The solution of course is federal privacy protections. But despite overwhelming public support for just such a law, the same ISPs and telcos fighting this law in Maine have flooded Washington DC with lobbying money and campaign contributions to make sure it doesn’t progress through Congress. And if this Maine challenge is successful, next in the ISPs’ sights will be California’s new privacy laws.

Source: Forcing us to get consent before selling browser histories violates our free speech, US ISPs claim • The Register

Vodafone: Yes, we slurp data on customers’ network setups, but we do it for their own good. No, you can’t opt out.

Seeking to improve its pisspoor customer service rating, UK telecoms giant Vodafone has clarified just how much information it slurps from customer networks. You might want to rename those servers, m’kay?

The updates are rather extensive and were noted by customers after a heads-up email arrived from the telco.

One offending paragraph gives Vodafone an awful lot of information about what a customer might be running on their own network:

For providing end user support and optimizing your WiFi experience we are collecting information about connected devices (MAC address, Serial Number, user given host names and WiFi connection quality) as well as information about the WiFi networks (MAC addresses and identifiers, radio statistics).

More accurately, it gives a third party that information. Airties A.S. is the company responsible for hosting information that Vodafone’s support drones might use for diagnostics.

With Vodafone topping the broadband and landline complaint tables, according to the most recent Ofcom data (PDF), the company would naturally want to increase the chances of successfully resolving a customer’s problem. However, there is no way to opt out.

Source: Vodafone: Yes, we slurp data on customers’ network setups, but we do it for their own good • The Register

This Bracelet Prevents Smart Speakers From Spying on You

You probably don’t realize just how many devices in your home or workplace are not only capable of eavesdropping on all your conversations but are specifically designed to. Smartphones, tablets, computers, smartwatches, smart speakers, even voice-activated appliances that have access to smart assistants like Amazon’s Alexa or Google Assistant feature built-in microphones that are constantly monitoring conversations for specific activation words to bring them to life. But accurate voice recognition often requires processing recordings in the cloud on faraway servers, and despite what giant companies keep assuring us, there are obvious and warranted concerns about privacy.

You could simply find yourself a lovely cave deep in the woods and hide out the rest of your days away from technology if you don’t want to be the victim of endless eavesdropping, but this wearable jammer, created by researchers from the University of Chicago, is a (slightly) less drastic alternative. It’s chunky, there’s no denying it, but surrounding an inner core of electronics and batteries is a series of ultrasonic transducers blasting sound waves in all directions. While inaudible to human ears, the ultrasonic signals take advantage of a flaw in sensitive microphone hardware that results in these signals being captured and interfering with recordings of the lower parts of the audio spectrum where the frequencies of human voices fall.

The results are recordings that are nearly incomprehensible to both human ears and the artificial intelligence-powered voice recognition software that smart assistants and other voice-activated devices rely on.

But why pack the technology into a wearable bracelet instead of creating a stationary device you could set up in the middle of a room for complete privacy? An array of transducers pointing in all directions is needed to properly blanket a room in ultrasonic sound waves, but thanks to physics, wherever the signals from two neighboring transducers overlap and fall out of phase, they cancel each other out, creating dead zones where microphones could continue to operate effectively.

By incorporating the jamming hardware into a wearable device, the natural and subconscious movements of the wearer’s arms and hands while they speak keep the transducers in motion. This effectively eliminates the risk of dead zones being created long enough to allow entire words or sentences to be detected by a smart device’s microphone. For those who are truly worried about their privacy, the research team has shared their source code for the signal generator as well as 3D models for the bracelet on GitHub for anyone to download and build themselves. You’ll need to supply your own electronics, and if you’re going to all the trouble, you might as well build one for each wrist, all but ensuring there’s never a dead zone in your silencing shield.
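
For the curious, here is a rough numerical sketch of the dead-zone argument, using a simple two-source interference model: a stationary pair of transducers leaves quiet spots where the path-length difference is an odd number of half-wavelengths, and moving the pair shifts those nulls away. The 25 kHz tone, the 3 cm spacing and the geometry are assumptions made for illustration; this is not the Chicago team's actual design, whose code and models live in their GitHub repository.

import numpy as np

# Toy model of the dead-zone problem described above: two coherent ultrasonic
# sources cancel wherever their path-length difference is an odd number of
# half-wavelengths, leaving spots where a microphone still hears speech.
# The 25 kHz tone, 3 cm spacing and geometry are illustrative assumptions.

SPEED_OF_SOUND = 343.0                    # m/s
WAVELENGTH = SPEED_OF_SOUND / 25_000.0    # ~13.7 mm at 25 kHz

def amplitude(mic, src_a, src_b):
    """Relative level of two equal, in-phase tones at the point `mic`."""
    delta = np.linalg.norm(mic - src_a) - np.linalg.norm(mic - src_b)
    return abs(1 + np.exp(2j * np.pi * delta / WAVELENGTH))  # 2 = loud, ~0 = dead zone

pair = (np.array([-0.015, 0.0]), np.array([0.015, 0.0]))     # transducers 3 cm apart

# Scan a line half a metre away for the quietest spot: a dead zone.
line = [np.array([x, 0.5]) for x in np.linspace(-1.0, 1.0, 2001)]
dead = min(line, key=lambda p: amplitude(p, *pair))
print("dead zone near", dead, "level", round(amplitude(dead, *pair), 3))

# Rotate the pair 20 degrees about its midpoint, as a turning wrist would:
# the null no longer lines up with that spot.
theta = np.radians(20)
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
moved = (rot @ pair[0], rot @ pair[1])
print("same spot after rotation:", round(amplitude(dead, *moved), 3))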

Source: This Punk Bracelet Prevents Smart Speakers From Hearing You

This is nice because Project Alias / Parasite is aimed at a very specific machine, whereas this will protect you wherever you go. It’s just a bit clunky.

Internet Society told to halt .org sale to dodgy companies… by its own advisory council

The Internet Society’s own members are now opposing its sale of the .org internet registry to an unknown private equity firm.

The Chapters Advisory Council, the official voice of Internet Society (ISOC) members, will vote this month on whether to approve a formal recommendation that the society “not proceed [with the sale] unless a number of conditions are met.”

Those conditions largely comprise the publication of additional details and transparency regarding ISOC’s controversial sell-off of .org. Despite months of requests, neither the society nor the proposed purchaser, Ethos Capital, have disclosed critical elements of the deal, including who would actually own the registry if the sale went through.

Meanwhile, word has reached us that Ethos Capital attempted to broker a secret peace treaty this coming weekend in Washington DC by inviting key individuals to a closed-door meeting with the goal of thrashing out an agreement all sides would be happy with. After Ethos insisted the meeting be kept brief, and a number of those opposed to the sale declined to attend, Ethos’s funding for attendees’ flights and accommodation was suddenly withdrawn, and the plan to hold a confab fell apart, we understand.

ISOC – and .org’s current operator, the ISOC-controlled Public Interest Registry (PIR) – are still hoping to push DNS overseer ICANN to make a decision on the .org sale before the end of the month. But that looks increasingly unlikely following an aggressive letter from ICANN’s external lawyers last week insisting ICANN will take as much time as it feels necessary to review the deal.

The overall lack of transparency around the $1.13bn deal has led California’s Attorney General to demand documents relating to the sale – and ISOC’s chapters are demanding the same information as a pre-condition to any sale in their proposed advice to the ISOC board.

That information includes: full details of the transaction; a financial breakdown of what Ethos Capital intends to do with .org’s 10 million domain names; binding commitments on limiting price increases and on free speech protections; and publication of the bylaws and related corporate documents for both the replacement to the current registry operator, PIR, and the proposed “Stewardship Council” which Ethos claims will give .org users a say in future decisions.

Disregarded

“There is a feeling amongst chapters that ISOC seems to have disregarded community participation, failed to properly account for the potential community impact, and misread the community mindset around the .ORG TLD,” the Chapters Advisory Council’s proposed advice to the ISOC board – a copy of which The Register has seen – states.

Although the advisory council has no legal ability to stop ISOC, if the proposed advice is approved by vote, and the CEO and board of trustees push ahead with the sale regardless, it could have severe repercussions for the organization’s non-profit status, and would further undermine ISOC’s position that the sale will “support the Internet Society’s vision that the Internet is for everyone.”

[…]

That lack of transparency was never more clear than when the ISOC board claimed to have met for two weeks in November to discuss the Ethos Capital offer to buy .org, but made no mention of the proposal and only made ISOC members and chapters aware of the decision after it had been made.

With a spotlight on ISOC’s secretive deliberations – and with board members now claiming they are subject to a non-disclosure agreement over the sale – the organization has added skeleton minutes that provide little or no insight into deliberations. It is not clear when those minutes were added – no update date is provided.

“The primary purpose of the Chapters Advisory Council shall be to channel and facilitate advice and recommendations to and from the President and Board of Trustees of the Internet Society in a bottom up manner, on any matters of concern or interest to the Chapter AC and ISOC Chapters,” reads the official description of the council on ISOC’s website.

With Ethos having failed to broker a secret deal, and ICANN indicating that it will consider the public interest in deciding whether to approve the sale, if ISOC’s advisory council does vote to advise the board not to move forward with the sale, the Internet Society will face a stark choice: stick by the secretive billionaires funding the purchase of .org with the added risk of blowing up the entire organization; or walk away from the deal.

Source: Revolution, comrades: Internet Society told to halt .org sale… by its own advisory council • The Register

Google allows random company to DMCA sites with the word ‘Did’ in them, de-indexes (deletes) them without warning or recourse.

In 2018, Sinclair Target wrote an article about Ada Lovelace, the daughter of Lord Byron whom some credit as the world’s first computer programmer, despite being born in 1815. Unfortunately, however, those who search for that article today using Google won’t find it.

The original Tweet announcing the article is still present in Google’s indexes, but the article itself has been removed, thanks to a copyright infringement complaint that also claimed several other victims.

While there could be dozens of reasons an article might infringe someone’s copyrights, the facts here are so absurd as to be almost unbelievable. Sinclair’s article was deleted because an anti-piracy company working on behalf of a TV company decided that since its title (What Did Ada Lovelace’s Program Actually Do?) contained the word ‘DID’, it must be illegal.

This monumental screw-up was announced on Twitter by Sinclair himself, who complained that “Computers are stupid folks. Too bad Google has decided they are in charge.”

At risk of running counter to Sinclair’s claim, in this case – as Lovelace herself would’ve hopefully agreed – it is people who are stupid, not computers. The proof for that can be found in the DMCA complaint sent to Google by RightsHero, an anti-piracy company working on behalf of Zee TV, an Indian pay-TV channel that airs Dance India Dance.

Now in its seventh season, Dance India Dance is a dance competition reality show that is often referred to as DID. And now, of course, you can see where this is going. Because Target and at least 11 other sites dared to use the word in its original context, RightsHero flagged the pages as infringing and asked Google to deindex them.

But things only get worse from here.

Look up the word ‘did’ in any dictionary and you will never find it defined as an acronym for Dance India Dance. Instead, you’ll find the explanation as “past of do” or something broadly along those lines. However, if the complaint sent to Google had achieved its intended effect, finding that out would’ve been more difficult too.

The notice targets not only Target’s article for infringing the copyrights of Dance India Dance (sorry, DID), but also no fewer than four online dictionaries explaining what the word ‘did’ actually means. (Spoiler: none of them say ‘Dance India Dance’.)

Perhaps worse still, some of the other allegedly infringing articles were published by some pretty serious information resources, including:

– USGS Earthquake Hazards Program of the U.S. Geological Survey (Did You Feel It? (DYFI) collects information from people who felt an earthquake and creates maps that show what people experienced and the extent of damage)

– The US Department of Education (Did (or will) you file a Schedule 1 with your 2018 tax return?)

– Nature.com (Did pangolins spread the China coronavirus to people?)
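
RightsHero hasn’t explained how its matching works, but a naive, case-insensitive keyword filter reproduces exactly these false positives. The sketch below is an assumption about that logic, not RightsHero’s actual code; the first three titles come from this article, and the last is an invented example of what the notice presumably meant to target.

import re

# Hypothetical reconstruction of a keyword-based takedown filter. RightsHero's
# real logic is unknown; this only shows how matching the bare acronym "DID"
# sweeps up ordinary English. The last title is invented for the example.
page_titles = [
    "What Did Ada Lovelace's Program Actually Do?",           # Sinclair Target's article
    "Did You Feel It? (DYFI)",                                # USGS
    "Did pangolins spread the China coronavirus to people?",  # Nature.com
    "Dance India Dance (DID) Season 7 Full Episode",          # invented target
]

def naive_match(title: str) -> bool:
    """Flag any title containing 'did' as a standalone word (case-insensitive)."""
    return re.search(r"\bdid\b", title, re.IGNORECASE) is not None

def saner_match(title: str) -> bool:
    """Require the show's full name rather than the bare acronym."""
    return "dance india dance" in title.lower()

for title in page_titles:
    print(f"naive={naive_match(title)!s:5} saner={saner_match(title)!s:5} {title}")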

Considering the scale of the problem here, we tried to contact RightsHero for comment. However, the only anti-piracy company bearing that name has a next-to-useless website that provides no information on where the company is, who owns it, who runs it, or how those people can be contacted.

In the absence of any action by RightsHero, Sinclair Target was left with a single option – issue a counterclaim to Google in the hope of having his page restored.

“I’ve submitted a counter-claim, which seemed to be the only thing I could do,” Target told TorrentFreak.

“Got a cheery confirmation email from Google saying, ‘Thanks for contacting us!’ and that it might be a while until the issue is resolved. I assume that’s because this is the point where finally a decision has to be made by a human being. It is annoying indeed.”

Finally, it’s interesting to take a line from Target’s analysis of Lovelace’s program. “She thought carefully about how operations could be organized into groups that could be repeated, thereby inventing the loop,” he writes.

10 DELETE “DID”
20 PROFIT?
30 GOTO 10

Source: Don’t Use the Word ‘Did’ or a Dumb Anti-Piracy Company Will Delete You From Google – TorrentFreak

How Big Companies Spy on Your Emails

The popular Edison email app, which is in the top 100 productivity apps on the Apple App Store, scrapes users’ email inboxes and sells products based on that information to clients in the finance, travel, and e-commerce sectors. The contents of Edison users’ inboxes are of particular interest to companies who can buy the data to make better investment decisions, according to a J.P. Morgan document obtained by Motherboard.

On its website, Edison says that it does “process” users’ emails, but some users did not realize that, by using the Edison app, they were letting the company scrape their inboxes for profit. Motherboard has also obtained documentation that provides more specifics about how two other popular apps—Cleanfox and Slice—sell products based on users’ emails to corporate clients.

Source: How Big Companies Spy on Your Emails – VICE

The advertising industry is systematically breaking the law, says Norwegian consumer council

Based on the findings, more than 20 consumer and civil society organisations in Europe and from different parts of the world are urging their authorities to investigate the practices of the online advertising industry.

The report uncovers how every time we use apps, hundreds of shadowy entities are receiving personal data about our interests, habits, and behaviour. This information is used to profile consumers, which can be used for targeted advertising, but may also lead to discrimination, manipulation and exploitation.

– These practices are out of control and in breach of European data protection legislation. The extent of tracking makes it impossible for us to make informed choices about how our personal data is collected, shared and used, says Finn Myrstad, director of digital policy in the Norwegian Consumer Council.

The Norwegian Consumer Council is now filing formal complaints against Grindr, a dating app for gay, bi, trans, and queer people, and against companies that were receiving personal data through the app: Twitter’s MoPub, AT&T’s AppNexus, OpenX, AdColony and Smaato. The complaints are directed to the Norwegian Data Protection Authority for breaches of the General Data Protection Regulation.

[…]

– Every time you open an app like Grindr, advertisement networks get your GPS location, device identifiers and even the fact that you use a gay dating app. This is an insane violation of users’ EU privacy rights, says Max Schrems, founder of the European privacy non-profit NGO noyb.

The harmful effects of profiling

Many actors in the online advertising industry collect information about us from a variety of places, including web browsing, connected devices, and social media. When combined, this data provides a complex picture of individuals, revealing what we do in our daily lives, our secret desires, and our most vulnerable moments.

– This massive commercial surveillance is systematically at odds with our fundamental rights and can be used to discriminate, manipulate and exploit us. The widespread tracking also has the potential to seriously degrade consumer trust in digital services, says Myrstad.

– Furthermore, a recent report from Amnesty International showed how these data-driven business models are a serious threat to human rights such as freedom of opinion and expression, freedom of thought, and the right to equality and non-discrimination.

[…]

– The situation is completely out of control. In order to shift the significant power imbalance between consumers and third party companies, the current practices of extensive tracking and profiling have to end, says Myrstad.

– There are very few actions consumers can take to limit or prevent the massive tracking and data sharing that is happening all across the internet. Authorities must take active enforcement measures to protect consumers against the illegal exploitation of personal data.

Source: New study: The advertising industry is systematically breaking the law : Forbrukerrådet

Netflix Loses Bid to Dismiss $25 Million Lawsuit Over ‘Black Mirror: Bandersnatch’ because someone feels they own the phrase ‘choose your own adventure’

Chooseco LLC, a children’s book publisher, filed its complaint in January 2019. According to the plaintiff, it has been using the mark since the 1980s and has sold more than 265 million copies of its Choose Your Own Adventure books. 20th Century Fox holds options for movie versions, and Chooseco alleges that Netflix actively pursued a license. Instead of getting one, Netflix released Bandersnatch, which allows audiences to select the direction of the plot. Claiming $25 million in damages, Chooseco suggested that Bandersnatch viewers have been confused about association with its famous brand, particularly because of marketing around the movie as well as a scene where the main character — a video game developer — tells his father that the work he’s developing is based on a Choose Your Own Adventure book.

In reaction to the lawsuit, Netflix raised a First Amendment defense, particularly the balancing test in Rogers v. Grimaldi, under which, unless a work has no artistic relevance, the use of a mark must be explicitly misleading to be actionable.

U.S. District Court Judge William Sessions agrees that Bandersnatch is an artistic work even if Netflix derived profit from exploiting the Charlie Brooker film.

And the judge says that use of the trademark has artistic relevance.

“Here, the protagonist of Bandersnatch attempts to convert the fictional book ‘Bandersnatch’ into a videogame, placing the book at the center of the film’s plot,” states the ruling. “Netflix used Chooseco’s mark to describe the interactive narrative structure shared by the book, the videogame, and the film itself. Moreover, Netflix intended this narrative structure to comment on the mounting influence technology has in modern day life. In addition, the mental imagery associated with Chooseco’s mark adds to Bandersnatch’s 1980s aesthetic. Thus, Netflix’s use of Chooseco’s mark clears the purposely-low threshold of Rogers’ artistic relevance prong.”

Thus, the final question is whether Netflix’s film is explicitly misleading. Judge Sessions doesn’t believe it’s appropriate to dismiss the case prematurely without exploring factual issues in discovery.

“Here, Chooseco has sufficiently alleged that consumers associate its mark with interactive books and that the mark covers other forms of interactive media, including films,” continues the decision. “The protagonist in Bandersnatch explicitly stated that the fictitious book at the center of the film’s plot was a ‘Choose Your Own Adventure’ book. In addition, the book, the videogame, and the film itself all employ the same type of interactivity as Chooseco’s products. The similarity between Chooseco’s products, Netflix’s film, and the fictitious book Netflix described as a ‘Choose Your Own Adventure’ book increases the likelihood of consumer confusion.”

Netflix also attempted to defend its use of “Choose Your Own Adventure” as descriptive fair use. Here, too, the judge believes that factual exploration is appropriate.

Writes Sessions, “The physical characteristics and context of the use demonstrate that it is at least plausible Netflix used the term to attract public attention by associating the film with Chooseco’s book series.”

The decision adds that while Netflix contends that the phrase in question has been used by others to describe a branch of storytelling, that argument entails consideration of facts outside of Chooseco’s complaint, which at this stage must be accepted as true.

“Additionally, choose your own adventure arguably is not purely descriptive of narrative techniques — it requires at least some imagination to link the phrase to interactive plotlines,” writes Sessions. “Moreover, any descriptive aspects of the phrase may stem from Chooseco’s mark itself. In other words, the phrase may only have descriptive qualities because Chooseco attached it to its popular interactive book series. The Court lacks the facts necessary to determine whether consumers perceive the phrase in a descriptive sense or whether they simply associate it with Chooseco’s brand.”

Here’s the full decision allowing Chooseco’s Lanham Act and unfair competition claims to proceed.

The ruling may be surprising to some, particularly as there’s a line of cases where studios have escaped trademark claims over content. For example, see Warner Bros.’ win a few years ago over “Clean Slate” in The Dark Knight Rises. If Netflix and Chooseco can’t come to a settlement, many of these issues may be re-explored at the summary judgment round.

Source: Netflix Loses Bid to Dismiss $25 Million Lawsuit Over ‘Black Mirror: Bandersnatch’ | Hollywood Reporter

Wow, trademark law is beyond strange.

Data Protection Authority Investigates Avast for Selling Users’ Browsing and Maps History

On Tuesday, the Czech data protection authority announced an investigation into antivirus company Avast, which was harvesting the browsing history of over 100 million users and then selling products based on that data to a slew of different companies including Google, Microsoft, and Home Depot. The move comes after a joint Motherboard and PCMag investigation uncovered details of the data collection through a series of leaked documents.

“On the basis of the information revealed describing the practices of Avast Software s.r.o., which was supposed to sell data on the activities of anti-virus users through its ‘Jumpshot division’ the Office initiated a preliminary investigation of the case,” a statement from the Czech national data protection authority on its website reads. Under the European General Data Protection Regulation (GDPR) and national laws, the Czech Republic, like other EU states, has a data protection authority to enforce rules against things like the mishandling of personal data. With GDPR, companies can be fined for data abuses.

“At the moment we are collecting information on the whole case. There is a suspicion of a serious and extensive breach of the protection of users’ personal data. Based on the findings, further steps will be taken and general public will be informed in due time,” added Ms Ivana Janů, President of the Czech Office for Personal Data Protection, in the statement. Avast is a Czech company.

Motherboard and PCMag’s investigation found that the data sold included Avast users’ Google searches and Google Maps lookups, particular YouTube videos, and people visiting specific porn videos. The data was anonymized, but multiple experts said it could be possible to unmask the identity of users, especially when that data, sold by Avast’s subsidiary Jumpshot, was combined with other data that its clients may possess.

Days after the investigation, Avast bought back a 35 percent stake in Jumpshot worth $61 million and shuttered the subsidiary. Avast’s valuation fell by a quarter; the company will incur costs of between $15 million and $25 million, and the closure of Jumpshot will cut annual revenues by around $36 million and underlying profits by $7 million, The Times reported.

Source: Data Protection Authority Investigates Avast for Selling Users’ Browsing History – VICE