Data Protection Authority Investigates Avast for Selling Users’ Browsing and Maps History

On Tuesday, the Czech data protection authority announced an investigation into antivirus company Avast, which was harvesting the browsing history of over 100 million users and then selling products based on that data to a slew of different companies including Google, Microsoft, and Home Depot. The move comes after a joint Motherboard and PCMag investigation uncovered details of the data collection through a series of leaked documents.

“On the basis of the information revealed describing the practices of Avast Software s.r.o., which was supposed to sell data on the activities of anti-virus users through its ‘Jumpshot division’ the Office initiated a preliminary investigation of the case,” a statement from the Czech national data protection authority on its website reads. Under the EU’s General Data Protection Regulation (GDPR) and national laws, the Czech Republic, like other EU states, has a data protection authority to enforce rules against things like the mishandling of personal data. Under GDPR, companies can be fined for data abuses.

“At the moment we are collecting information on the whole case. There is a suspicion of a serious and extensive breach of the protection of users’ personal data. Based on the findings, further steps will be taken and general public will be informed in due time,” added Ms Ivana Janů, President of the Czech Office for Personal Data Protection, in the statement. Avast is a Czech company.

Motherboard and PCMag’s investigation found that the data sold included Avast users’ Google searches and Google Maps lookups, particular YouTube videos, and visits to specific porn videos. The data was anonymized, but multiple experts said it could be possible to unmask the identity of users, especially when that data, sold by Avast’s subsidiary Jumpshot, was combined with other data that its clients may possess.

Days after the investigation was published, Avast bought back a 35 percent stake in Jumpshot worth $61 million and shuttered the subsidiary. Avast’s valuation fell by a quarter; the company will incur wind-down costs of between $15 million and $25 million, and the closure of Jumpshot will cut annual revenues by around $36 million and underlying profits by $7 million, The Times reported.

Source: Data Protection Authority Investigates Avast for Selling Users’ Browsing History – VICE

Instagram-Scraping Clearview AI Wants To Sell Its Facial Recognition Software To Authoritarian Regimes

As legal pressures and US lawmaker scrutiny mount, Clearview AI, the facial recognition company that claims to have a database of more than 3 billion photos scraped from websites and social media, is looking to grow around the world.

A document obtained via a public records request reveals that Clearview has been touting a “rapid international expansion” to prospective clients using a map that highlights how it either has expanded, or plans to expand, to at least 22 more countries, some of which have committed human rights abuses.

The map, part of a presentation given to the North Miami Police Department in November 2019, includes the United Arab Emirates, a country historically hostile to political dissidents, as well as Qatar and Singapore, whose penal codes criminalize homosexuality.

Clearview CEO Hoan Ton-That declined to explain whether Clearview is currently working in these countries or hopes to work in them. He did confirm that the company, which had previously claimed that it was working with 600 law enforcement agencies, has relationships with two countries on the map.

Source: Instagram-Scraping Clearview AI Wants To Sell Its Facial Recognition Software To Authoritarian Regimes

Almost Every Website You Visit Records Exactly How Your Mouse Moves

When you visit any website, its owner will know where you click, what you type, and how you move your mouse. That’s how websites work: In order to perform actions based on user input, they have to know what that input is.

On its own, that information isn’t all that useful, but many websites today use a service that pulls all of this data together to create session replays of a user’s every move. The result is a video that feels like standing over a user’s shoulder and watching them use the site directly — and what sites can glean from these sorts of tracking tools may surprise you.
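
To make that concrete, here is a minimal sketch, with invented data, of the kind of timestamped event stream a session-replay script batches up and what a replay player does with it. Real services capture far more (DOM snapshots, scrolls, keystrokes) and render the result as video.

```python
# A toy session-replay pipeline: record events, then walk them in order.
# All names and data here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Event:
    t_ms: int      # milliseconds since the session started
    kind: str      # "mousemove", "click", "input", ...
    payload: dict  # coordinates, element selector, typed text, etc.

def replay(events: list[Event]) -> None:
    """Walk the recorded stream in order, as a replay player would."""
    for ev in sorted(events, key=lambda e: e.t_ms):
        print(f"{ev.t_ms:>6} ms  {ev.kind:<10} {ev.payload}")

session = [
    Event(120, "mousemove", {"x": 300, "y": 240}),
    Event(950, "click", {"selector": "#add-to-cart"}),
    Event(4200, "input", {"selector": "#card-number", "value": "4111 1..."}),
    Event(9800, "mousemove", {"x": 40, "y": 12}),  # drifting toward the close button
]
replay(session)
```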

Session replay services have been around for over a decade and are widely used. One service, called FullStory, lists popular sites like Zillow, TeeSpring, and Jane as clients on its website. Another, called LogRocket, boasts Airbnb, Reddit, and CarFax, and a third called Inspectlet lists Shopify, ABC, and eBay among its users. They bill themselves as tools for designing sites that are easy to use and increase desired user behavior, such as buying an item. If many users add items to their cart, but then abandon the purchase at a certain rough part of the checkout process, for instance, the service helps site owners figure out how to change the site’s design to nudge users over the checkout line.

Source: Almost Every Website You Visit Records Exactly How Your Mouse Moves

Block these kinds of trackers using tools like uBlock Origin, Privacy Badger, Ghostery, Facebook Container, Chameleon, and NoScript.

US gov buys all US cell phone location data, wants to use it for deportations

The American Civil Liberties Union plans to fight newly revealed practices by the Department of Homeland Security which used commercially available cell phone location data to track suspected illegal immigrants.

“DHS should not be accessing our location information without a warrant, regardless of whether they obtain it by paying or for free. The failure to get a warrant undermines Supreme Court precedent establishing that the government must demonstrate probable cause to a judge before getting some of our most sensitive information, especially our cell phone location history,” said Nathan Freed Wessler, a staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

Earlier today, The Wall Street Journal reported that Homeland Security, through its Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) agencies, was buying geolocation data from commercial entities to investigate suspects of alleged immigration violations.

The location data, which aggregators acquire from cellphone apps, including games, weather, shopping and search services, is being used by Homeland Security to detect undocumented immigrants and others entering the U.S. unlawfully, the Journal reported.

According to privacy experts interviewed by the Journal, because the data is publicly available for purchase, the government practices don’t appear to violate the law — despite being what may be the largest dragnet ever conducted by the U.S. government using the aggregated data of its citizens.

It’s also an example of how the commercial surveillance apparatus put in place by private corporations in democratic societies can be legally accessed by state agencies to create the same kind of surveillance networks used in more authoritarian countries like China, India and Russia.

“This is a classic situation where creeping commercial surveillance in the private sector is now bleeding directly over into government,” Alan Butler, general counsel of the Electronic Privacy Information Center, a think tank that pushes for stronger privacy laws, told the newspaper.

Source: ACLU says it’ll fight DHS efforts to use app locations for deportations | TechCrunch

How to Remove Windows 10’s Annoying Ads Masquerading as ‘Suggestions’

In a perfect world, every new computer with Windows 10 on it—or every new installation of Windows 10—would arrive free of annoying applications and other bloatware that few people need. (Sorry, Candy Crush Saga.) It would also be free of annoying advertising. That’s not to say that Microsoft is dropping big banners for Coke or something in your OS, but it is frustrating to see it shilling for its Edge browser in your Start Menu.

[…]

To disable these silly suggestions, pull up your Windows 10 Settings menu. From there, click on Personalization, and then click on the Start option in the left-hand sidebar. Look for the following option and disable it: “Show suggestions occasionally in Start”

And while you’re in the Settings app, click on Lock screen. If you aren’t already using a picture or a slideshow as the background, select that, and then deselect the option to “Get fun facts, tips, and more from Windows and Cortana on your lock screen.” In other words, you don’t want to get spammed with suggestions or ads.

Finally, head back to the main Settings screen and click on System. From there, click on “Notifications & actions” in the left-hand sidebar. Because Windows can sometimes get a little spammy and/or advertise Microsoft products to you via notifications, you’ll want to uncheck “Get tips, tricks, and suggestions as you use Windows” to cut that out of your digital life.
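
If you would rather script these toggles than click through Settings, the same switches are commonly reported to live in the registry under the current user’s ContentDeliveryManager key. A minimal sketch in Python; the value names below are assumptions worth verifying against your own build before relying on them:

```python
# Disable Windows 10 "suggestions" by zeroing the ContentDeliveryManager
# values commonly reported to back the three Settings toggles described
# above. Run as the affected user; treat the value names as assumptions.
import winreg

KEY = r"Software\Microsoft\Windows\CurrentVersion\ContentDeliveryManager"
TOGGLES = {
    "SubscribedContent-338388Enabled": "Start menu suggestions",
    "SubscribedContent-338387Enabled": "Lock screen tips and 'fun facts'",
    "SubscribedContent-338389Enabled": "Tips, tricks, and suggestions notifications",
}

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY, 0, winreg.KEY_SET_VALUE) as key:
    for name, description in TOGGLES.items():
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, 0)  # 0 = off
        print(f"Disabled: {description}")
```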

Source: How to Remove Windows 10’s Annoying Ads Masquerading as ‘Suggestions’

Wacom tablet drivers phone home with names, times of every app opened on your computer

Wacom’s official tablet drivers leak to the manufacturer the names of every application opened, and when, on the computers they are connected to.

Software engineer Robert Heaton made this discovery after noticing his drawing tablet’s fine print included a privacy policy that gave Wacom permission to, effectively, snoop on him.

Looking deeper, he found that the tablet’s driver logged each app he opened on his Apple Mac and transmitted the data to Google to analyze. To be clear, we’re talking about Wacom’s macOS drivers here: the open-source Linux ones aren’t affected, though it would seem the Windows counterparts are.
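
Heaton traced the driver’s events to Google Analytics. The sketch below is not Wacom’s actual code, just an illustration of what this kind of telemetry amounts to, written against GA’s classic Measurement Protocol; the tracking ID and field choices are invented:

```python
# What "log every app open and send it to Google Analytics" boils down to.
# Hypothetical tracking ID and client ID; the persistent client ID is what
# makes individual launches linkable into a per-device history.
import requests

def report_app_open(app_bundle_id: str) -> None:
    requests.post(
        "https://www.google-analytics.com/collect",
        data={
            "v": "1",                    # Measurement Protocol version
            "tid": "UA-00000000-1",      # hypothetical property ID
            "cid": "fixed-device-uuid",  # persistent client ID
            "t": "event",
            "ec": "application",         # event category
            "ea": "open",                # event action
            "el": app_bundle_id,         # the app that was just launched
        },
        timeout=5,
    )

report_app_open("com.apple.Terminal")  # fired on every launch
```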

[…]

“Wacom’s request made me pause. Why does a device that is essentially a mouse need a privacy policy?”

Source: Sketchy behavior? Wacom tablet drivers phone home with names, times of every app opened on your computer • The Register

Google’s Takeout App Leaked Videos To Unrelated Users

In a new privacy-related fuckup, Google told users today that it might’ve accidentally exported your personal videos into another Google user’s account. Whoopsie!

First flagged by Duo Security CTO Jon Oberheide, Google seems to be emailing users who plugged into the company’s native Takeout app to back up their videos, warning that a bug resulted in some of those (hopefully G-rated) videos being backed up to an unrelated user’s account.

For those who used the “download your data” service between November 21 and November 25 of last year, some videos were “incorrectly exported,” the note reads. “If you downloaded your data, it may be incomplete, and it may contain videos that are not yours.”

Source: Google’s Takeout App Leaked Videos To Unrelated Users

Researchers Find ‘Anonymized’ Data Is Even Less Anonymous Than We Thought

Dasha Metropolitansky and Kian Attari, two students at the Harvard John A. Paulson School of Engineering and Applied Sciences, recently built a tool for a class paper they’ve yet to publish that combs through vast troves of consumer datasets exposed in breaches.

“The program takes in a list of personally identifiable information, such as a list of emails or usernames, and searches across the leaks for all the credential data it can find for each person,” Attari said in a press release.

They told Motherboard their tool analyzed thousands of datasets from data scandals ranging from the 2015 hack of Experian, to the hacks and breaches that have plagued services from MyHeritage to porn websites. Despite many of these datasets containing “anonymized” data, the students say that identifying actual users wasn’t all that difficult.

“An individual leak is like a puzzle piece,” Harvard researcher Dasha Metropolitansky told Motherboard. “On its own, it isn’t particularly powerful, but when multiple leaks are brought together, they form a surprisingly clear picture of our identities. People may move on from these leaks, but hackers have long memories.”

For example, while one company might only store usernames, passwords, email addresses, and other basic account information, another company may have stored information on your browsing or location data. Independently they may not identify you, but collectively they reveal numerous intimate details even your closest friends and family may not know.

“We showed that an ‘anonymized’ dataset from one place can easily be linked to a non-anonymized dataset from somewhere else via a column that appears in both datasets,” Metropolitansky said. “So we shouldn’t assume that our personal information is safe just because a company claims to limit how much they collect and store.”
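
A toy version of the linkage Metropolitansky describes, with invented data: two datasets that look harmless alone, joined on the one column they share:

```python
# An "anonymized" browsing dataset and a breached account dataset share an
# email_hash column; joining on it re-identifies the browsing rows.
anonymized_browsing = [
    {"email_hash": "a1b2", "visited": "clinic-appointments.example"},
    {"email_hash": "c3d4", "visited": "jobsearch.example"},
]
breached_accounts = [
    {"email_hash": "a1b2", "email": "jane@example.com", "password": "hunter2"},
]

by_hash = {row["email_hash"]: row for row in breached_accounts}
for visit in anonymized_browsing:
    match = by_hash.get(visit["email_hash"])
    if match:
        print(f"{match['email']} visited {visit['visited']}")  # no longer anonymous
```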

The students told Motherboard they were “astonished” by the sheer volume of total data now available online and on the dark web. Metropolitansky and Attari said that even with privacy scandals now a weekly occurrence, the public is dramatically underestimating the impact on privacy and security these leaks, hacks, and breaches have in total.

Previous studies have shown that even within independent individual anonymized datasets, identifying users isn’t all that difficult.

In one 2019 UK study, researchers were able to develop a machine learning model capable of correctly identifying 99.98 percent of Americans in any anonymized dataset using just 15 characteristics. A different MIT study of anonymized credit card data found that users could be identified 90 percent of the time using just four relatively vague points of information.

Another German study looking at anonymized user vehicle data found that 15 minutes’ worth of data from brake pedal use could let researchers identify the right driver, out of 15 options, roughly 90 percent of the time. A 2017 Stanford and Princeton study showed that deanonymizing user social networking data was also relatively simple.

Individually these data breaches are problematic—cumulatively they’re a bit of a nightmare.

Metropolitansky and Attari also found that despite repeated warnings, the public still isn’t using unique passwords or password managers. Of the 96,000 passwords contained in one of the program’s output datasets, just 26,000 were unique.
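
The arithmetic behind that finding is a one-liner. A sketch with stand-in values rather than their dataset:

```python
# Count how many recovered passwords are actually unique, and which are
# most reused. The list here is a stand-in for the 96,000-row output.
from collections import Counter

passwords = ["123456", "hunter2", "123456", "letmein", "123456"]
counts = Counter(passwords)
print(f"{len(passwords)} passwords, {len(counts)} unique "
      f"({100 * len(counts) / len(passwords):.0f}%)")
print("Most reused:", counts.most_common(1))
```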

The problem is compounded by the fact that the United States still doesn’t have even a basic privacy law for the internet era, thanks in part to relentless lobbying from a cross-industry coalition of corporations eager to keep this profitable status quo intact. As a result, penalties for data breaches and lax security are often too pathetic to drive meaningful change.

Harvard’s researchers told Motherboard there are several restrictions a meaningful U.S. privacy law could implement to potentially mitigate the harm, including restricting unauthorized employees’ access to data, maintaining better records on data collection and retention, and decentralizing data storage (not keeping corporate and consumer data on the same server).

Until then, we’re left relying on the promises of corporations who’ve repeatedly proven their privacy promises aren’t worth all that much.

Source: Researchers Find ‘Anonymized’ Data Is Even Less Anonymous Than We Thought – VICE

Firefox now shows what telemetry data it’s collecting about you (if any)

There is now a special page in the Firefox browser where users can see what telemetry data Mozilla is collecting from their browser.

Accessible by typing about:telemetry in the browser’s URL address bar, this new section is a recent addition to Firefox.

The page shows deeply technical information about browser settings, installed add-ons, OS/hardware information, browser session details, and running processes.

The information is what you’d expect a software vendor to collect about users in order to fix bugs and keep a statistical track of its userbase.

A Firefox engineer told ZDNet the page was primarily created for selfish reasons, in order to help engineers debug Firefox test installs. However, it was allowed to ship to the stable branch also as a PR move, to put users’ minds at ease about what type of data the browser maker collects from its users.

The move is in tune with what Mozilla has been doing over the past two years, pushing for increased privacy controls in its browser and opening up about its practices, in stark contrast with what other browser makers have been doing in the past decade.

Source: Firefox now shows what telemetry data it’s collecting about you | ZDNet

Alias Privacy “Parasite” 2.0 Adds a Layer of Security to Your Home Assistant

Alias is a teachable “parasite” that gives you more control over your smart assistant’s customization and privacy. Through a simple app, you can train Alias to react to a self-chosen wake-word; once trained, Alias takes control over your home assistant by activating it for you. When you’re not using it, Alias makes sure the assistant is paralyzed and unable to listen to your conversations.

When placed on top of your home assistant, Alias uses two small speakers to interrupt the assistant’s listening with a constant low noise that feeds directly into the microphone of the assistant. When Alias recognizes your user-created wake-word (e.g., “Hey Alias” or “Jarvis” or whatever), it stops the noise and quietly activates the assistant by speaking the original wake-word (e.g., “Alexa” or “Hey Google”).

From here the assistant can be used as normal. Your wake-word is detected by a small neural network program that runs locally on Alias, so the sounds of your home are not uploaded to anyone’s cloud.
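
A sketch of Alias’s control loop as described above, with the audio I/O and the local wake-word model stubbed out; these function bodies are placeholders, not the project’s code:

```python
# Jam the assistant's microphones with noise until the custom wake-word is
# heard, then speak the real wake-word and stand aside. Stubs throughout.
import random
import time

def hear_frame() -> bytes:                      # stub: one chunk of mic audio
    return bytes(random.getrandbits(8) for _ in range(320))

def is_custom_wake_word(frame: bytes) -> bool:  # stub: local NN inference
    return random.random() < 0.01

def play_noise() -> None:                       # stub: feed noise to the assistant
    pass

def speak(phrase: str) -> None:                 # stub: play audio out loud
    print(f"[alias says] {phrase}")

while True:
    if is_custom_wake_word(hear_frame()):
        speak("Alexa")       # or "Hey Google", depending on the device
        time.sleep(8)        # hand the session over to the real assistant
    else:
        play_noise()         # keep the assistant deaf in the meantime
        time.sleep(0.02)     # pace the loop at roughly frame rate
```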

Source: Alias Privacy “Parasite” 2.0 Adds a Layer of Security to Your Home Assistant | Make:

Don’t use online DNA tests! If You Ever Used Promethease, Your DNA Data Might Be on MyHeritage – and so might your family’s

When it comes to ways to learn about your DNA, Promethease’s service seemed like one of the safest. They promised anonymity, and to delete your report after 45 days. But now that MyHeritage has bought the company, users are being notified that their DNA data is now on MyHeritage. Wait, what?

It turns out that even though Promethease deleted reports as promised after 45 days, if you created an account, the service held onto your raw data. You now have a MyHeritage account, which you can delete if you like. Check your email. That’s how I found out about mine.

What Promethease does

A while back, I downloaded my raw data from 23andme and gave it to Promethease to find out what interesting things might be in my DNA. Ever since 23andme stopped providing detailed health-related results in 2013, Promethease was a sensible alternative. They used to charge $5 (now up to $12, but that’s still a steal) and they didn’t attempt to explain your results to you. Instead, you could just see what SNPs you had—those are spots where your DNA differs from other people’s—and read on SNPedia, a sort of genetics Wikipedia, about what those SNPs might mean.

So this means Promethease had access to the raw file you gave it (which you would have gotten from 23andme, Ancestry, or another service), and to the report of SNPs that it created for you. You had the option of paying your fee, downloading your report, and never dealing with the company again; or you could create an account so that you could “regenerate” your report in the future without having to pay again. That means they stored your raw DNA file.

Source: If You Ever Used Promethease, Your DNA Data Might Be on MyHeritage Now

Because your DNA contains information about your whole family, by uploading your DNA you also upload their DNA, making it a whole lot easier to de-anonymise their DNA. It’s a bit like uploading a picture of your family to Facebook with the public settings on and then tagging them, even though the other family members in your picture aren’t on Facebook.

Social media scraper Clearview Lied About Its Crime-Solving Power In Pitches To Law Enforcement Agencies

A very questionable facial recognition tool being offered to law enforcement was recently exposed by Kashmir Hill for the New York Times. Clearview — created by a developer previously best known for an app that let people put Trump’s “hair” on their own photos — is being pitched to law enforcement agencies as a better AI solution for all their “who TF is this guy” problems.

Clearview doesn’t limit itself to law enforcement databases — ones (partially) filled with known criminals and arrestees. Instead of using known quantities, Clearview scrapes the internet for people’s photos. With the click of an app button, officers are connected to Clearview’s stash of 3 billion photos pulled from public feeds on Twitter, LinkedIn, and Facebook.

Most of the scrapees have already objected to being scraped. While this may violate terms of service, it’s not completely settled that scraping content from public feeds is actually illegal. However, peeved companies can attempt to shut off their firehoses, which is what Twitter is in the process of doing.

Clearview has made some bold statements about its effectiveness — statements that haven’t been independently confirmed. Clearview did not submit its software to NIST’s recent roundup of facial recognition AI, but it most likely would not have fared well. Even more established software performed poorly, misidentifying minorities almost 100 times more often than it did white males.

The company claims it finds matches 75% of the time. That doesn’t actually mean it finds the right person 75% of the time. It only means the software finds someone that matches submitted photos three-quarters of the time. Clearview has provided no stats on its false positive rate. That hasn’t stopped it from lying about its software and its use by law enforcement agencies.

A BuzzFeed report based on public records requests and conversations with the law enforcement agencies says the company’s sales pitches are about 75% bullshit.

Clearview AI, a facial recognition company that says it’s amassed a database of billions of photos, has a fantastic selling point it offers up to police departments nationwide: It cracked a case of alleged terrorism in a New York City subway station last August in a matter of seconds. “How a Terrorism Suspect Was Instantly Identified With Clearview,” read the subject line of a November email sent to law enforcement agencies across all 50 states through a crime alert service, suggesting its technology was integral to the arrest.

Here’s what the NYPD had to say about Clearview’s claims in its marketing materials:

“The NYPD did not use Clearview technology to identify the suspect in the August 16th rice cooker incident,” a department spokesperson told BuzzFeed News. “The NYPD identified the suspect using the Department’s facial recognition practice where a still image from a surveillance video was compared to a pool of lawfully possessed arrest photos.”

The NYPD also said it had no “institutional relationship” with Clearview, contradicting the company’s sales pitch insinuations. The NYPD was not alone in its rejection of Clearview’s claims.

Clearview also claimed to be instrumental in apprehending a suspect wanted for assault. In reality, the suspect turned himself in to the NYPD. The PD again pointed out Clearview played no role in this investigation. It also had nothing to do with solving a subway groping case (the tip that resulted in an arrest was provided to the NYPD by the Guardian Angels) or an alleged “40 cold cases solved” by the NYPD.

The company says it is “working with” over 600 police departments. But BuzzFeed’s investigation has uncovered at least two cases where “working with” simply meant submitting a lead to a PD tip line. Most likely, this is only the tip of the iceberg. As more requested documents roll in, there’s a very good chance this “working with” BS won’t just be a two-off.

Clearview’s background appears to be as shady as its public claims. In addition to his links to far-right groups (first uncovered by Kashmir Hill), Clearview’s founder pumped up the company’s reputation by deploying a bunch of sock puppets.

Ton-That set up fake LinkedIn profiles to run ads about Clearview, boasting that police officers could search over 1 billion faces in less than a second.

These are definitely not the ethics you want to see from a company pitching dubious facial recognition software to law enforcement agencies. Some agencies may perform enough due diligence to move forward with a more trustworthy company, but others will be impressed with the lower cost and the massive amount of photos in Clearview’s database and move forward with unproven software created by a company that appears to be willing to exaggerate its ability to help cops catch crooks.

If it can’t tell the truth about its contribution to law enforcement agencies, it’s probably not telling the truth about the software’s effectiveness. If cops buy into Clearview’s PR pitches, the collateral damage will be innocent people’s freedom.

Source: Facial Recognition Company Clearview Lied About Its Crime-Solving Power In Pitches To Law Enforcement Agencies | Techdirt

Clearview AI Told Cops To “Run Wild” With Its Creepy Face database, access given away without checks and sold to private firms despite claiming otherwise

Clearview AI, the facial recognition company that claims to have amassed a database of more than 3 billion photos scraped from Facebook, YouTube, and millions of other websites, is scrambling to deal with calls for bans from advocacy groups and legal threats. These troubles come after news reports exposed its questionable data practices and misleading statements about working with law enforcement.

Following stories published in the New York Times and BuzzFeed News, the Manhattan-based startup received cease-and-desist letters from Twitter and the New Jersey attorney general. It was also sued in Illinois in a case seeking class-action status.

Despite its legal woes, Clearview continues to contradict itself, according to documents obtained by BuzzFeed News that are inconsistent with what the company has told the public. In one example, the company, whose code of conduct states that law enforcement should only use its software for criminal investigations, encouraged officers to use it on their friends and family members.

“To have these technologies rolled out by police departments without civilian oversight really raises fundamental questions about democratic accountability,” Albert Fox Cahn, a fellow at New York University and the executive director of the Surveillance Technology Oversight Project, told BuzzFeed News.

In the aftermath of revelations about its technology, Clearview has tried to clean up its image by posting informational webpages, creating a blog, and trotting out surrogates for media interviews, including one in which an investor claimed Clearview was working with “over a thousand independent law enforcement agencies.” Previously, Clearview had stated that the number was around 600.

Clearview has also tried to allay concerns that its technology could be abused or used outside the scope of police investigations. In a code of conduct that the company published on its site earlier this month, it said its users should “only use the Services for law enforcement or security purposes that are authorized by their employer and conducted pursuant to their employment.”

It bolstered that idea with a blog post on Jan. 23, which stated, “While many people have advised us that a public version would be more profitable, we have rejected the idea.”

“Clearview exists to help law enforcement agencies solve the toughest cases, and our technology comes with strict guidelines and safeguards to ensure investigators use it for its intended purpose only,” the post stated.

But in a November email to a police lieutenant in Green Bay, Wisconsin, a company representative encouraged a police officer to use the software on himself and his acquaintances.

“Have you tried taking a selfie with Clearview yet?” the email read. “It’s the best way to quickly see the power of Clearview in real time. Try your friends or family. Or a celebrity like Joe Montana or George Clooney.

“Your Clearview account has unlimited searches. So feel free to run wild with your searches,” the email continued. The city of Green Bay would later agree on a $3,000 license with Clearview.

[Image: An email from Clearview to an officer in Green Bay, Wisconsin, from November 2019. Obtained by BuzzFeed News.]

Hoan Ton-That, the CEO of Clearview, claimed in an email that the company has safeguards on its product.

“As as [sic] safeguard we have an administrative tool for Law Enforcement supervisors and administrators to monitor the searches of a particular department,” Ton-That said. “An administrator can revoke access to an account at any time for any inappropriate use.”

Clearview’s previous correspondence with Green Bay police appeared to contradict what Ton-That told BuzzFeed News. In emails obtained by BuzzFeed News, the company told officers that searches “are always private and never stored in our proprietary database, which is totally separate from the photos you search.”

“It’s certainly inconsistent to, on the one hand, claim that this is a law enforcement tool and that there are safeguards — and then to, on the other hand, recommend it being used on friends and family,” Clare Garvie, a senior associate at the Georgetown Law’s Center on Privacy and Technology, told BuzzFeed News.

Clearview has also previously instructed police to act in direct violation of the company’s code of conduct, which was outlined in a blog post on Monday. The post stated that law enforcement agencies were “required” to receive permission from a supervisor before creating accounts.

But in a September email sent to police in Green Bay, the company said there was an “Invite User” button in the Clearview app that can be used to give any officer access to the software. The email encouraged police officers to invite as many people as possible, noting that Clearview would give them a demo account “immediately.”

“Feel free to refer as many officers and investigators as you want,” the email said. “No limits. The more people searching, the more successes.”

“Rewarding loyal customers”

Despite its claim last week that it “exists to help law enforcement agencies,” Clearview has also been working with entities outside of law enforcement. Ton-That told BuzzFeed News on Jan. 23 that Clearview was working with “a handful of private companies who use it for security purposes.” Marketing emails from late last year obtained by BuzzFeed News via a public records request showed the startup aided a Georgia-based bank in a case involving the cashing of fraudulent checks.

Earlier this year, a company representative was slated to speak at a Las Vegas gambling conference about casinos’ use of facial recognition as a way of “rewarding loyal customers and enforcing necessary bans.” Initially, Jessica Medeiros Garrison, whose title was stated on the conference website as Clearview’s vice president of public affairs, was listed on a panel that included the head of surveillance for Las Vegas’ Cosmopolitan hotel. Later versions of the conference schedule and Garrison’s bio removed all mentions of Clearview AI. It is unclear if she actually appeared on the panel.

A company spokesperson said Garrison is “a valued member of the Clearview team” but declined to answer questions on any possible work with casinos.

Cease and desist

Clearview has also faced legal threats from private and government entities. Last week, Twitter sent the company a cease-and-desist letter, noting that its claim to have collected photos from its site was in violation of the social network’s terms of service.

“This type of use (scraping Twitter for people’s images/likeness) is not allowed,” a company spokesperson told BuzzFeed News. The company, which asked Clearview to cease scraping and delete all data collected from Twitter, pointed BuzzFeed News to a part of its developer policy, which states it does not allow its data to be used for facial recognition.

On Friday, Clearview received a similar note from the New Jersey attorney general, who called on state law enforcement agencies to stop using the software. The letter also told Clearview to stop using clips of New Jersey Attorney General Gurbir Grewal in a promotional video on its site that claimed that a New Jersey police department used the software in a child predator sting late last year.

[…]

Clearview declined to provide a list of law enforcement agencies that were on free trials or paid contracts, stating only that there were more than 600.

“We do not have to be hidden”

That number is lower than what one of Clearview’s investors bragged about on Saturday. David Scalzo, an early investor in Clearview through his firm, Kirenaga Partners, claimed in an interview with Dilbert creator and podcaster Scott Adams that “over a thousand independent law enforcement agencies” were using the software. The investor went on to contradict the company’s public statement that it would not make its tool available to the public, stating “it is inevitable that this digital information will be out there” and “the best thing we can do is get this technology out to everyone.”

[…]

EPIC’s letter came after an Illinois resident sued Clearview in a state district court last Wednesday, alleging the software violated the Illinois Biometric Information Privacy Act by collecting the “identifiers and information” — like facial data gathered from photos accumulated from social media — without permission. Under the law, private companies are not allowed to “collect, capture, purchase,” or receive biometric information about a person without their consent.

The complaint, which also alleged that Clearview violated the constitutional rights of all Americans, asked for class-action recognition on behalf of all US citizens, as well as all Illinois residents whose biometric information was collected. When asked, Ton-That did not comment on the lawsuit.

In legal documents given to police, obtained by BuzzFeed News through a public records request, Clearview argued that it was not subject to states’ biometric data laws including those in Illinois. In a memo to the Atlanta Police Department, a lawyer for Clearview argued that because the company’s clients are public agencies, the use of the startup’s technology could not be regulated by state law, which only governs private entities.

Cahn, the executive director of the Surveillance Technology Oversight Project, said that it was “problematic” for Clearview AI to argue it wasn’t beholden to state biometric laws.

“Those laws regulate the commercial use of these sorts of tools, and the idea that somehow this isn’t a commercial application, simply because the customer is the government, makes no sense,” he said. “This is a company with private funders that will be profiting from the use of our information.”

Under scrutiny, Clearview added explanations to its site to deal with privacy concerns. It added an email link for people to ask questions about its privacy policy, saying that all requests will go to its data protection officer. When asked by BuzzFeed News, the company declined to name that official.

To process a request, however, Clearview is requesting more personal information: “Please submit name, a headshot and a photo of a government-issued ID to facilitate the processing of your request.” The company declined to say how it would use that information.

Source: Clearview AI Once Told Cops To “Run Wild” With Its Facial Recognition Tool

Brave, Google, Microsoft, Mozilla gather together to talk web privacy… and why we all shouldn’t get too much of it. Only FF and Brave will give you some.

At the USENIX Enigma conference on Tuesday, representatives of four browser makers, Brave, Google, Microsoft, and Mozilla, gathered to banter about their respective approaches to online privacy, while urging people not to ask for too much of it.

Apple, which has advanced browser privacy standards but was recently informed that its tracking defenses can be used for, er, tracking, was conspicuously absent, though it had a tongue-tied representative recruiting for privacy-oriented job positions at the show.

The browser-focused back-and-forth was mostly cordial as the software engineers representing their companies discussed notable privacy features in the various web browsers they worked on. They stressed the benefit of collaboration on web standards and the mutually beneficial effects of competition.

Eric Lawrence, program manager on the Microsoft Edge team, touched on how Microsoft has just jettisoned 25 years of Internet Explorer code to replatform Edge on the open source Chromium project, now the common foundation for 20 or so browsers.

Beside a slide that declared “Microsoft loves the Web,” Lawrence made the case for the new Edge as a modern browser with some well-designed privacy features, including Microsoft’s take on tracking protection, which blocks most trackers in its default setting and can be made more strict, at the potential cost of site compatibility.

Edge comes across as a reliable alternative to Chrome and should become more distinct as it evolves. It occupies a difficult space on the privacy continuum, in that it has some nice privacy features but not as many as Brave or Firefox. But Edge may find fans on the strength of the Microsoft brand since, as Lawrence emphasized, Microsoft is not new to privacy concerns.

That said, Microsoft is not far from Google in advocating not biting the hand that feeds the web ecosystem – advertising.

“The web doesn’t exist in a vacuum,” Lawrence warned. “People who are building sites and services have choices for what platforms they target. They can build a mobile application. They can take their content off the open web and put it into a walled garden. And so if we do things with privacy that hurt the open web, we could end up pushing people to less privacy for certain ecosystems.”

Lawrence pointed to a recent report about a popular Android app found to be leaking data. It took time to figure that out, he said, because mobile platforms are less transparent than the web, where it’s easier to scour source code and analyze network behavior.

Justin Schuh, engineering director on Google Chrome for trust and safety, reprised an argument he’s made previously that too much privacy would be harmful to ad-supported businesses.

“Most of the media that we consume is actually funded by advertising today,” Schuh explained. “It has been for a very long time. Now, I’m not here to make the argument that advertising is the best or only way to fund these things. But the truth is that print, radio, and TV – all these are funded primarily through advertising.”

And so too is the web, he insisted, arguing that advertising is what has made so much online content available to people who otherwise wouldn’t have access to it across the globe.

Schuh said in the context of the web, two trends concern him. One, he claimed, is that content is leaving because it’s easier to monetize in apps – but he didn’t cite a basis for that assertion.

The other is the rise of covert tracking, which arose, as Schuh tells it, because advertisers wanted to track people across multiple devices. So they turned to looking at IP-based fingerprinting and metadata tracking, and the joining of data sets to identify people as they shift between phone, computer, and tablet.

Covert tracking also became more popular, he said, because advertisers wanted to bypass anti-tracking mechanisms. Thus, we have privacy-invading practices like CNAME cloaking, site fingerprinting, hostname rotation, and the like because browser users sought privacy.
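
CNAME cloaking, for instance, boils down to a “first-party” subdomain that quietly resolves to a tracker’s domain, which also makes it straightforward to check for. A sketch using dnspython, with a hypothetical subdomain and an illustrative tracker list:

```python
# Flag first-party-looking hostnames whose CNAME points at a known tracker.
# Requires dnspython (pip install dnspython); suffix list is illustrative.
import dns.resolver

KNOWN_TRACKER_SUFFIXES = ("eulerian.net", "at-o.net", "2o7.net")

def cname_cloaked(hostname: str) -> bool:
    try:
        answers = dns.resolver.resolve(hostname, "CNAME")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False
    target = str(answers[0].target).rstrip(".")
    return target.endswith(KNOWN_TRACKER_SUFFIXES)

print(cname_cloaked("metrics.example-news-site.com"))  # hypothetical subdomain
```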

Schuh made the case for Google’s Privacy Sandbox proposal, a set of controversial specs being developed ostensibly to enhance privacy by reducing data available for tracking and browser fingerprinting while also giving advertisers the ability to target ads.

“Broadly speaking, advertisers don’t actually need your data,” said Schuh. “All that they really want is to monetize efficiently.”

But given the willingness of advertisers to circumvent user privacy choices, the ad industry’s consistent failure to police bad behavior, and the persistence of ad fraud and malicious ads, it’s difficult to accept that advertisers can be trusted to behave.

Tanvi Vyas, principal engineer at Mozilla, focused on the consequences of the current web ecosystem, where data is gathered to target and manipulate people. She reeled off a list of social harms arising from the status quo.

“Democracies are compromised and elections around the world are being tampered with,” she said. “Populations are manipulated and micro-targeted. Fake news is delivered to just the right audience at the right time. Discrimination flourishes, and emotional harm is inflicted on specific individuals when our algorithms go wrong.”

Thanks, Facebook, Google, and Twitter.

Worse still, Vyas said, the hostile ecosystem has a chilling effect on sophisticated users who understand online tracking and prevents them from taking action. “At Mozilla, we think this is an unacceptable cost for society to pay,” she said.

Vyas described various pro-privacy technologies implemented in Firefox, including Facebook Container, which sandboxes Facebook trackers so they can’t track users on third-party websites. She also argued for legislation to improve online privacy, though Lawrence, recalling his days working on Internet Explorer, noted that privacy rules tied to the P3P scheme two decades ago had proved ineffective.

Speaking for Brave, CISO Yan Zhu argued for a slightly different approach, though it still involves engaging with the ad industry to some extent.

“The main goal of Brave is we want to repair the privacy problems in the existing ad ecosystem in a way that no other browser has really tried, while giving publishers a revenue stream,” she said. “Basically, we have options to set micropayments to publishers, and also an option to see privacy preserving ads.”

Micropayments have been tried before but they’ve largely failed, assuming you don’t consider in-app payments to be micropayments.

Faced with a plea from an attendee for more of the browser makers to support micropayments instead of relying on ads, Schuh said, “I would absolutely love to see micropayments succeed. I know there have been a bunch of efforts at Google and various other companies to do it. It turns out that the payment industry itself is really, really complicated. And there are players in there that expect a fairly large cut. And so long as that exists, I don’t know if there’s a path forward.”

It now falls to Brave to prove otherwise.

Shortly thereafter, Gabriel DeWitt, VP of product at global ad marketplace Index Exchange, took a turn at the mic in the audience section in which he introduced himself and then lightheartedly asked other attendees not to throw anything at him.

Insisting that his company also cares about user privacy, despite opinions to the contrary, he asked the panelists how he could better collaborate with them.

It’s worth noting that next week, when Chrome 80 debuts, Google intends to introduce changes in the way it handles cookies that will affect advertisers. What’s more, the company has said it plans to phase out cookies entirely in a few years.

Schuh, from Google, elicited a laugh when he said, “I guess I can take this one, because that’s what everyone is expecting.”

We were expecting privacy. We got surveillance capitalism instead.

Source: Brave, Google, Microsoft, Mozilla gather together to talk web privacy… and why we all shouldn’t get too much of it • The Register

Ubiquiti says UniFi routers will beam performance data back to mothership automatically, without consent and with no opt-out.

Ubiquiti Networks is once again under fire for suddenly rewriting its telemetry policy after changing how its UniFi routers collect data without telling anyone.

The changes were identified in a new help document published on the US manufacturer’s website. The document differentiates between “personal data”, which includes everything that identifies a specific individual, and “other data”, which is everything else.

The document says that while users can continue to opt out of having their “personal data” collected, their “other data” – anonymous performance and crash information – will be “automatically reported”. In other words, you ain’t got no choice.

This is a shift from Ubiquiti’s last statement on data collection three months ago, which promised an opt-out button for all data collection in upcoming versions of its firmware.

A Ubiquiti representative confirmed in a forum post that the changes will automatically affect all firmware beyond 4.1.0, and that users can stop “other data” being collected by manually editing the software’s config file.

“Yes, it should be updated when we go to public release, it’s on our radar,” the rep wrote. “But I can’t guarantee it will be updated in time.”

The drama unfolded when netizens grabbed their pitchforks and headed for the company’s forums to air their grievances. “Come on UBNT,” said user leonardogyn. “PLEASE do not insist on making it hard (or impossible) to fully and easily disable sending of Analytics data. I understand it’s a great tool for you, but PLEASE consider that’s [sic] ultimately us, the users, that *must* have the option to choose to participate on it.”

The same user also pointed out that, even when the “Analytics” opt-out button is selected in the 5.13.9 beta controller software, Ubiquiti is still collecting some data. The person called the opt-out option “a misleading one, not to say a complete lie”.

Other users were similarly outraged. “This was pretty much the straw that broke the camel’s back, to be honest,” said elcid89. “I only use Unifi here at the house, but between the ongoing development instability, frenetic product range, and lack of responsiveness from staff, I’ve been considering junking it for a while now. This made the decision for me – switching over to Cisco.”

One user said that the firmware was still sending their data to two addresses even after they modified the config file.

Source: You spoke, we didn’t listen: Ubiquiti says UniFi routers will beam performance data back to mothership automatically • The Register

Facebook Enables Confusing ‘Off-Facebook Activity’ Privacy Tool, which won’t stop any tracking whatsoever

In a blog post earlier today, the famously privacy-conscious Mark Zuckerberg announced—in honor of Data Privacy Day, which is apparently a thing—the official rollout of a long-awaited Off-Facebook Activity tool that allows Facebook users to monitor and manage the connections between their Facebook profiles and their off-platform activity.

“To help shed more light on these practices that are common yet not always well understood, today we’re introducing a new way to view and control your off-Facebook activity,” Zuckerberg said in the post. “Off-Facebook Activity lets you see a summary of the apps and websites that send us information about your activity, and clear this information from your account if you want to.”

Zuck’s use of the phrases “control your off-Facebook activity” and “clear this information from your account” is kinda misleading—you’re not really controlling or clearing much of anything. By using this tool, you’re just telling Facebook to put the data it has on you into two separate buckets that are otherwise mixed together. Put another way, Facebook is offering a one-stop shop to opt out of any ties between your on-platform activity on Facebook or Instagram and the sites and services you peruse daily that have some sort of Facebook software installed.

The only thing you’re clearing is a connection Facebook made between its data and the data it gets from third parties, not the data itself.

[Image: Facebook]

As an ad-tech reporter, my bread and butter involves downloading shit that does god-knows-what with your data, which is why I shouldn’t’ve been surprised that Facebook hoovered data from more than 520 partners across the internet—either sites I’d visited or apps I’d downloaded. For Gizmodo alone, Facebook tracked “252 interactions” drawn from the handful of plug-ins our blog has installed. (To be clear, you’re going to run into these kinds of trackers e.v.e.r.y.w.h.e.r.e.—not just on our site.)

These plug-ins—or “business tools,” as Facebook describes them—are the pipeline that the company uses to ascertain your off-platform activity and tie it to your on-platform identity. As Facebook describes it (a rough code sketch follows the list):

– Jane buys a pair of shoes from an online clothing and shoe store.

– The store shares Jane’s activity with us using our business tools.

– We receive Jane’s off-Facebook activity and we save it with her Facebook account. The activity is saved as “visited the Clothes and Shoes website” and “made a purchase”.

– Jane sees an ad on Facebook for a 10% off coupon on her next shoe or clothing purchase from the online store.
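
A rough sketch of what step 2 looks like when the store reports the purchase server-side. It is modeled on Facebook’s Conversions API; treat the exact endpoint and field shapes as assumptions, and every value below (pixel ID, token, Jane’s email) as invented:

```python
# The store's backend telling Facebook about Jane's purchase. The email is
# hashed, but it is a stable hash, which is exactly what lets Facebook
# match the event to her account.
import hashlib
import json
import time
import requests

PIXEL_ID = "000000000000000"      # hypothetical
ACCESS_TOKEN = "EAAB...redacted"  # hypothetical

event = {
    "event_name": "Purchase",
    "event_time": int(time.time()),
    "user_data": {"em": [hashlib.sha256(b"jane@example.com").hexdigest()]},
    "custom_data": {"currency": "USD", "value": 59.99, "content_category": "shoes"},
}

requests.post(
    f"https://graph.facebook.com/v18.0/{PIXEL_ID}/events",
    data={"access_token": ACCESS_TOKEN, "data": json.dumps([event])},
    timeout=5,
)
```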

Here’s the catch, though: When I hit the handy “clear history” button that Facebook now provides, it won’t do jack shit to stop a given shoe store from sharing my data with Facebook—which explicitly laid this out for me when I hit that button:

Your activity history will be disconnected from your account. We’ll continue to receive your activity from the businesses and organizations you visit in the future.

Yes, it’s confusing. Baffling, really. But basically, Facebook has profiles on users and non-users alike. Those of you who have Facebook profiles can use the new tool to disconnect your Facebook data from the data the company receives from third parties. Facebook will still have that third-party-collected data and it will continue to collect more data, but that bucket of data won’t be connected to your Facebook identity.

[Screenshot: Gizmodo (Facebook)]

The data third parties collect about you technically isn’t Facebook’s responsibility to begin with. If I buy a pair of new sneakers from Steve Madden, where that purchase or browsing data goes is ultimately in Steve Madden’s metaphorical hands. And thanks to the wonders of targeted advertising, even the sneakers I’m purchasing in-store aren’t safe from being added as a data point that can be tied to the collective profile Facebook’s gathered on me as a consumer. Naturally, it behooves whoever runs marketing at Steve Madden—or anywhere, really—to plug in as many of those data points as they possibly can.

For the record, I also tried toggling my off-Facebook activity to keep it from being linked to my account, but was told that, while the company would still be getting this information from third parties, it would just be “disconnected from [my] account.”

Put another way: The way I browse any number of sites and apps will ultimately still make its way to Facebook, and still be used for targeted advertising across… those sites and apps. Only now, my on-Facebook life—the cat groups I join, the statuses I comment on, the concerts I’m “interested” in (but never actually attend)—won’t be a part of that profile.

Or put another way: Facebook just announced that it still has its tentacles in every part of your life in a way that’s impossible to untangle yourself from. Now, it just doesn’t need the social network to do it.

Source: Facebook Enables Confusing ‘Off-Facebook Activity’ Privacy Tool

Leaked AVAST Documents Expose the Secretive Market for Your Web Browsing Data: Google, MS, Pepsi, they all buy it – Really, uninstall it now!

An antivirus program used by hundreds of millions of people around the world is selling highly sensitive web browsing data to many of the world’s biggest companies, a joint investigation by Motherboard and PCMag has found. Our report relies on leaked user data, contracts, and other company documents that show the sale of this data is both highly sensitive and is in many cases supposed to remain confidential between the company selling the data and the clients purchasing it.

The documents, from a subsidiary of the antivirus giant Avast called Jumpshot, shine new light on the secretive sale and supply chain of peoples’ internet browsing histories. They show that the Avast antivirus program installed on a person’s computer collects data, and that Jumpshot repackages it into various different products that are then sold to many of the largest companies in the world. Some past, present, and potential clients include Google, Yelp, Microsoft, McKinsey, Pepsi, Sephora, Home Depot, Condé Nast, Intuit, and many others. Some clients paid millions of dollars for products that include a so-called “All Clicks Feed,” which can track user behavior, clicks, and movement across websites in highly precise detail.

Avast claims to have more than 435 million active users per month, and Jumpshot says it has data from 100 million devices. Avast collects data from users that opt-in and then provides that to Jumpshot, but multiple Avast users told Motherboard they were not aware Avast sold browsing data, raising questions about how informed that consent is.

The data obtained by Motherboard and PCMag includes Google searches, lookups of locations and GPS coordinates on Google Maps, people visiting companies’ LinkedIn pages, particular YouTube videos, and people visiting porn websites. It is possible to determine from the collected data what date and time the anonymized user visited YouPorn and PornHub, and in some cases what search term they entered into the porn site and which specific video they watched.

[…]

Until recently, Avast was collecting the browsing data of its customers who had installed the company’s browser plugin, which is designed to warn users of suspicious websites. Security researcher and Adblock Plus creator Wladimir Palant published a blog post in October showing that Avast harvests user data with that plugin. Shortly after, browser makers Mozilla, Opera, and Google removed Avast’s and subsidiary AVG’s extensions from their respective browser extension stores. Avast had previously explained this data collection and sharing in a blog and forum post in 2015. Avast has since stopped sending browsing data collected by these extensions to Jumpshot, Avast said in a statement to Motherboard and PCMag.

[…]

However, the data collection is ongoing, the source and documents indicate. Instead of harvesting information through software attached to the browser, Avast is doing it through the anti-virus software itself. Last week, months after it was spotted using its browser extensions to send data to Jumpshot, Avast began asking its existing free antivirus consumers to opt-in to data collection, according to an internal document.

“If they opt-in, that device becomes part of the Jumpshot Panel and all browser-based internet activity will be reported to Jumpshot,” an internal product handbook reads. “What URLs did these devices visit, in what order and when?” it adds, summarising what questions the product may be able to answer.

Senator Ron Wyden, who in December asked Avast why it was selling users’ browsing data, said in a statement, “It is encouraging that Avast has ended some of its most troubling practices after engaging constructively with my office. However I’m concerned that Avast has not yet committed to deleting user data that was collected and shared without the opt-in consent of its users, or to end the sale of sensitive internet browsing data. The only responsible course of action is to be fully transparent with customers going forward, and to purge data that was collected under suspect conditions in the past.”

[…]

On its website and in press releases, Jumpshot names Pepsi, and consulting giants Bain & Company and McKinsey as clients.

Besides Expedia, Intuit, and L’Oréal, other companies not already mentioned in public Jumpshot announcements include coffee company Keurig, YouTube promotion service vidIQ, and consumer insights firm Hitwise. None of those companies responded to a request for comment.

On its website, Jumpshot lists some previous case studies for using its browsing data. Magazine and digital media giant Condé Nast, for example, used Jumpshot’s products to see whether the media company’s advertisements resulted in more purchases on Amazon and elsewhere. Condé Nast did not respond to a request for comment.

ALL THE CLICKS

Jumpshot sells a variety of different products based on data collected by Avast’s antivirus software installed on users’ computers. Clients in the institutional finance sector often buy a feed of the top 10,000 domains that Avast users are visiting to try and spot trends, the product handbook reads.

Another Jumpshot product is the company’s so-called “All Clicks Feed.” It allows a client to buy information on all of the clicks Jumpshot has seen on a particular domain, like Amazon.com, Walmart.com, Target.com, BestBuy.com, or Ebay.com.

In a tweet sent last month intended to entice new clients, Jumpshot noted that it collects “Every search. Every click. Every buy. On every site” [emphasis Jumpshot’s].

[…]

One company that purchased the All Clicks Feed is New York-based marketing firm Omnicom Media Group, according to a copy of its contract with Jumpshot. Omnicom paid Jumpshot $2,075,000 for access to data in 2019, the contract shows. The contract also included another product, called “Insight Feed,” for 20 different domains. The fee for data in 2020 and then 2021 is listed as $2,225,000 and $2,275,000 respectively, the document adds.

[…]

The internal product handbook says that device IDs do not change for each user, “unless a user completely uninstalls and reinstalls the security software.”
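
To see why that matters, here is a minimal sketch (the column names and values are invented, since the real feed’s schema isn’t public) of how any clickstream keyed on a persistent device ID collapses into a per-device browsing history:

```python
# Hypothetical illustration only: the schema and values are invented,
# not the actual Jumpshot feed layout.
import pandas as pd

clicks = pd.DataFrame([
    ("abc123", "2019-12-01 09:14:02", "https://www.amazon.com/dp/..."),
    ("abc123", "2019-12-01 09:15:40", "https://www.google.com/maps/place/..."),
    ("abc123", "2019-12-02 22:03:11", "https://www.youporn.com/watch/..."),
    ("def456", "2019-12-01 11:00:00", "https://www.walmart.com/ip/..."),
], columns=["device_id", "timestamp", "url"])

# Because the device ID survives across sessions and days, every row for
# a user collapses into one longitudinal history -- the property that
# makes re-identification feasible.
history = clicks.sort_values("timestamp").groupby("device_id")["url"].apply(list)
print(history.loc["abc123"])
```

A single distinctive URL in that history, such as an order-confirmation page or a Maps lookup near home, can be enough to put a name on the whole record.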

Source: Leaked Documents Expose the Secretive Market for Your Web Browsing Data – VICE

Ring Doorbell App Gives Away Your Data to Third Parties, Without Your Knowledge or Consent

An investigation by EFF of the Ring doorbell app for Android found it to be packed with third-party trackers sending out a plethora of customers’ personally identifiable information (PII). Four main analytics and marketing companies were discovered to be receiving information such as the names, private IP addresses, mobile network carriers, persistent identifiers, and sensor data on the devices of paying customers.

The danger in sending even small bits of information is that analytics and tracking companies are able to combine these bits together to form a unique picture of the user’s device. This cohesive whole represents a fingerprint that follows the user as they interact with other apps and use their device, in essence providing trackers the ability to spy on what a user is doing in their digital lives and when they are doing it. All this takes place without meaningful user notification or consent and, in most cases, with no way to mitigate the damage done. Even when this information is not misused and is employed precisely for its stated purpose (in most cases marketing), it can lead to a whole host of social ills.
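
As a rough sketch of the general technique (not any of these companies’ actual code), here is how a handful of individually innocuous attributes hash into an identifier that stays stable across apps and survives cookie or advertising-ID resets:

```python
# Sketch of device fingerprinting: weak signals (model, resolution,
# timezone, language) combine into a stable, unique-enough identifier.
import hashlib

def fingerprint(attrs: dict) -> str:
    # Deterministic serialization: the same device always hashes the same
    canonical = "|".join(f"{key}={attrs[key]}" for key in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {
    "model": "Pixel 3",            # values here are made up for illustration
    "screen": "1080x2160",
    "dpi": 443,
    "timezone": "America/New_York",
    "language": "en-US",
    "os": "Android 10",
}
print(fingerprint(device))  # same inputs -> same ID, in every app that collects them
```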

[…]

Our testing, using Ring for Android version 3.21.1, revealed PII delivery to branch.io, mixpanel.com, appsflyer.com and facebook.com. Facebook, via its Graph API, is alerted when the app is opened and upon device actions such as app deactivation after screen lock due to inactivity. Information delivered to Facebook (even if you don’t have a Facebook account) includes time zone, device model, language preferences, screen resolution, and a unique identifier (anon_id), which persists even when you reset the OS-level advertiser ID.

Branch, which describes itself as a “deep linking” platform, receives a number of unique identifiers (device_fingerprint_id, hardware_id, identity_id) as well as your device’s local IP address, model, screen resolution, and DPI.

AppsFlyer, a big data company focused on the mobile platform, is given a wide array of information upon app launch as well as certain user actions, such as interacting with the “Neighbors” section of the app. This information includes your mobile carrier, when Ring was installed and first launched, a number of unique identifiers, the app you installed from, and whether AppsFlyer tracking came preinstalled on the device. This last bit of information is presumably to determine whether AppsFlyer tracking was included as bloatware on a low-end Android device. Manufacturers often offset the costs of device production by selling consumer data, a practice that disproportionately affects low-income earners and was the subject of a recent petition to Google initiated by Privacy International and co-signed by EFF.

Most alarmingly, AppsFlyer also receives a list of the sensors installed on your device (on our test device, this included the magnetometer, gyroscope, and accelerometer) and their current calibration settings.

Ring gives MixPanel the most information by far. Users’ full names, email addresses, device information such as OS version and model, whether Bluetooth is enabled, and app settings such as the number of locations where a user has Ring devices installed are all collected and reported to MixPanel. MixPanel is briefly mentioned in Ring’s list of third-party services, but the extent of its data collection is not. None of the other trackers listed in this post are mentioned at all on that page.

Ring also sends information to Crashlytics, the Google-owned crash logging service. The exact extent of data sharing with this service is yet to be determined.
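
Findings like these are usually reproduced by routing the phone’s traffic through an intercepting proxy and watching which hosts the app calls. A minimal mitmproxy addon along those lines might look like this; the domain list comes from EFF’s findings above, but the script itself is a generic sketch, not EFF’s actual tooling:

```python
# tracker_watch.py -- run with: mitmproxy -s tracker_watch.py
# Generic sketch for spotting app-to-tracker traffic; not EFF's tooling.
from mitmproxy import http

TRACKER_DOMAINS = ("branch.io", "mixpanel.com", "appsflyer.com", "facebook.com")

class TrackerWatch:
    def request(self, flow: http.HTTPFlow) -> None:
        host = flow.request.pretty_host
        if any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS):
            # Log endpoint and payload size; inspect flow.request.text in
            # the mitmproxy UI to see exactly which identifiers are sent.
            print(f"[tracker] {host} {flow.request.path} "
                  f"({len(flow.request.content or b'')} bytes)")

addons = [TrackerWatch()]
```

The device has to trust the proxy’s CA certificate, and apps that pin certificates need extra work, which is one reason this kind of auditing stays out of most users’ reach.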

Source: Ring Doorbell App Packed with Third-Party Trackers | Electronic Frontier Foundation

Class-action lawsuit filed against creepy Clearview AI startup which scraped everyone’s social media profiles

A lawsuit — seeking class-action status — was filed this week in Illinois against Clearview AI, a New York-based startup that has scraped social media networks for people’s photos and created one of the biggest facial recognition databases in the world.

The secretive startup was exposed last week in an explosive New York Times report which revealed how Clearview was selling access to “faceprints” and facial recognition software to law enforcement agencies across the US. The startup claimed it could identify a person based on a single photo, revealing their real name, general location, and other identifiers.

The report sparked outrage among US citizens whose photos were collected and added to the Clearview AI database without their consent. The Times reported that the company collected more than three billion photos from sites such as Facebook, Twitter, YouTube, Venmo, and others.

This week, the company was hit with the first lawsuit in the aftermath of the New York Times exposé.

Lawsuit claims Clearview AI broke BIPA

According to a copy of the complaint obtained by ZDNet, plaintiffs claim Clearview AI broke Illinois privacy laws.

Namely, the New York startup broke the Illinois Biometric Information Privacy Act (BIPA), a law that safeguards state residents from having their biometrics data used without consent.

According to BIPA, companies must obtain explicit consent from Illinois residents before collecting or using any of their biometric information — such as the facial scans Clearview collected from people’s social media photos.

“Plaintiff and the Illinois Class retain a significant interest in ensuring that their biometric identifiers and information, which remain in Defendant Clearview’s possession, are protected from hacks and further unlawful sales and use,” the lawsuit reads.

“Plaintiff therefore seeks to remedy the harms Clearview and the individually-named defendants have already caused, to prevent further damage, and to eliminate the risks to citizens in Illinois and throughout the United States created by Clearview’s business misuse of millions of citizen’s biometric identifiers and information.”

The plaintiffs are asking the court for an injunction against Clearview to stop it from selling the biometric data of Illinois residents, a court order forcing the company to delete any Illinois residents’ data, and punitive damages, to be decided by the court at a later date.

“Defendants’ violation of BIPA was intentional or reckless or, pleaded in the alternative, negligent,” the complaint reads.

Clearview AI did not return a request for comment.

Earlier this week, US lawmakers also sought answers from the company, while Twitter sent a cease-and-desist letter demanding the startup stop collecting user photos from their site and delete any existing images.

Source: Class-action lawsuit filed against controversial Clearview AI startup | ZDNet

London Police Will Start Using Live Facial Recognition Tech Now, Big Brother becomes a computer watching you

The dystopian nightmare begins. Today, London’s Metropolitan Police Service announced it will begin deploying Live Facial Recognition (LFR) tech across the capital in the hopes of locating and arresting wanted people.

[…]

The way the system is supposed to work, according to the Metropolitan Police, is the LFR cameras will first be installed in areas where ‘intelligence’ suggests the agency is most likely to locate ‘serious offenders.’ Each deployment will supposedly have a ‘bespoke’ watch list comprising images of wanted suspects for serious and violent offenses. The London police also note the cameras will focus on small, targeted areas to scan folks passing by. According to BBC News, previous trials had taken place in areas such as Stratford’s Westfield shopping mall and the West End area of London. It seems likely the agency is also anticipating some unease, as the cameras will be ‘clearly signposted’ and officers are slated to hand out informational leaflets.

The agency’s statement also emphasizes that the facial recognition tech is not meant to replace policing—just ‘prompt’ officers by suggesting a person in the area may be a fishy individual…based solely on their face. “It is always the decision of an officer whether or not to engage with someone,” the statement reads. On Twitter, the agency also noted in a short video that images that don’t trigger alerts will be immediately deleted.

As with any police-related, Minority Report-esque tech, accuracy is a major concern. While the Metropolitan Police Service claims that 70 percent of suspects were successfully identified and that only one in 1,000 people triggered a false alert, not everyone agrees the LFR tech is rock-solid. An independent review from July 2019 found that in six of the trial deployments, only eight of 42 matches were correct, an abysmal 19 percent accuracy rate. Other problems found by the review included inaccurate watch list information (e.g., people were stopped over cases that had already been resolved), and the criteria for including people on the watch list weren’t clearly defined.

Privacy groups aren’t particularly happy with the development. Big Brother Watch, a privacy campaign group that’s been particularly vocal against facial recognition tech, took to Twitter, telling the Metropolitan Police Service they’d “see them in court.”

“This decision represents an enormous expansion of the surveillance state and a serious threat to civil liberties in the UK,” said Silkie Carlo, Big Brother Watch’s director, in a statement. “This is a breath-taking assault on our rights and we will challenge it, including by urgently considering next steps in our ongoing legal claim against the Met and the Home Secretary.”

Meanwhile, another privacy group, Liberty, has also voiced resistance to the measure. “Rejected by democracies. Embraced by oppressive regimes. Rolling out facial recognition surveillance tech is a dangerous and sinister step in giving the State unprecedented power to track and monitor any one of us. No thanks,” the group tweeted.

Source: London Police Will Start Using Live Facial Recognition Tech

Clearview has scraped all the major social media sites, illegally and in violation of their terms of service, and has all your pictures in a massive database (who knows how secure that is?) hooked up to a face recognition AI. It is selling access to cops, and who knows who else.

What if a stranger could snap your picture on the sidewalk then use an app to quickly discover your name, address and other details? A startup called Clearview AI has made that possible, and its app is currently being used by hundreds of law enforcement agencies in the US, including the FBI, says a Saturday report in The New York Times.

The app, says the Times, works by comparing a photo to a database of more than 3 billion pictures that Clearview says it’s scraped off Facebook, Venmo, YouTube and other sites. It then serves up matches, along with links to the sites where those database photos originally appeared. A name might easily be unearthed, and from there other info could be dug up online.

The size of the Clearview database dwarfs others in use by law enforcement. The FBI’s own database, which taps passport and driver’s license photos, is one of the largest, with over 641 million images of US citizens.

The Clearview app isn’t currently available to the public, but the Times says police officers and Clearview investors think it will be in the future.

The startup said in a statement Tuesday that its “technology is intended only for use by law enforcement and security personnel. It is not intended for use by the general public.”

Source: Clearview app lets strangers find your name, info with snap of a photo, report says – CNET

Using the system involves uploading photos to Clearview AI’s servers, and it’s unclear how secure these are. Although Clearview AI says its customer-support employees will not look at the photos that are uploaded, it appeared to be aware that Kashmir Hill (the Times journalist who reported the piece) was having police search for her face as part of her reporting:

While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.

The Times reports that the system appears to have gone viral with police departments, with over 600 already signed up. Although there’s been no independent verification of its accuracy, Hill says the system was able to identify photos of her even when she covered the lower half of her face, and that it managed to find photographs of her that she’d never seen before.

One expert quoted by The Times said that the amount of money involved with these systems means they need to be banned before their abuse becomes more widespread. “We’ve relied on industry efforts to self-police and not embrace such a risky technology, but now those dams are breaking because there is so much money on the table,” said Woodrow Hartzog, a professor of law and computer science at Northeastern University. “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”

Source: The Verge

So Clearview has you, even if it violates TOS. How to stop the next guy from getting you in FB – maybe.

It should come as little surprise that any content you offer to the web for public consumption has the potential to be scraped and misused by anyone clever enough to do it. And while that doesn’t make this weekend’s report from The New York Times any less damning, it’s a great reminder about how important it is to really go through the settings for your various social networks and limit how your content is, or can be, accessed by anyone.

I won’t get too deep into the Times’ report; it’s worth reading on its own, since it involves a company (Clearview AI) scraping more than three billion images from millions of websites, including Facebook, and creating a facial-recognition app that does a pretty solid job of identifying people using images from this massive database.

Even though Clearview’s scraping techniques technically violate the terms of service on a number of websites, that hasn’t stopped the company from acquiring images en masse. And it keeps whatever it finds, which means that turning all your online data private isn’t going to help if Clearview has already scanned and grabbed your photos.

Still, something is better than nothing. On Facebook, likely the largest stash of your images, you’re going to want to visit Settings > Privacy and look for the option described: “Do you want search engines outside of Facebook to link to your profile?”

Turn that off, and Clearview won’t be able to grab your images. That’s not the setting I would have expected to use, I confess, which makes me want to go through all of my social networks and rethink how the information I share with them flows out to the greater web.

Lock down your Facebook even more with these settings

Since we’re already here, it’s worth spending a few minutes wading through Facebook’s settings and making sure as much of your content is set to friends-only as possible. That includes changing “Who can see your future posts” to “friends,” using the “Limit Past Posts” option to change everything you’ve previously posted to friends-only, and making sure that only you can see your friends list, to prevent any potential scraping and linking that some third party might attempt. Similarly, make sure only your friends (or friends of friends) can look you up via your email address or phone number. (You never know!)

You should then visit the Timeline and Tagging settings page and make a few more changes. That includes only allowing friends to see what other people post on your timeline, as well as posts you’re tagged in. And because I’m a bit sensitive about all the crap people tag me in on Facebook, I’d turn on the “Review” options, too. That won’t keep your account from being scraped, but it’s a great way to exert more control over your timeline.


Finally, even though it also doesn’t prevent companies from scraping your account, pull up the Public posts section of Facebook’s settings page and limit who is allowed to follow you (if you desire). You should also restrict who can comment on or like your public information, like posts or other details about your life you share openly on the service.


Once I fix Facebook, then what?

Here’s the annoying part. Were I you, I’d take an afternoon or evening and write out all the different places I typically share snippets of my life online. For most people, that’s probably a handful of social services: Facebook, Instagram, Twitter, YouTube, Flickr, et cetera.

Once you’ve created your list, I’d dig deep into the settings of each service and see what options you have, if any, for limiting the availability of your content. This might run contrary to how you use the service: if you’re trying to gain lots of Instagram followers, for example, locking your profile to “private” and requiring potential followers to request access might slow your attempts to become the next big Insta-star. However, it should also prevent anyone with a crafty scraping utility from mass-downloading your photos (and associating them with you, either through some fancy facial-recognition tech or by linking them to your account).

Source: Change These Facebook Settings to Protect Your Photos From Facial Recognition Software

BlackVue dashcam lets anyone see where you are in real time and where you have been in the past

An app that is supposed to be a fun activity for dashcam users to broadcast their camera feeds and drives is actually allowing people to scrape and store the real-time location of drivers across the world.

BlackVue is a dashcam company with its own social network. With a small, internet-connected dashcam installed inside their vehicle, BlackVue users can receive alerts when their camera detects an unusual event such as someone colliding with their parked car. Customers can also allow others to tune into their camera’s feed, letting others “vicariously experience the excitement and pleasure of driving all over the world,” a message displayed inside the app reads.

Users are invited to upload footage of their BlackVue camera spotting people crashing into their cars or other mishaps with the #CaughtOnBlackVue hashtag. It’s kind of like Amazon’s Ring cameras, but for cars. BlackVue exhibited at CES earlier this month, and was previously featured on Innovations with Ed Begley Jr. on the History Channel.

But what BlackVue’s app doesn’t make clear is that it is possible to pull and store users’ GPS locations in real-time over days or even weeks. Motherboard was able to track the movements of some of BlackVue’s customers in the United States.

The news highlights privacy issues that some BlackVue customers or other dashcam users may not be aware of, and more generally the potential dangers of adding an internet- and GPS-enabled device to your vehicle. It also shows how developers may have one use case for an app while people discover others: although BlackVue wanted to create an entertaining app where users could tap into each other’s feeds, it may not have realized that it would be trivially easy to track its customers’ movements in granular detail, at scale, and over time.

BlackVue acts as another example of how surveillance products that are nominally intended to protect a user can be designed in such a way that the user ends up being spied on, too.

“I don’t think people understand the risk,” Lee Heath, an information security professional and BlackVue user told Motherboard. “I knew about some of the cloud features which I wanted. You can have it automatically connect and upload when events happen. But I had no idea about the sharing” before receiving the device as a gift, he added.

Ordinarily, BlackVue lets anyone create an account and then view a map of cameras that are broadcasting their location and live feed. This broadcasting is not enabled by default, and users have to select the option to do so when setting up or configuring their own camera. Motherboard tuned into live feeds from users in Hong Kong, China, Russia, the U.K., Germany, and elsewhere. BlackVue spokesperson Jeremie Sinic told Motherboard in an email that the users on the map represent only a tiny fraction of BlackVue’s overall customers.

But the actual GPS data that drives the map is available and publicly accessible.

A screenshot of the location data of one BlackVue user that Motherboard tracked throughout New York. Motherboard has heavily obfuscated the data to protect the individual’s privacy. Image: Motherboard

By reverse engineering the iOS version of the BlackVue app, Motherboard was able to write scripts that pull the GPS locations of BlackVue users over a week-long period and store the coordinates and other information, like the user’s unique identifier. One script could collect the location data of every BlackVue user who had mapping enabled on the eastern half of the United States every two minutes. Motherboard collected data on dozens of customers.
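
Motherboard did not publish its scripts or the API endpoint involved, so the sketch below is purely illustrative: the URL, parameters, and response fields are all invented. But the general shape, polling a map endpoint on a timer and appending rows keyed by user ID, is all a collector of this kind needs:

```python
# Illustrative only: endpoint, parameters, and field names are HYPOTHETICAL.
# BlackVue's real API and Motherboard's scripts were not published.
import csv, time
import requests

MAP_ENDPOINT = "https://api.example-dashcam-cloud.invalid/map/vehicles"
BBOX_EAST_US = {"lat_min": 24.0, "lat_max": 50.0,
                "lon_min": -90.0, "lon_max": -66.0}

with open("tracks.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        # One sweep of the bounding box every two minutes, as described above
        resp = requests.get(MAP_ENDPOINT, params=BBOX_EAST_US, timeout=30)
        for v in resp.json().get("vehicles", []):  # hypothetical schema
            writer.writerow([time.time(), v["user_id"], v["lat"], v["lon"]])
        f.flush()
        time.sleep(120)
```

Each row on its own is just a dot on a map; accumulated over days, the dots become commutes, workplaces, and overnight parking spots.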

With that data, we were able to build a picture of several BlackVue users’ daily routines: one drove around Manhattan during the day, perhaps as a rideshare driver, before then leaving for Queens in the evening. Another BlackVue user regularly drove around Brooklyn, before parking on a specific block in Queens overnight. The user did this for several different nights, suggesting this may be where the owner lives or stores their vehicle. A third showed someone driving a truck all over South Carolina.

Some customers may use BlackVue as part of a fleet of vehicles; an employer wanting to keep tabs on their delivery trucks as they drive around, for instance. But BlackVue also markets its products to ordinary consumers who want to protect their cars.

A screenshot of Motherboard accessing someone’s public live feed as the user is driving in public away from their apparent home. Motherboard has redacted the user information to protect individual privacy. Image: Motherboard

BlackVue’s Sinic said that collecting GPS coordinates of multiple users over an extended period of time is not supposed to be possible.

“Our developers have updated the security measures following your report from yesterday that I forwarded,” Sinic said. After this, several of Motherboard’s web requests that previously provided user data stopped working.

In 2018 the company did make some privacy-related changes to its app, meaning users were not broadcasting their camera feeds by default.

“I think BlackVue has decent ideas as far as leaving off by default but allows people to put themselves at risk without understanding,” Heath, the BlackVue user, said.

Motherboard has deleted all of the data collected to preserve individuals’ privacy.

Source: This App Lets Us See Everywhere People Drive – VICE

Skype and Cortana audio listened in on by workers in China with ‘no security measures’

A Microsoft programme to transcribe and vet audio from Skype and Cortana, its voice assistant, ran for years with “no security measures”, according to a former contractor who says he reviewed thousands of potentially sensitive recordings on his personal laptop from his home in Beijing over the two years he worked for the company.

The recordings, from both deliberate and accidental activations of the voice assistant as well as from some Skype phone calls, were simply accessed by Microsoft workers through a web app running in Google’s Chrome browser, on their personal laptops, over the Chinese internet, according to the contractor.

Workers had no cybersecurity help to protect the data from criminal or state interference, and were even instructed to do the work using new Microsoft accounts all with the same password, for ease of management, the former contractor said. Employee vetting was practically nonexistent, he added.

“There were no security measures, I don’t even remember them doing proper KYC [know your customer] on me. I think they just took my Chinese bank account details,” he told the Guardian.

While the grader began by working in an office, he said the contractor that employed him “after a while allowed me to do it from home in Beijing. I judged British English (because I’m British), so I listened to people who had their Microsoft device set to British English, and I had access to all of this from my home laptop with a simple username and password login.” Both username and password were emailed to new contractors in plaintext, he said, with the former following a simple schema and the latter being the same for every employee who joined in any given year.

“They just give me a login over email and I will then have access to Cortana recordings. I could then hypothetically share this login with anyone,” the contractor said. “I heard all kinds of unusual conversations, including what could have been domestic violence. It sounds a bit crazy now, after educating myself on computer security, that they gave me the URL, a username and password sent over email.”

As well as the risks of a rogue employee saving user data themselves or accessing voice recordings on a compromised laptop, Microsoft’s decision to outsource some of the work vetting English recordings to companies based in Beijing raises the additional prospect of the Chinese state gaining access to recordings. “Living in China, working in China, you’re already compromised with nearly everything,” the contractor said. “I never really thought about it.”

Source: Skype audio graded by workers in China with ‘no security measures’ | Technology | The Guardian

CheckPeople: why is a 22GB database containing 56 million US folks’ aggregated personal details sitting on the open internet using a Chinese IP address?

A database containing the personal details of 56.25m US residents – from names and home addresses to phone numbers and ages – has been found on the public internet, served from a computer with a Chinese IP address, bizarrely enough.

The information silo appears to belong to Florida-based CheckPeople.com, which is a typical people-finder website: for a fee, you can enter someone’s name, and it will look up their current and past addresses, phone numbers, email addresses, names of relatives, and even criminal records in some cases, all presumably gathered from public records.

However, all of this information is not only sitting in one place for spammers, miscreants, and other netizens to download in bulk, but it’s being served from an IP address associated with Alibaba’s web hosting wing in Hangzhou, east China, for reasons unknown. It’s a perfect illustration that this sort of personal information is not only in circulation, but can also end up on infrastructure far outside its subjects’ reach or jurisdiction.

It just goes to show how haphazardly people’s privacy is treated these days.

A white-hat hacker operating under the handle Lynx discovered the trove online, and tipped off The Register. He told us he found the 22GB database exposed on the internet, including metadata that links the collection to CheckPeople.com. We have withheld further details of the security blunder for privacy protection reasons.

The repository’s contents are likely scraped from public records, though together they provide rather detailed profiles of tens of millions of folks in America. Basically, CheckPeople.com has done the hard work of aggregating public personal records, and this exposed NoSQL database makes that info even easier to crawl and process.
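
The Register withheld the identifying details and doesn’t name the engine beyond “NoSQL,” so assume a MongoDB-style service purely for illustration. The whole problem is that an unauthenticated instance will answer enumeration queries from anyone on the internet:

```python
# Sketch of how researchers verify an exposed database. The engine choice
# (MongoDB) is an assumption, and the host below is a placeholder
# documentation address, not the real server.
from pymongo import MongoClient
from pymongo.errors import PyMongoError

HOST = "203.0.113.10"  # placeholder, not the actual exposed system

try:
    client = MongoClient(HOST, 27017, serverSelectionTimeoutMS=5000)
    # If these calls succeed with no credentials, anyone on the internet
    # can enumerate the databases and dump their contents.
    for db_name in client.list_database_names():
        db = client[db_name]
        for coll_name in db.list_collection_names():
            print(db_name, coll_name, db[coll_name].estimated_document_count())
except PyMongoError as exc:
    print("Not openly accessible:", exc)
```

That three-line connection attempt is roughly the entire barrier between “private aggregated records” and “bulk download,” which is why scanners find troves like this so quickly.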

Source: Why is a 22GB database containing 56 million US folks’ personal details sitting on the open internet using a Chinese IP address? Seriously, why? • The Register