Wacom tablet drivers phone home with names, times of every app opened on your computer

Wacom’s official tablet drivers leak to the manufacturer the names of every application opened, and when, on the computers they are connected to.

Software engineer Robert Heaton made this discovery after noticing that his drawing tablet’s fine print included a privacy policy giving Wacom permission to, effectively, snoop on him.

Looking deeper, he found that the tablet’s driver logged each app he opened on his Apple Mac and transmitted the data to Google Analytics. To be clear, we’re talking about Wacom’s macOS drivers here: the open-source Linux ones aren’t affected, though it would seem the Windows counterparts are.

[…]

“Wacom’s request made me pause. Why does a device that is essentially a mouse need a privacy policy?”

Source: Sketchy behavior? Wacom tablet drivers phone home with names, times of every app opened on your computer • The Register

Google’s Takeout App Leaked Videos To Unrelated Users

In a new privacy-related fuckup, Google told users today that it might’ve accidentally exported your personal videos to another Google user’s account. Whoopsie!

First flagged by Duo Security CTO Jon Oberheide, Google seems to be emailing users who plugged into the company’s native Takeout app to back up their videos, warning that a bug resulted in some of those (hopefully G-rated) videos being backed up to an unrelated user’s account.

For those who used the “download your data” service between November 21 and November 25 of last year, some videos were “incorrectly exported,” the note reads. “If you downloaded your data, it may be incomplete, and it may contain videos that are not yours.”

Source: Google’s Takeout App Leaked Videos To Unrelated Users

Researchers Find ‘Anonymized’ Data Is Even Less Anonymous Than We Thought

Dasha Metropolitansky and Kian Attari, two students at the Harvard John A. Paulson School of Engineering and Applied Sciences, recently built a tool for a class paper (as yet unpublished) that combs through vast troves of consumer datasets exposed in breaches.

“The program takes in a list of personally identifiable information, such as a list of emails or usernames, and searches across the leaks for all the credential data it can find for each person,” Attari said in a press release.

They told Motherboard their tool analyzed thousands of datasets from data scandals ranging from the 2015 hack of Experian to the hacks and breaches that have plagued services from MyHeritage to porn websites. Despite many of these datasets containing “anonymized” data, the students say that identifying actual users wasn’t all that difficult.
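The core of such a tool is easy to sketch. Here is a minimal, hypothetical illustration in Python (the file names and record layout are invented for the example; the students’ actual code is unpublished):

```python
import csv
from collections import defaultdict

# Hypothetical leak dumps, each a CSV with an "email" column plus
# whatever other fields that particular breach happened to expose.
LEAK_FILES = ["experian_2015.csv", "myheritage_2018.csv", "forum_dump.csv"]

def build_index(files):
    """Map each email address to every record found for it, across all leaks."""
    index = defaultdict(list)
    for path in files:
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                email = row.get("email", "").strip().lower()
                if email:
                    index[email].append((path, row))
    return index

def lookup(index, targets):
    """Return every leaked record found for a list of target emails."""
    return {t: index.get(t.strip().lower(), []) for t in targets}

index = build_index(LEAK_FILES)
print(lookup(index, ["alice@example.com"]))
```

Each hit on its own may be trivial; the point is that the index aggregates them per person, across every breach at once.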

“An individual leak is like a puzzle piece,” Harvard researcher Dasha Metropolitansky told Motherboard. “On its own, it isn’t particularly powerful, but when multiple leaks are brought together, they form a surprisingly clear picture of our identities. People may move on from these leaks, but hackers have long memories.”

For example, while one company might only store usernames, passwords, email addresses, and other basic account information, another might have stored your browsing history or location data. Independently they may not identify you, but collectively they reveal numerous intimate details even your closest friends and family may not know.

“We showed that an ‘anonymized’ dataset from one place can easily be linked to a non-anonymized dataset from somewhere else via a column that appears in both datasets,” Metropolitansky said. “So we shouldn’t assume that our personal information is safe just because a company claims to limit how much they collect and store.”
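That linking step is nothing more than a database join. A toy pandas example (column names invented for illustration): an “anonymized” location dataset and a plain account dump that share a device-ID column re-identify each other the moment they are joined:

```python
import pandas as pd

# "Anonymized": no names, but a persistent device identifier remains.
locations = pd.DataFrame({
    "device_id": ["d1", "d2"],
    "lat": [52.37, 48.86],
    "lon": [4.90, 2.35],
})

# Non-anonymized dump from a different breach, same identifier column.
accounts = pd.DataFrame({
    "device_id": ["d1", "d2"],
    "email": ["alice@example.com", "bob@example.com"],
})

# One join, and the "anonymous" coordinates have names attached.
print(locations.merge(accounts, on="device_id"))
```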

The students told Motherboard they were “astonished” by the sheer volume of total data now available online and on the dark web. Metropolitansky and Attari said that even with privacy scandals now a weekly occurrence, the public is dramatically underestimating the impact on privacy and security these leaks, hacks, and breaches have in total.

Previous studies have shown that even within a single anonymized dataset, identifying users isn’t all that difficult.

In one 2019 UK study, researchers were able to develop a machine learning model capable of correctly identifying 99.98 percent of Americans in any anonymized dataset using just 15 characteristics. A different MIT study of anonymized credit card data found that users could be identified 90 percent of the time using just four relatively vague points of information.

Another German study looking at anonymized user vehicle data found that 15 minutes’ worth of data from brake pedal use could let them identify the right driver, out of 15 options, roughly 90 percent of the time. Another 2017 Stanford and Princeton study showed that deanonymizing user social networking data was also relatively simple.

Individually these data breaches are problematic—cumulatively they’re a bit of a nightmare.

Metropolitansky and Attari also found that despite repeated warnings, the public still isn’t using unique passwords or password managers. Of the 96,000 passwords contained in one of the program’s output datasets, just 26,000 were unique.
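That figure takes one line to verify on such a dump. A trivial sketch with stand-in data:

```python
# Stand-in data; the real dump had 96,000 entries, only 26,000 distinct.
passwords = ["hunter2", "password1", "hunter2", "letmein"]

distinct = set(passwords)
print(f"{len(distinct)} distinct passwords out of {len(passwords)}")
```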

The problem is compounded by the fact that the United States still doesn’t have even a basic privacy law for the internet era, thanks in part to relentless lobbying from a cross-industry coalition of corporations eager to keep this profitable status quo intact. As a result, penalties for data breaches and lax security are often too pathetic to drive meaningful change.

Harvard’s researchers told Motherboard there are several restrictions a meaningful U.S. privacy law could implement to potentially mitigate the harm, including restricting unauthorized employees’ access to data, maintaining better records on data collection and retention, and decentralizing data storage (not keeping corporate and consumer data on the same server).

Until then, we’re left relying on the promises of corporations who’ve repeatedly proven their privacy promises aren’t worth all that much.

Source: Researchers Find ‘Anonymized’ Data Is Even Less Anonymous Than We Thought – VICE

Firefox now shows what telemetry data it’s collecting about you (if any)

There is now a special page in the Firefox browser where users can see what telemetry data Mozilla is collecting from their browser.

Accessible by typing about:telemetry in the browser’s URL address bar, this new section is a recent addition to Firefox.

The page shows deeply technical information about browser settings, installed add-ons, OS/hardware information, browser session details, and running processes.

The information is what you’d expect a software vendor to collect about users in order to fix bugs and keep a statistical track of its userbase.

A Firefox engineer told ZDNet the page was primarily created for selfish reasons, to help engineers debug Firefox test installs. However, it was also allowed to ship in the stable branch as a PR move, to put users’ minds at ease about what type of data the browser maker collects from its users.

The move is in tune with what Mozilla has been doing over the past two years, pushing for increased privacy controls in its browser and opening up about its practices, in stark contrast with what other browser makers have been doing in the past decade.

Source: Firefox now shows what telemetry data it’s collecting about you | ZDNet

Alias Privacy “Parasite” 2.0 Adds a Layer of Security to Your Home Assistant

Alias is a teachable “parasite” that gives you more control over your smart assistant’s customization and privacy. Through a simple app, you can train Alias to react to a self-chosen wake-word; once trained, Alias takes control over your home assistant by activating it for you. When you’re not using it, Alias makes sure the assistant is paralyzed and unable to listen to your conversations.

When placed on top of your home assistant, Alias uses two small speakers to interrupt the assistant’s listening with a constant low noise that feeds directly into the microphone of the assistant. When Alias recognizes your user-created wake-word (e.g., “Hey Alias” or “Jarvis” or whatever), it stops the noise and quietly activates the assistant by speaking the original wake-word (e.g., “Alexa” or “Hey Google”).

From here the assistant can be used as normal. Your wake-word is detected by a small neural network program that runs locally on Alias, so the sounds of your home are not uploaded to anyone’s cloud.
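Conceptually, Alias runs a simple control loop: jam by default, un-jam only after the local model hears your word. A hedged sketch of that idea (the real firmware differs; every function here is a placeholder):

```python
import time

def jam_assistant():
    """Placeholder: play low-level noise into the assistant's microphones."""
    ...

def stop_jamming():
    """Placeholder: silence the jamming speakers."""
    ...

def heard_wake_word():
    """Placeholder: run the small local wake-word model on the mic input."""
    return False

def speak(phrase):
    """Placeholder: play a recording of the assistant's original wake-word."""
    ...

while True:
    jam_assistant()            # by default, the assistant hears only noise
    if heard_wake_word():      # custom wake-word detected, entirely locally
        stop_jamming()
        speak("Alexa")         # wake the real assistant on the user's behalf
        time.sleep(10)         # leave a window for the user's actual command
```

The privacy property falls out of the structure: nothing leaves the device, because the only component that listens continuously is the local loop.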

Source: Alias Privacy “Parasite” 2.0 Adds a Layer of Security to Your Home Assistant | Make:

Don’t use online DNA tests! If You Ever Used Promethease, Your DNA Data Might Be on MyHeritage – and so might your family’s

When it comes to ways to learn about your DNA, Promethease’s service seemed like one of the safest. They promised anonymity, and to delete your report after 45 days. But now that MyHeritage has bought the company, users are being notified that their DNA data is now on MyHeritage. Wait, what?

It turns out that even though Promethease deleted reports as promised after 45 days, if you created an account, the service held onto your raw data. You now have a MyHeritage account, which you can delete if you like. Check your email. That’s how I found out about mine.

What Promethease does

A while back, I downloaded my raw data from 23andme and gave it to Promethease to find out what interesting things might be in my DNA. Ever since 23andme stopped providing detailed health-related results in 2013, Promethease was a sensible alternative. They used to charge $5 (now up to $12, but that’s still a steal) and they didn’t attempt to explain your results to you. Instead, you could just see what SNPs you had—those are spots where your DNA differs from other people’s—and read on SNPedia, a sort of genetics wikipedia, about what those SNPs might mean.

So this means Promethease had access to the raw file you gave it (which you would have gotten from 23andme, Ancestry, or another service), and to the report of SNPs that it created for you. You had the option of paying your fee, downloading your report, and never dealing with the company again; or you could create an account so that you could “regenerate” your report in the future without having to pay again. That means they stored your raw DNA file.

Source: If You Ever Used Promethease, Your DNA Data Might Be on MyHeritage Now

Because your DNA contains information about your whole family, by uploading your DNA you also upload their DNA, making it a whole lot easier to de-anonymise their DNA. It’s a bit like uploading a picture of your family to Facebook with the public settings on and then tagging them, even though the other family members on your picture aren’t on Facebook.

Social media scraper Clearview Lied About Its Crime-Solving Power In Pitches To Law Enforcement Agencies

A very questionable facial recognition tool being offered to law enforcement was recently exposed by Kashmir Hill for the New York Times. Clearview — created by a developer previously best known for an app that let people put Trump’s “hair” on their own photos — is being pitched to law enforcement agencies as a better AI solution for all their “who TF is this guy” problems.

Clearview doesn’t limit itself to law enforcement databases — ones (partially) filled with known criminals and arrestees. Instead of using known quantities, Clearview scrapes the internet for people’s photos. With the click of an app button, officers are connected to Clearview’s stash of 3 billion photos pulled from public feeds on Twitter, LinkedIn, and Facebook.

Most of the scrapees have already objected to being scraped. While this may violate terms of service, it’s not completely settled that scraping content from public feeds is actually illegal. However, peeved companies can attempt to shut off their firehoses, which is what Twitter is in the process of doing.

Clearview has made some bold statements about its effectiveness — statements that haven’t been independently confirmed. Clearview did not submit its software to NIST’s recent roundup of facial recognition AI, but it most likely would not have fared well. Even more established software performed poorly, misidentifying minorities almost 100 times more often than it did white males.

The company claims it finds matches 75% of the time. That doesn’t actually mean it finds the right person 75% of the time. It only means the software finds someone that matches submitted photos three-quarters of the time. Clearview has provided no stats on its false positive rate. That hasn’t stopped it from lying about its software and its use by law enforcement agencies.
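It’s worth dwelling on that distinction, because the two numbers sound similar but measure different things. A worked example with made-up figures (Clearview has published no false-positive rate, so these are purely illustrative):

```python
# Out of 100 searches, the software returns *some* candidate 75 times.
searches = 100
returned = 75                 # the claimed "75% match rate"

# How many of those candidates are the right person? Unknown.
# Assume 60% here, purely for illustration.
assumed_precision = 0.60

correct = returned * assumed_precision   # 45 correct identifications
wrong = returned - correct               # 30 innocent people flagged

print(f"{correct:.0f} correct, {wrong:.0f} wrong, out of {searches} searches")
```

Without a published false-positive rate, the 75% figure says nothing about how often an innocent person ends up at the top of an officer’s screen.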

A BuzzFeed report based on public records requests and conversations with the law enforcement agencies says the company’s sales pitches are about 75% bullshit.

Clearview AI, a facial recognition company that says it’s amassed a database of billions of photos, has a fantastic selling point it offers up to police departments nationwide: It cracked a case of alleged terrorism in a New York City subway station last August in a matter of seconds. “How a Terrorism Suspect Was Instantly Identified With Clearview,” read the subject line of a November email sent to law enforcement agencies across all 50 states through a crime alert service, suggesting its technology was integral to the arrest.

Here’s what the NYPD had to say about Clearview’s claims in its marketing materials:

“The NYPD did not use Clearview technology to identify the suspect in the August 16th rice cooker incident,” a department spokesperson told BuzzFeed News. “The NYPD identified the suspect using the Department’s facial recognition practice where a still image from a surveillance video was compared to a pool of lawfully possessed arrest photos.”

The NYPD also said it had no “institutional relationship” with Clearview, contradicting the company’s sales pitch insinuations. The NYPD was not alone in its rejection of Clearview’s claims.

Clearview also claimed to be instrumental in apprehending a suspect wanted for assault. In reality, the suspect turned himself in to the NYPD. The PD again pointed out Clearview played no role in this investigation. It also had nothing to do with solving a subway groping case (the tip that resulted in an arrest was provided to the NYPD by the Guardian Angels) or an alleged “40 cold cases solved” by the NYPD.

The company says it is “working with” over 600 police departments. But BuzzFeed’s investigation has uncovered at least two cases where “working with” simply meant submitting a lead to a PD tip line. Most likely, this is only the tip of the iceberg. As more requested documents roll in, there’s a very good chance this “working with” BS won’t just be a two-off.

Clearview’s background appears to be as shady as its public claims. In addition to his links to far-right groups (first uncovered by Kashmir Hill), founder Hoan Ton-That pumped up the company’s reputation by deploying a bunch of sock puppets.

Ton-That set up fake LinkedIn profiles to run ads about Clearview, boasting that police officers could search over 1 billion faces in less than a second.

These are definitely not the ethics you want to see from a company pitching dubious facial recognition software to law enforcement agencies. Some agencies may perform enough due diligence to choose a more trustworthy vendor, but others will be impressed by the lower cost and the massive number of photos in Clearview’s database, and move forward with unproven software created by a company that appears willing to exaggerate its ability to help cops catch crooks.

If it can’t tell the truth about its contribution to law enforcement agencies, it’s probably not telling the truth about the software’s effectiveness. If cops buy into Clearview’s PR pitches, the collateral damage will be innocent people’s freedom.

Source: Facial Recognition Company Clearview Lied About Its Crime-Solving Power In Pitches To Law Enforcement Agencies | Techdirt

Clearview AI Told Cops To “Run Wild” With Its Creepy Face Database, access given away without checks and sold to private firms despite claiming otherwise

Clearview AI, the facial recognition company that claims to have amassed a database of more than 3 billion photos scraped from Facebook, YouTube, and millions of other websites, is scrambling to deal with calls for bans from advocacy groups and legal threats. These troubles come after news reports exposed its questionable data practices and misleading statements about working with law enforcement.

Following stories published in the New York Times and BuzzFeed News, the Manhattan-based startup received cease-and-desist letters from Twitter and the New Jersey attorney general. It was also sued in Illinois in a case seeking class-action status.

Despite its legal woes, Clearview continues to contradict itself, according to documents obtained by BuzzFeed News that are inconsistent with what the company has told the public. In one example, the company, whose code of conduct states that law enforcement should only use its software for criminal investigations, encouraged officers to use it on their friends and family members.

“To have these technologies rolled out by police departments without civilian oversight really raises fundamental questions about democratic accountability,” Albert Fox Cahn, a fellow at New York University and the executive director of the Surveillance Technology Oversight Project, told BuzzFeed News.

In the aftermath of revelations about its technology, Clearview has tried to clean up its image by posting informational webpages, creating a blog, and trotting out surrogates for media interviews, including one in which an investor claimed Clearview was working with “over a thousand independent law enforcement agencies.” Previously, Clearview had stated that the number was around 600.

Clearview has also tried to allay concerns that its technology could be abused or used outside the scope of police investigations. In a code of conduct that the company published on its site earlier this month, it said its users should “only use the Services for law enforcement or security purposes that are authorized by their employer and conducted pursuant to their employment.”

It bolstered that idea with a blog post on Jan. 23, which stated, “While many people have advised us that a public version would be more profitable, we have rejected the idea.”

“Clearview exists to help law enforcement agencies solve the toughest cases, and our technology comes with strict guidelines and safeguards to ensure investigators use it for its intended purpose only,” the post stated.

But in a November email to a police lieutenant in Green Bay, Wisconsin, a company representative encouraged a police officer to use the software on himself and his acquaintances.

“Have you tried taking a selfie with Clearview yet?” the email read. “It’s the best way to quickly see the power of Clearview in real time. Try your friends or family. Or a celebrity like Joe Montana or George Clooney.

“Your Clearview account has unlimited searches. So feel free to run wild with your searches,” the email continued. The city of Green Bay would later agree on a $3,000 license with Clearview.

[Image: An email from Clearview to an officer in Green Bay, Wisconsin, from November 2019. Obtained by BuzzFeed News.]

Hoan Ton-That, the CEO of Clearview, claimed in an email that the company has safeguards on its product.

“As as [sic] safeguard we have an administrative tool for Law Enforcement supervisors and administrators to monitor the searches of a particular department,” Ton-That said. “An administrator can revoke access to an account at any time for any inappropriate use.”

Clearview’s previous correspondence with Green Bay police appeared to contradict what Ton-That told BuzzFeed News. In emails obtained by BuzzFeed News, the company told officers that searches “are always private and never stored in our proprietary database, which is totally separate from the photos you search.”

“It’s certainly inconsistent to, on the one hand, claim that this is a law enforcement tool and that there are safeguards — and then to, on the other hand, recommend it being used on friends and family,” Clare Garvie, a senior associate at Georgetown Law’s Center on Privacy and Technology, told BuzzFeed News.

Clearview has also previously instructed police to act in direct violation of the company’s code of conduct, which was outlined in a blog post on Monday. The post stated that law enforcement agencies were “required” to receive permission from a supervisor before creating accounts.

But in a September email sent to police in Green Bay, the company said there was an “Invite User” button in the Clearview app that can be used to give any officer access to the software. The email encouraged police officers to invite as many people as possible, noting that Clearview would give them a demo account “immediately.”

“Feel free to refer as many officers and investigators as you want,” the email said. “No limits. The more people searching, the more successes.”

“Rewarding loyal customers”

Despite its claim last week that it “exists to help law enforcement agencies,” Clearview has also been working with entities outside of law enforcement. Ton-That told BuzzFeed News on Jan. 23 that Clearview was working with “a handful of private companies who use it for security purposes.” Marketing emails from late last year obtained by BuzzFeed News via a public records request showed the startup aided a Georgia-based bank in a case involving the cashing of fraudulent checks.

Earlier this year, a company representative was slated to speak at a Las Vegas gambling conference about casinos’ use of facial recognition as a way of “rewarding loyal customers and enforcing necessary bans.” Initially, Jessica Medeiros Garrison, whose title was stated on the conference website as Clearview’s vice president of public affairs, was listed on a panel that included the head of surveillance for Las Vegas’ Cosmopolitan hotel. Later versions of the conference schedule and Garrison’s bio removed all mentions of Clearview AI. It is unclear if she actually appeared on the panel.

A company spokesperson said Garrison is “a valued member of the Clearview team” but declined to answer questions on any possible work with casinos.

Cease and desist

Clearview has also faced legal threats from private and government entities. Last week, Twitter sent the company a cease-and-desist letter, noting that Clearview’s collection of photos from the site violated the social network’s terms of service.

“This type of use (scraping Twitter for people’s images/likeness) is not allowed,” a Twitter spokesperson told BuzzFeed News. Twitter, which asked Clearview to cease scraping and delete all data collected from its site, pointed BuzzFeed News to a part of its developer policy, which states it does not allow its data to be used for facial recognition.

On Friday, Clearview received a similar note from the New Jersey attorney general, who called on state law enforcement agencies to stop using the software. The letter also told Clearview to stop using clips of New Jersey Attorney General Gurbir Grewal in a promotional video on its site that claimed that a New Jersey police department used the software in a child predator sting late last year.

[…]

Clearview declined to provide a list of law enforcement agencies on free trials or paid contracts, saying only that the number was more than 600.

“We do not have to be hidden”

That number is lower than what one of Clearview’s investors bragged about on Saturday. David Scalzo, an early investor in Clearview through his firm, Kirenaga Partners, claimed in an interview with Dilbert creator and podcaster Scott Adams that “over a thousand independent law enforcement agencies” were using the software. The investor went on to contradict the company’s public statement that it would not make its tool available to the public, stating “it is inevitable that this digital information will be out there” and “the best thing we can do is get this technology out to everyone.”

[…]

EPIC’s letter came after an Illinois resident sued Clearview in a state district court last Wednesday, alleging the software violated the Illinois Biometric Information Privacy Act by collecting the “identifiers and information” — like facial data gathered from photos accumulated from social media — without permission. Under the law, private companies are not allowed to “collect, capture, purchase,” or receive biometric information about a person without their consent.

The complaint, which also alleged that Clearview violated the constitutional rights of all Americans, asked for class-action recognition on behalf of all US citizens, as well as all Illinois residents whose biometric information was collected. When asked, Ton-That did not comment on the lawsuit.

In legal documents given to police, obtained by BuzzFeed News through a public records request, Clearview argued that it was not subject to states’ biometric data laws including those in Illinois. In a memo to the Atlanta Police Department, a lawyer for Clearview argued that because the company’s clients are public agencies, the use of the startup’s technology could not be regulated by state law, which only governs private entities.

Cahn, the executive director of the Surveillance Technology Oversight Project, said that it was “problematic” for Clearview AI to argue it wasn’t beholden to state biometric laws.

“Those laws regulate the commercial use of these sorts of tools, and the idea that somehow this isn’t a commercial application, simply because the customer is the government, makes no sense,” he said. “This is a company with private funders that will be profiting from the use of our information.”

Under scrutiny, Clearview added explanations to its site to deal with privacy concerns. It added an email link for people to ask questions about its privacy policy, saying that all requests will go to its data protection officer. When asked by BuzzFeed News, the company declined to name that official.

To process a request, however, Clearview is requesting more personal information: “Please submit name, a headshot and a photo of a government-issued ID to facilitate the processing of your request.” The company declined to say how it would use that information.

Source: Clearview AI Once Told Cops To “Run Wild” With Its Facial Recognition Tool

Brave, Google, Microsoft, Mozilla gather together to talk web privacy… and why we all shouldn’t get too much of it. Only FF and Brave will give you some.

At the USENIX Enigma conference on Tuesday, representatives of four browser makers, Brave, Google, Microsoft, and Mozilla, gathered to banter about their respective approaches to online privacy, while urging people not to ask for too much of it.

Apple, which has advanced browser privacy standards but was recently informed that its tracking defenses can be used for, er, tracking, was conspicuously absent, though it had a tongue-tied representative recruiting for privacy-oriented job positions at the show.

The browser-focused back-and-forth was mostly cordial as the software engineers representing their companies discussed notable privacy features in the various web browsers they worked on. They stressed the benefit of collaboration on web standards and the mutually beneficial effects of competition.

Eric Lawrence, program manager on the Microsoft Edge team, touched on how Microsoft has just jettisoned 25 years of Internet Explorer code to replatform Edge on the open source Chromium project, now the common foundation for 20 or so browsers.

Beside a slide that declared “Microsoft loves the Web,” Lawrence made the case for the new Edge as a modern browser with some well-designed privacy features, including Microsoft’s take on tracking protection, which blocks most trackers in its default setting and can be made more strict, at the potential cost of site compatibility.

Edge comes across as a reliable alternative to Chrome and should become more distinct as it evolves. It occupies a difficult space on the privacy continuum, in that it has some nice privacy features but not as many as Brave or Firefox. But Edge may find fans on the strength of the Microsoft brand since, as Lawrence emphasized, Microsoft is not new to privacy concerns.

That said, Microsoft is not far from Google in advocating not biting the hand that feeds the web ecosystem – advertising.

“The web doesn’t exist in a vacuum,” Lawrence warned. “People who are building sites and services have choices for what platforms they target. They can build a mobile application. They can take their content off the open web and put it into a walled garden. And so if we do things with privacy that hurt the open web, we could end up pushing people to less privacy for certain ecosystems.”

Lawrence pointed to a recent report about a popular Android app found to be leaking data. It took time to figure that out, he said, because mobile platforms are less transparent than the web, where it’s easier to scour source code and analyze network behavior.

Justin Schuh, engineering director on Google Chrome for trust and safety, reprised an argument he’s made previously that too much privacy would be harmful to ad-supported businesses.

“Most of the media that we consume is actually funded by advertising today,” Schuh explained. “It has been for a very long time. Now, I’m not here to make the argument that advertising is the best or only way to fund these things. But the truth is that print, radio, and TV – all these are funded primarily through advertising.”

And so too is the web, he insisted, arguing that advertising is what has made so much online content available to people who otherwise wouldn’t have access to it across the globe.

Schuh said that, in the context of the web, two trends concern him. One, he claimed, is that content is leaving because it’s easier to monetize in apps – but he didn’t cite a basis for that assertion.

The other is the rise of covert tracking, which arose, as Schuh tells it, because advertisers wanted to track people across multiple devices. So they turned to looking at IP-based fingerprinting and metadata tracking, and the joining of data sets to identify people as they shift between phone, computer, and tablet.

Covert tracking also became more popular, he said, because advertisers wanted to bypass anti-tracking mechanisms. Thus, we have privacy-invading practices like CNAME cloaking, site fingerprinting, hostname rotation, and the like because browser users sought privacy.
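CNAME cloaking, for example, hides a third-party tracker behind a first-party-looking subdomain so that cookie rules and blocklists treat it as same-site. A plain DNS lookup exposes the trick; a sketch using the dnspython package (the domain names are hypothetical):

```python
import dns.resolver  # pip install dnspython

# A site serves its "analytics" from what looks like its own subdomain...
subdomain = "metrics.example.com"  # hypothetical

try:
    answers = dns.resolver.resolve(subdomain, "CNAME")
    for record in answers:
        # ...but the CNAME points straight at a tracking vendor's host.
        print(f"{subdomain} -> {record.target}")
except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
    print(f"{subdomain} has no CNAME record")
```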

Schuh made the case for Google’s Privacy Sandbox proposal, a set of controversial specs being developed ostensibly to enhance privacy by reducing data available for tracking and browser fingerprinting while also giving advertisers the ability to target ads.

“Broadly speaking, advertisers don’t actually need your data,” said Schuh. “All that they really want is to monetize efficiently.”

But given the willingness of advertisers to circumvent user privacy choices, the ad industry’s consistent failure to police bad behavior, and the persistence of ad fraud and malicious ads, it’s difficult to accept that advertisers can be trusted to behave.

Tanvi Vyas, principal engineer at Mozilla, focused on the consequences of the current web ecosystem, where data is gathered to target and manipulate people. She reeled off a list of social harms arising from the status quo.

“Democracies are compromised and elections around the world are being tampered with,” she said. “Populations are manipulated and micro-targeted. Fake news is delivered to just the right audience at the right time. Discrimination flourishes, and emotional harm is inflicted on specific individuals when our algorithms go wrong.”

Thanks, Facebook, Google, and Twitter.

Worse still, Vyas said, the hostile ecosystem has a chilling effect on sophisticated users who understand online tracking and prevents them from taking action. “At Mozilla, we think this is an unacceptable cost for society to pay,” she said.

Vyas described various pro-privacy technologies implemented in Firefox, including Facebook Container, which sandboxes Facebook trackers so they can’t track users on third-party websites. She also argued for legislation to improve online privacy, though Lawrence, recalling his days working on Internet Explorer, noted that privacy rules tied to the P3P privacy scheme had proved ineffective two decades ago.

Speaking for Brave, CISO Yan Zhu argued for a slightly different approach, though it still involves engaging with the ad industry to some extent.

“The main goal of Brave is we want to repair the privacy problems in the existing ad ecosystem in a way that no other browser has really tried, while giving publishers a revenue stream,” she said. “Basically, we have options to set micropayments to publishers, and also an option to see privacy preserving ads.”

Micropayments have been tried before but they’ve largely failed, assuming you don’t consider in-app payments to be micropayments.

Faced with a plea from an attendee for more of the browser makers to support micropayments instead of relying on ads, Schuh said, “I would absolutely love to see micropayments succeed. I know there have been a bunch of efforts at Google and various other companies to do it. It turns out that the payment industry itself is really, really complicated. And there are players in there that expect a fairly large cut. And so long as that exists, I don’t know if there’s a path forward.”

It now falls to Brave to prove otherwise.

Shortly thereafter, Gabriel DeWitt, VP of product at global ad marketplace Index Exchange, took a turn at the mic in the audience section, introduced himself, and then lightheartedly asked other attendees not to throw anything at him.

Insisting that his company also cares about user privacy, despite opinions to the contrary, he asked the panelists how he could better collaborate with them.

It’s worth noting that next week, when Chrome 80 debuts, Google intends to introduce changes in the way it handles cookies that will affect advertisers. What’s more, the company has said it plans to phase out cookies entirely in a few years.

Schuh, from Google, elicited a laugh when he said, “I guess I can take this one, because that’s what everyone is expecting.”

We were expecting privacy. We got surveillance capitalism instead.

Source: Brave, Google, Microsoft, Mozilla gather together to talk web privacy… and why we all shouldn’t get too much of it • The Register

Ubiquiti says UniFi routers will automatically beam performance data back to the mothership without consent, no opt-out.

Ubiquiti Networks is once again under fire for suddenly rewriting its telemetry policy after changing how its UniFi routers collect data without telling anyone.

The changes were identified in a new help document published on the US manufacturer’s website. The document differentiates between “personal data”, which includes everything that identifies a specific individual, and “other data”, which is everything else.

The document says that while users can continue to opt out of having their “personal data” collected, their “other data” – anonymous performance and crash information – will be “automatically reported”. In other words, you ain’t got no choice.

This is a shift from Ubiquiti’s last statement on data collection three months ago, which promised an opt-out button for all data collection in upcoming versions of its firmware.

A Ubiquiti representative confirmed in a forum post that the changes will automatically affect all firmware beyond 4.1.0, and that users can stop “other data” being collected by manually editing the software’s config file.

“Yes, it should be updated when we go to public release, it’s on our radar,” the rep wrote. “But I can’t guarantee it will be updated in time.”
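For the technically inclined, the workaround the representative alluded to amounts to a one-line config change on the controller. A hedged sketch (the file path and property name below come from community forum posts, not official Ubiquiti documentation, and may change between releases):

```python
from pathlib import Path

# Community-reported kill switch for UniFi controller analytics; both the
# path and the property name are assumptions based on forum posts.
props = Path("/usr/lib/unifi/data/system.properties")
line = "unifi.analytics.enabled=false\n"

text = props.read_text()
if "unifi.analytics.enabled" not in text:
    props.write_text(text + line)
    print("Analytics property appended; restart the controller to apply.")
```

As reported below, at least one user saw data still being sent after a config edit like this, so treat it as mitigation, not a guarantee.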

The drama unfolded when netizens grabbed their pitchforks and headed for the company’s forums to air their grievances. “Come on UBNT,” said user leonardogyn. “PLEASE do not insist on making it hard (or impossible) to fully and easily disable sending of Analytics data. I understand it’s a great tool for you, but PLEASE consider that’s [sic] ultimately us, the users, that *must* have the option to choose to participate on it.”

The same user also pointed out that, even when the “Analytics” opt-out button is selected in the 5.13.9 beta controller software, Ubiquiti is still collecting some data. The person called the opt-out option “a misleading one, not to say a complete lie”.

Other users were similarly outraged. “This was pretty much the straw that broke the camel’s back, to be honest,” said elcid89. “I only use Unifi here at the house, but between the ongoing development instability, frenetic product range, and lack of responsiveness from staff, I’ve been considering junking it for a while now. This made the decision for me – switching over to Cisco.”

One user said that the firmware was still sending their data to two addresses even after they modified the config file.

Source: You spoke, we didn’t listen: Ubiquiti says UniFi routers will beam performance data back to mothership automatically • The Register

Facebook Enables Confusing ‘Off-Facebook Activity’ Privacy Tool, which won’t stop any tracking whatsoever

In a blog post earlier today, the famously privacy-conscious Mark Zuckerberg announced (in honor of Data Privacy Day, which is apparently a thing) the official rollout of a long-awaited Off-Facebook Activity tool that allows Facebook users to monitor and manage the connections between Facebook profiles and their off-platform activity.

“To help shed more light on these practices that are common yet not always well understood, today we’re introducing a new way to view and control your off-Facebook activity,” Zuckerberg said in the post. “Off-Facebook Activity lets you see a summary of the apps and websites that send us information about your activity, and clear this information from your account if you want to.”

Zuck’s use of the phrases “control your off-Facebook activity” and “clear this information from your account” is kinda misleading—you’re not really controlling or clearing much of anything. By using this tool, you’re just telling Facebook to put the data it has on you into two separate buckets that are otherwise mixed together. Put another way, Facebook is offering a one-stop shop to opt out of any ties between your on-platform activity on Facebook or Instagram and the sites and services you peruse daily that have some sort of Facebook software installed.

The only thing you’re clearing is a connection Facebook made between its data and the data it gets from third parties, not the data itself.

As an ad-tech reporter, my bread and butter involves downloading shit that does god-knows-what with your data, which is why I shouldn’t’ve been surprised that Facebook hoovered data from more than 520 partners across the internet—either sites I’d visited or apps I’d downloaded. For Gizmodo alone, Facebook tracked “252 interactions” drawn from the handful of plug-ins our blog has installed. (To be clear, you’re going to run into these kinds of trackers e.v.e.r.y.w.h.e.r.e.—not just on our site.)

These plug-ins—or “business tools,” as Facebook describes them—are the pipeline that the company uses to ascertain your off-platform activity and tie it to your on-platform identity. As Facebook describes it:

– Jane buys a pair of shoes from an online clothing and shoe store.

– The store shares Jane’s activity with us using our business tools.

– We receive Jane’s off-Facebook activity and we save it with her Facebook account. The activity is saved as “visited the Clothes and Shoes website” and “made a purchase”.

– Jane sees an ad on Facebook for a 10% off coupon on her next shoe or clothing purchase from the online store.
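In practice, a “business tool” is a snippet of Facebook code (a pixel) or a server-side call that posts the event to Facebook’s Graph API. A hedged sketch of roughly what the store’s server sends, shaped after Facebook’s Conversions API (the pixel ID, token, and field values are invented for illustration):

```python
import hashlib
import time

import requests

PIXEL_ID = "1234567890"   # hypothetical
ACCESS_TOKEN = "EAAB..."  # hypothetical

def sha256(value: str) -> str:
    """Facebook expects identifiers like emails normalized and hashed."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

event = {
    "event_name": "Purchase",
    "event_time": int(time.time()),
    "user_data": {"em": [sha256("jane@example.com")]},  # hashed but matchable
    "custom_data": {"currency": "USD", "value": 89.99},
}

requests.post(
    f"https://graph.facebook.com/v16.0/{PIXEL_ID}/events",
    json={"data": [event], "access_token": ACCESS_TOKEN},
)
```

The hashing buys Jane very little: Facebook hashes its own users’ emails the same way and matches on the digest, which is exactly how the purchase lands on her profile.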

Here’s the catch, though: When I hit the handy “clear history” button that Facebook now provides, it won’t do jack shit to stop a given shoe store from sharing my data with Facebook—which explicitly laid this out for me when I hit that button:

Your activity history will be disconnected from your account. We’ll continue to receive your activity from the businesses and organizations you visit in the future.

Yes, it’s confusing. Baffling, really. But basically, Facebook has profiles on users and non-users alike. Those of you who have Facebook profiles can use the new tool to disconnect your Facebook data from the data the company receives from third parties. Facebook will still have that third-party-collected data and it will continue to collect more data, but that bucket of data won’t be connected to your Facebook identity.

The data third parties collect about you technically isn’t Facebook’s responsibility to begin with. If I buy a pair of new sneakers from Steve Madden, where that purchase or browsing data goes is ultimately in Steve Madden’s metaphorical hands. And thanks to the wonders of targeted advertising, even the sneakers I’m purchasing in-store aren’t safe from being added as a data point that can be tied to the collective profile Facebook’s gathered on me as a consumer. Naturally, it behooves whoever runs marketing at Steve Madden—or anywhere, really—to plug in as many of those data points as they possibly can.

For the record, I also tried toggling my off-Facebook activity to keep it from being linked to my account, but was told that, while the company would still be getting this information from third parties, it would just be “disconnected from [my] account.”

Put another way: The way I browse any number of sites and apps will ultimately still make its way to Facebook, and still be used for targeted advertising across… those sites and apps. Only now, my on-Facebook life—the cat groups I join, the statuses I comment on, the concerts I’m “interested” in (but never actually attend)—won’t be a part of that profile.

Or put another way: Facebook just announced that it still has its tentacles in every part of your life in a way that’s impossible to untangle yourself from. Now, it just doesn’t need the social network to do it.

Source: Facebook Enables Confusing ‘Off-Facebook Activity’ Privacy Tool

Leaked AVAST Documents Expose the Secretive Market for Your Web Browsing Data: Google, MS, Pepsi, they all buy it – Really, uninstall it now!

An antivirus program used by hundreds of millions of people around the world is selling highly sensitive web browsing data to many of the world’s biggest companies, a joint investigation by Motherboard and PCMag has found. Our report relies on leaked user data, contracts, and other company documents that show the sale of this data is both highly sensitive and is in many cases supposed to remain confidential between the company selling the data and the clients purchasing it.

The documents, from a subsidiary of the antivirus giant Avast called Jumpshot, shine new light on the secretive sale and supply chain of peoples’ internet browsing histories. They show that the Avast antivirus program installed on a person’s computer collects data, and that Jumpshot repackages it into various different products that are then sold to many of the largest companies in the world. Some past, present, and potential clients include Google, Yelp, Microsoft, McKinsey, Pepsi, Sephora, Home Depot, Condé Nast, Intuit, and many others. Some clients paid millions of dollars for products that include a so-called “All Clicks Feed,” which can track user behavior, clicks, and movement across websites in highly precise detail.

Avast claims to have more than 435 million active users per month, and Jumpshot says it has data from 100 million devices. Avast collects data from users that opt-in and then provides that to Jumpshot, but multiple Avast users told Motherboard they were not aware Avast sold browsing data, raising questions about how informed that consent is.

The data obtained by Motherboard and PCMag includes Google searches, lookups of locations and GPS coordinates on Google Maps, people visiting companies’ LinkedIn pages, particular YouTube videos, and people visiting porn websites. It is possible to determine from the collected data what date and time the anonymized user visited YouPorn and PornHub, and in some cases what search term they entered into the porn site and which specific video they watched.

[…]

Until recently, Avast was collecting the browsing data of its customers who had installed the company’s browser plugin, which is designed to warn users of suspicious websites. Security researcher and AdBlock Plus creator Wladimir Palant published a blog post in October showing that Avast harvests user data with that plugin. Shortly after, browser makers Mozilla, Opera, and Google removed Avast’s and subsidiary AVG’s extensions from their respective browser extension stores. Avast had previously explained this data collection and sharing in a blog and forum post in 2015. Avast has since stopped sending browsing data collected by these extensions to Jumpshot, Avast said in a statement to Motherboard and PCMag.

[…]

However, the data collection is ongoing, the source and documents indicate. Instead of harvesting information through software attached to the browser, Avast is doing it through the anti-virus software itself. Last week, months after it was spotted using its browser extensions to send data to Jumpshot, Avast began asking its existing free antivirus consumers to opt-in to data collection, according to an internal document.

“If they opt-in, that device becomes part of the Jumpshot Panel and all browser-based internet activity will be reported to Jumpshot,” an internal product handbook reads. “What URLs did these devices visit, in what order and when?” it adds, summarising what questions the product may be able to answer.

Senator Ron Wyden, who in December asked Avast why it was selling users’ browsing data, said in a statement, “It is encouraging that Avast has ended some of its most troubling practices after engaging constructively with my office. However I’m concerned that Avast has not yet committed to deleting user data that was collected and shared without the opt-in consent of its users, or to end the sale of sensitive internet browsing data. The only responsible course of action is to be fully transparent with customers going forward, and to purge data that was collected under suspect conditions in the past.”

[…]

On its website and in press releases, Jumpshot names Pepsi, and consulting giants Bain & Company and McKinsey as clients.

Besides Expedia, Intuit, and L’Oréal, other companies not already mentioned in public Jumpshot announcements include coffee company Keurig, YouTube promotion service vidIQ, and consumer insights firm Hitwise. None of those companies responded to a request for comment.

On its website, Jumpshot lists some previous case studies for using its browsing data. Magazine and digital media giant Condé Nast, for example, used Jumpshot’s products to see whether the media company’s advertisements resulted in more purchases on Amazon and elsewhere. Condé Nast did not respond to a request for comment.

ALL THE CLICKS

Jumpshot sells a variety of different products based on data collected by Avast’s antivirus software installed on users’ computers. Clients in the institutional finance sector often buy a feed of the top 10,000 domains that Avast users are visiting to try and spot trends, the product handbook reads.

Another Jumpshot product is the company’s so-called “All Clicks Feed.” It allows a client to buy information on all of the clicks Jumpshot has seen on a particular domain, like Amazon.com, Walmart.com, Target.com, BestBuy.com, or Ebay.com.

In a tweet sent last month intended to entice new clients, Jumpshot noted that it collects “Every search. Every click. Every buy. On every site” [emphasis Jumpshot’s].

[…]

One company that purchased the All Clicks Feed is New York-based marketing firm Omnicom Media Group, according to a copy of its contract with Jumpshot. Omnicom paid Jumpshot $2,075,000 for access to data in 2019, the contract shows. It also included another product called “Insight Feed” for 20 different domains. The fee for data in 2020 and then 2021 is listed as $2,225,000 and $2,275,000 respectively, the document adds.

[…]

The internal product handbook says that device IDs do not change for each user, “unless a user completely uninstalls and reinstalls the security software.”

Source: Leaked Documents Expose the Secretive Market for Your Web Browsing Data – VICE

Ring Doorbell App Gives Away your data to 3rd parties, without your knowledge or consent

An investigation by EFF of the Ring doorbell app for Android found it to be packed with third-party trackers sending out a plethora of customers’ personally identifiable information (PII). Four main analytics and marketing companies were discovered to be receiving information such as the names, private IP addresses, mobile network carriers, persistent identifiers, and sensor data on the devices of paying customers.

The danger in sending even small bits of information is that analytics and tracking companies are able to combine these bits together to form a unique picture of the user’s device. This cohesive whole represents a fingerprint that follows the user as they interact with other apps and use their device, in essence providing trackers the ability to spy on what a user is doing in their digital lives and when they are doing it. All this takes place without meaningful user notification or consent and, in most cases, no way to mitigate the damage done. Even when this information is not misused and employed for precisely its stated purpose (in most cases marketing), this can lead to a whole host of social ills.
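The mechanics of that fingerprinting are mundane: concatenate enough individually bland attributes and hash them, and the result is effectively unique per device. A toy illustration (the attribute names are generic examples, not any one SDK’s actual schema):

```python
import hashlib

# Individually harmless attributes of the sort tracker SDKs report.
device = {
    "model": "Pixel 3",
    "os": "Android 10",
    "timezone": "America/New_York",
    "language": "en-US",
    "resolution": "1080x2160",
    "carrier": "T-Mobile",
}

# Combined and hashed, they form a stable ID that survives cookie resets.
fingerprint = hashlib.sha256(
    "|".join(f"{k}={v}" for k, v in sorted(device.items())).encode()
).hexdigest()

print(fingerprint[:16])  # same device -> same ID, in every app using the SDK
```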

[…]

Our testing, using Ring for Android version 3.21.1, revealed PII delivery to branch.io, mixpanel.com, appsflyer.com and facebook.com. Facebook, via its Graph API, is alerted when the app is opened and upon device actions such as app deactivation after screen lock due to inactivity. Information delivered to Facebook (even if you don’t have a Facebook account) includes time zone, device model, language preferences, screen resolution, and a unique identifier (anon_id), which persists even when you reset the OS-level advertiser ID.

Branch, which describes itself as a “deep linking” platform, receives a number of unique identifiers (device_fingerprint_id, hardware_id, identity_id) as well as your device’s local IP address, model, screen resolution, and DPI.

AppsFlyer, a big data company focused on the mobile platform, is given a wide array of information upon app launch as well as certain user actions, such as interacting with the “Neighbors” section of the app. This information includes your mobile carrier, when Ring was installed and first launched, a number of unique identifiers, the app you installed from, and whether AppsFlyer tracking came preinstalled on the device. This last bit of information is presumably to determine whether AppsFlyer tracking was included as bloatware on a low-end Android device. Manufacturers often offset the costs of device production by selling consumer data, a practice that disproportionately affects low-income earners and was the subject of a recent petition to Google initiated by Privacy International and co-signed by EFF.

Most alarmingly, AppsFlyer also receives a list of the sensors installed on your device (on our test device, this included the magnetometer, gyroscope, and accelerometer) and their current calibration settings.

Ring gives MixPanel the most information by far. Users’ full names, email addresses, device information such as OS version and model, whether bluetooth is enabled, and app settings such as the number of locations a user has Ring devices installed in, are all collected and reported to MixPanel. MixPanel is briefly mentioned in Ring’s list of third party services, but the extent of their data collection is not. None of the other trackers listed in this post are mentioned at all on this page.

Ring also sends information to Crashlytics, the Google-owned crash-logging service. The exact extent of data sharing with this service is yet to be determined.

Source: Ring Doorbell App Packed with Third-Party Trackers | Electronic Frontier Foundation

Class-action lawsuit filed against creepy Clearview AI startup which scraped everyone’s social media profiles

A lawsuit — seeking class-action status — was filed this week in Illinois against Clearview AI, a New York-based startup that has scraped social media networks for people’s photos and created one of the biggest facial recognition databases in the world.

The secretive startup was exposed last week in an explosive New York Times report which revealed how Clearview was selling access to “faceprints” and facial recognition software to law enforcement agencies across the US. The startup claimed it could identify a person based on a single photo, revealing their real name, general location, and other identifiers.

The report sparked outrage among US citizens, who had photos collected and added to the Clearview AI database without their consent. The Times reported that the company collected more than three billion photos, from sites such as Facebook, Twitter, YouTube, Venmo, and others.

This week, the company was hit with the first lawsuit in the aftermath of the New York Times exposé.

Lawsuit claims Clearview AI broke BIPA

According to a copy of the complaint obtained by ZDNet, plaintiffs claim Clearview AI broke Illinois privacy laws.

Namely, the New York startup broke the Illinois Biometric Information Privacy Act (BIPA), a law that safeguards state residents from having their biometrics data used without consent.

According to BIPA, companies must obtain explicit consent from Illinois residents before collecting or using any of their biometric information — such as the facial scans Clearview collected from people’s social media photos.

“Plaintiff and the Illinois Class retain a significant interest in ensuring that their biometric identifiers and information, which remain in Defendant Clearview’s possession, are protected from hacks and further unlawful sales and use,” the lawsuit reads.

“Plaintiff therefore seeks to remedy the harms Clearview and the individually-named defendants have already caused, to prevent further damage, and to eliminate the risks to citizens in Illinois and throughout the United States created by Clearview’s business misuse of millions of citizen’s biometric identifiers and information.”

The plaintiffs are asking the court for an injunction against Clearview to stop it from selling the biometric data of Illinois residents, a court order forcing the company to delete any Illinois residents’ data, and punitive damages, to be decided by the court at a later date.

“Defendants’ violation of BIPA was intentional or reckless or, pleaded in the alternative, negligent,” the complaint reads.

Clearview AI did not return a request for comment.

Earlier this week, US lawmakers also sought answers from the company, while Twitter sent a cease-and-desist letter demanding the startup stop collecting user photos from their site and delete any existing images.

Source: Class-action lawsuit filed against controversial Clearview AI startup | ZDNet

London Police Will Start Using Live Facial Recognition Tech Now, Big Brother becomes a computer watching you

The dystopian nightmare begins. Today, London’s Metropolitan Police Service announced it will begin deploying Live Facial Recognition (LFR) tech across the capital in the hopes of locating and arresting wanted people.

[…]

The way the system is supposed to work, according to the Metropolitan Police, is the LFR cameras will first be installed in areas where ‘intelligence’ suggests the agency is most likely to locate ‘serious offenders.’ Each deployment will supposedly have a ‘bespoke’ watch list comprising images of wanted suspects for serious and violent offenses. The London police also note the cameras will focus on small, targeted areas to scan folks passing by. According to BBC News, previous trials had taken place in areas such as Stratford’s Westfield shopping mall and the West End area of London. It seems likely the agency is also anticipating some unease, as the cameras will be ‘clearly signposted’ and officers are slated to hand out informational leaflets.

The agency’s statement also emphasizes that the facial recognition tech is not meant to replace policing—just ‘prompt’ officers by suggesting a person in the area may be a fishy individual…based solely on their face. “It is always the decision of an officer whether or not to engage with someone,” the statement reads. On Twitter, the agency also noted in a short video that images that don’t trigger alerts will be immediately deleted.

As with any police-related, Minority Report-esque tech, accuracy is a major concern. While the Metropolitan Police Service claims that 70 percent of suspects were successfully identified and that only one in 1,000 people triggered a false alert, not everyone agrees the LFR tech is rock-solid. An independent review from July 2019 found that across six of the trial deployments, only eight of 42 matches were correct, an abysmal 19 percent accuracy. Other problems found by the review included inaccurate watch list information (people were stopped over cases that had already been resolved) and unclear criteria for who was included on the watch list.
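
The Met’s claimed numbers and the review’s findings can both be true at once, thanks to the base-rate effect: almost everyone walking past the cameras is not on the watch list, so even a one-in-1,000 false-alert rate produces more false alerts than true matches. A back-of-the-envelope sketch, where the crowd size and number of watch-listed people present are assumptions (the Met has published neither):

```python
# Illustrative base-rate arithmetic. Crowd size and watch-listed people
# present are assumed; the detection and false-alert rates are the Met's
# own claimed figures.
crowd_size       = 10_000    # faces scanned in one deployment (assumed)
watchlisted      = 5         # watch-listed people actually present (assumed)
detection_rate   = 0.70      # the Met's claimed identification rate
false_alert_rate = 1 / 1000  # the Met's claimed false-alert rate

true_alerts  = watchlisted * detection_rate                   # ~3.5
false_alerts = (crowd_size - watchlisted) * false_alert_rate  # ~10.0
precision    = true_alerts / (true_alerts + false_alerts)

# ~26% of alerts correct: the same ballpark as the 19% the review measured
print(f"correct share of alerts: {precision:.0%}")
```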

Privacy groups aren’t particularly happy with the development. Big Brother Watch, a privacy campaign group that’s been particularly vocal against facial recognition tech, took to Twitter, telling the Metropolitan Police Service they’d “see them in court.”

“This decision represents an enormous expansion of the surveillance state and a serious threat to civil liberties in the UK,” said Silkie Carlo, Big Brother Watch’s director, in a statement. “This is a breath-taking assault on our rights and we will challenge it, including by urgently considering next steps in our ongoing legal claim against the Met and the Home Secretary.”

Meanwhile, another privacy group, Liberty, has also voiced resistance to the measure. “Rejected by democracies. Embraced by oppressive regimes. Rolling out facial recognition surveillance tech is a dangerous and sinister step in giving the State unprecedented power to track and monitor any one of us. No thanks,” the group tweeted.

Source: London Police Will Start Using Live Facial Recognition Tech

Clearview has scraped social media sites wholesale, illegally and against their terms of service. It holds your pictures in a massive database (who knows how secure that is?) along with a face-recognition AI, and it is selling access to cops, and who knows who else.

What if a stranger could snap your picture on the sidewalk then use an app to quickly discover your name, address and other details? A startup called Clearview AI has made that possible, and its app is currently being used by hundreds of law enforcement agencies in the US, including the FBI, says a Saturday report in The New York Times.

The app, says the Times, works by comparing a photo to a database of more than 3 billion pictures that Clearview says it’s scraped off Facebook, Venmo, YouTube and other sites. It then serves up matches, along with links to the sites where those database photos originally appeared. A name might easily be unearthed, and from there other info could be dug up online.

The size of the Clearview database dwarfs others in use by law enforcement. The FBI’s own database, which taps passport and driver’s license photos, is one of the largest, with over 641 million images of US citizens.

The Clearview app isn’t currently available to the public, but the Times says police officers and Clearview investors think it will be in the future.

The startup said in a statement Tuesday that its “technology is intended only for use by law enforcement and security personnel. It is not intended for use by the general public.”

Source: Clearview app lets strangers find your name, info with snap of a photo, report says – CNET

Using the system involves uploading photos to Clearview AI’s servers, and it’s unclear how secure these are. Although Clearview AI says its customer-support employees will not look at the photos that are uploaded, the company appeared to be aware that Kashmir Hill (the Times journalist who reported the piece) was having police search for her face as part of her reporting:

While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.

The Times reports that the system appears to have gone viral with police departments, with over 600 already signed up. Although there’s been no independent verification of its accuracy, Hill says the system was able to identify photos of her even when she covered the lower half of her face, and that it managed to find photographs of her that she’d never seen before.

One expert quoted by The Times said that the amount of money involved in these systems means they need to be banned before their abuse becomes more widespread. “We’ve relied on industry efforts to self-police and not embrace such a risky technology, but now those dams are breaking because there is so much money on the table,” said Woodrow Hartzog, a professor of law and computer science at Northeastern University. “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”

Source: The Verge

So Clearview has you already, even though its scraping violates the sites’ terms of service. Here’s how to stop the next outfit from grabbing you off Facebook – maybe.

It should come as little surprise that any content you offer to the web for public consumption has the potential to be scraped and misused by anyone clever enough to do it. And while that doesn’t make this weekend’s report from The New York Times any less damning, it’s a great reminder about how important it is to really go through the settings for your various social networks and limit how your content is, or can be, accessed by anyone.

I won’t get too deep into the Times’ report; it’s worth reading on its own, since it involves a company (Clearview AI) scraping more than three billion images from millions of websites, including Facebook, and creating a facial-recognition app that does a pretty solid job of identifying people using images from this massive database.

Even though Clearview’s scraping techniques technically violate the terms of service on a number of websites, that hasn’t stopped the company from acquiring images en masse. And it keeps whatever it finds, which means that turning all your online data private isn’t going to help if Clearview has already scanned and grabbed your photos.

Still, something is better than nothing. On Facebook, likely the largest stash of your images, you’re going to want to visit Settings > Privacy and look for the option labeled “Do you want search engines outside of Facebook to link to your profile?”

Turn that off, and Clearview won’t be able to grab your images. That’s not the setting I would have expected to use, I confess, which makes me want to go through all of my social networks and rethink how the information I share with them flows out to the greater web.

Lock down your Facebook even more with these settings

Since we’re already here, it’s worth spending a few minutes wading through Facebook’s settings and making sure as much of your content is set to friends-only as possible. That includes changing “Who can see your future posts” to “friends,” using the “Limit Past Posts” option to change everything you’ve previously posted to friends-only, and making sure that only you can see your friends list—to prevent any potential scraping and linking that some third party might attempt. Similarly, make sure only your friends (or friends of friends) can look you up via your email address or phone number. (You never know!)

You should then visit the Timeline and Tagging settings page and make a few more changes. That includes only allowing friends to see what other people post on your timeline, as well as posts you’re tagged in. And because I’m a bit sensitive about all the crap people tag me in on Facebook, I’d turn on the “Review” options, too. That won’t keep your account from being scraped, but it’s a great way to exert more control over your timeline.

Finally, even though it also doesn’t prevent companies from scraping your account, pull up the Public posts section of Facebook’s settings page and limit who is allowed to follow you (if you desire). You should also restrict who can comment on or like your public information, like posts or other details about your life you share openly on the service.

Once I fix Facebook, then what?

Here’s the annoying part. Were I you, I’d take an afternoon or evening and write out all the different places I typically share snippets of my life online. For most people, that’s probably a handful of social services: Facebook, Instagram, Twitter, YouTube, Flickr, et cetera.

Once you’ve created your list, I’d dig deep into the settings of each service and see what options you have, if any, for limiting the availability of your content. This might run contrary to how you use the service—if you’re trying to gain lots of Instagram followers, for example, locking your profile to “private” and requiring potential followers to request access might slow your attempts to become the next big Insta-star. However, it should also prevent anyone with a crafty scraping utility from mass-downloading your photos (and associating them with you, either through some fancy facial-recognition tech or by linking them to your account).

Source: Change These Facebook Settings to Protect Your Photos From Facial Recognition Software

BlackVue dashcam shows anyone everywhere you are in real time and where you have been in the past

An app that is supposed to be a fun way for dashcam users to broadcast their camera feeds and drives is actually allowing people to scrape and store the real-time locations of drivers across the world.

BlackVue is a dashcam company with its own social network. With a small, internet-connected dashcam installed inside their vehicle, BlackVue users can receive alerts when their camera detects an unusual event such as someone colliding with their parked car. Customers can also allow others to tune into their camera’s feed, letting others “vicariously experience the excitement and pleasure of driving all over the world,” a message displayed inside the app reads.

Users are invited to upload footage of their BlackVue camera spotting people crashing into their cars or other mishaps with the #CaughtOnBlackVue hashtag. It’s kind of like Amazon’s Ring cameras, but for cars. BlackVue exhibited at CES earlier this month, and was previously featured on Innovations with Ed Begley Jr. on the History Channel.

But what BlackVue’s app doesn’t make clear is that it is possible to pull and store users’ GPS locations in real-time over days or even weeks. Motherboard was able to track the movements of some of BlackVue’s customers in the United States.

The news highlights privacy issues that some BlackVue customers or other dashcam users may not be aware of, and more generally the potential dangers of adding an internet- and GPS-enabled device to your vehicle. It also shows how developers may have one use case in mind for an app while people discover others: although BlackVue wanted to create an entertaining app where users could tap into each other’s feeds, the company may not have realized that it would be trivially easy to track its customers’ movements in granular detail, at scale, and over time.

BlackVue is another example of how surveillance products nominally intended to protect a user can be designed in such a way that the user ends up being spied on, too.

“I don’t think people understand the risk,” Lee Heath, an information security professional and BlackVue user told Motherboard. “I knew about some of the cloud features which I wanted. You can have it automatically connect and upload when events happen. But I had no idea about the sharing” before receiving the device as a gift, he added.

Ordinarily, BlackVue lets anyone create an account and then view a map of cameras that are broadcasting their location and live feed. This broadcasting is not enabled by default, and users have to select the option to do so when setting up or configuring their own camera. Motherboard tuned into live feeds from users in Hong Kong, China, Russia, the U.K., Germany, and elsewhere. BlackVue spokesperson Jeremie Sinic told Motherboard in an email that the users on the map represent only a tiny fraction of BlackVue’s overall customers.

But the actual GPS data that drives the map is available and publicly accessible.

A screenshot of the location data of one BlackVue user that Motherboard tracked throughout New York. Motherboard has heavily obfuscated the data to protect the individual’s privacy. Image: Motherboard

By reverse engineering the iOS version of the BlackVue app, Motherboard was able to write scripts that pulled the GPS locations of BlackVue users over a week-long period and stored the coordinates along with other information, such as the user’s unique identifier. One script could collect the location data of every BlackVue user who had mapping enabled on the eastern half of the United States every two minutes. Motherboard collected data on dozens of customers.
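
Motherboard did not publish its scripts, but the general shape of such a poller is straightforward. Here is a minimal sketch assuming a hypothetical unauthenticated JSON endpoint; the real BlackVue API paths, parameters, and response fields are not public, so everything below is invented for illustration:

```python
# Hypothetical sketch of the kind of polling scraper Motherboard describes.
# The endpoint and the response shape are invented, not the real BlackVue API.
import csv
import time

import requests

API_URL = "https://example.invalid/live/map"  # placeholder, not a real API
BBOX = {"lat_min": 24.0, "lat_max": 50.0,     # roughly the eastern US
        "lon_min": -90.0, "lon_max": -66.0}

with open("pings.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        resp = requests.get(API_URL, params=BBOX, timeout=30)
        for user in resp.json().get("users", []):
            # one row per broadcasting user per poll is enough to
            # reconstruct daily routines over days or weeks
            writer.writerow([int(time.time()), user["id"],
                             user["lat"], user["lon"]])
        f.flush()
        time.sleep(120)  # poll every two minutes, as in Motherboard's script
```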

With that data, we were able to build a picture of several BlackVue users’ daily routines: one drove around Manhattan during the day, perhaps as a rideshare driver, before then leaving for Queens in the evening. Another BlackVue user regularly drove around Brooklyn, before parking on a specific block in Queens overnight. The user did this for several different nights, suggesting this may be where the owner lives or stores their vehicle. A third showed someone driving a truck all over South Carolina.

Some customers may use BlackVue as part of a fleet of vehicles; an employer wanting to keep tabs on their delivery trucks as they drive around, for instance. But BlackVue also markets its products to ordinary consumers who want to protect their cars.

A screenshot of Motherboard accessing someone’s public live feed as the user is driving in public away from their apparent home. Motherboard has redacted the user information to protect individual privacy. Image: Motherboard

BlackVue’s Sinic said that collecting GPS coordinates of multiple users over an extended period of time is not supposed to be possible.

“Our developers have updated the security measures following your report from yesterday that I forwarded,” Sinic said. After this, several of Motherboard’s web requests that previously provided user data stopped working.

In 2018 the company did make some privacy-related changes to its app, meaning users were not broadcasting their camera feeds by default.

“I think BlackVue has decent ideas as far as leaving off by default but allows people to put themselves at risk without understanding,” Heath, the BlackVue user, said.

Motherboard has deleted all of the data collected to preserve individuals’ privacy.

Source: This App Lets Us See Everywhere People Drive – VICE

Skype and Cortana audio listened in on by workers in China with ‘no security measures’

A Microsoft programme to transcribe and vet audio from Skype and Cortana, its voice assistant, ran for years with “no security measures”, according to a former contractor who says he reviewed thousands of potentially sensitive recordings on his personal laptop from his home in Beijing over the two years he worked for the company.

The recordings, covering both deliberate and accidentally invoked activations of the voice assistant as well as some Skype phone calls, were accessed by Microsoft workers simply through a web app running in Google’s Chrome browser, on their personal laptops, over the Chinese internet, according to the contractor.

Workers had no cybersecurity help to protect the data from criminal or state interference, and were even instructed to do the work using new Microsoft accounts all with the same password, for ease of management, the former contractor said. Employee vetting was practically nonexistent, he added.

“There were no security measures, I don’t even remember them doing proper KYC [know your customer] on me. I think they just took my Chinese bank account details,” he told the Guardian.

While the grader began by working in an office, he said the contractor that employed him “after a while allowed me to do it from home in Beijing. I judged British English (because I’m British), so I listened to people who had their Microsoft device set to British English, and I had access to all of this from my home laptop with a simple username and password login.” Both username and password were emailed to new contractors in plaintext, he said, with the former following a simple schema and the latter being the same for every employee who joined in any given year.

“They just give me a login over email and I will then have access to Cortana recordings. I could then hypothetically share this login with anyone,” the contractor said. “I heard all kinds of unusual conversations, including what could have been domestic violence. It sounds a bit crazy now, after educating myself on computer security, that they gave me the URL, a username and password sent over email.”

As well as the risks of a rogue employee saving user data themselves or accessing voice recordings on a compromised laptop, Microsoft’s decision to outsource some of the work vetting English recordings to companies based in Beijing raises the additional prospect of the Chinese state gaining access to recordings. “Living in China, working in China, you’re already compromised with nearly everything,” the contractor said. “I never really thought about it.”

Source: Skype audio graded by workers in China with ‘no security measures’ | Technology | The Guardian

CheckPeople, why is a 22GB database containing 56 million US folks’ aggregated personal details sitting on the open internet using a Chinese IP address?

A database containing the personal details of 56.25m US residents – from names and home addresses to phone numbers and ages – has been found on the public internet, served from a computer with a Chinese IP address, bizarrely enough.

The information silo appears to belong to Florida-based CheckPeople.com, which is a typical people-finder website: for a fee, you can enter someone’s name, and it will look up their current and past addresses, phone numbers, email addresses, names of relatives, and even criminal records in some cases, all presumably gathered from public records.

However, all of this information is not only sitting in one place for spammers, miscreants, and other netizens to download in bulk, but it’s being served from an IP address associated with Alibaba’s web hosting wing in Hangzhou, east China, for reasons unknown. It’s a perfect illustration that not only is this sort of personal information in circulation, it could also easily end up in the hands of foreign adversaries.

It just goes to show how haphazardly people’s privacy is treated these days.

A white-hat hacker operating under the handle Lynx discovered the trove online, and tipped off The Register. He told us he found the 22GB database exposed on the internet, including metadata that links the collection to CheckPeople.com. We have withheld further details of the security blunder for privacy protection reasons.

The repository’s contents are likely scraped from public records, though together they provide rather detailed profiles on tens of millions of folks in America. Basically, CheckPeople.com has done the hard work of aggregating public personal records, and this exposed NoSQL database makes that info even easier to crawl and process.
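
The Register doesn’t say which NoSQL engine was exposed, only that it sat unauthenticated on the open internet. As a purely hypothetical sketch of why that matters: if the store were, say, a credential-free MongoDB instance, enumerating and sampling it takes only a few lines (the host address and everything else below are invented):

```python
# Hypothetical sketch: walking an exposed, credential-free MongoDB server.
# The IP is a documentation placeholder; no real host is implied.
from pymongo import MongoClient

client = MongoClient("mongodb://203.0.113.7:27017/")  # no credentials needed

for db_name in client.list_database_names():
    db = client[db_name]
    for coll_name in db.list_collection_names():
        coll = db[coll_name]
        print(db_name, coll_name, coll.estimated_document_count())
        for doc in coll.find().limit(3):  # sample a few records per collection
            print(doc)
```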

Source: Why is a 22GB database containing 56 million US folks’ personal details sitting on the open internet using a Chinese IP address? Seriously, why? • The Register

Lawsuit against cinema for refusing cash – and thus slurping private data

Michiel Jonker from Arnhem has sued a cinema that has moved location and since then refuses to accept cash at the register. All payments have to be made by debit card (“pin”). Jonker argues that this forces visitors to allow the cinema to process their personal data.

He tried something similar in 2018, but that complaint was turned down after the Dutch data protection authority decided that no one is required to accept cash as legal tender.

Jonker now argues that acceptance of cash should be required if the payment data can be used to profile his movie preferences afterwards.

Good luck to him. I agree that cash is legal tender, and the move to a cash-free society is a privacy nightmare and potentially disastrous – see Hong Kong, for example.

Source: Rechtszaak tegen weigering van contant geld door bioscoop – Emerce

Amazon fired four workers who secretly snooped on Ring doorbell camera footage

Amazon’s Ring home security camera biz says it has fired multiple employees caught covertly watching video feeds from customer devices.

The admission came in a letter [PDF] sent in response to questions raised by US Senators critical of Ring’s privacy practices.

Ring recounted how, on four separate occasions, workers were let go for overstepping their access privileges and poring over customer video files and other data inappropriately.

“Over the last four years, Ring has received four complaints or inquiries regarding a team member’s access to Ring video data,” the gizmo flinger wrote.

“Although each of the individuals involved in these incidents was authorized to view video data, the attempted access to that data exceeded what was necessary for their job functions.

“In each instance, once Ring was made aware of the alleged conduct, Ring promptly investigated the incident, and after determining that the individual violated company policy, terminated the individual.”

This comes as Amazon attempts to justify its internal policies, particularly employee access to user video information for support and research-and-development purposes.

Source: Ring of fired: Amazon axes four workers who secretly snooped on netizens’ surveillance camera footage • The Register

DHS Plan to Collect DNA From Migrant Detainees Will Begin Soon – because centralised databases with personally sensitive data in them are a great idea. Just ask the Jews how useful those registries proved during WWII

The Trump administration’s plan to collect DNA evidence from migrants detained in U.S. Customs and Borders Protection (CBP) and Immigration and Customs Enforcement (ICE) facilities will commence soon in the form of a 90-day pilot program in Detroit and Southwest Texas, CNN reported on Monday.

News of the plan first emerged in October, when the Department of Homeland Security told reporters that it wanted to collect DNA from migrants to detect “fraudulent family units,” including refugees applying for asylum at U.S. ports of entry. ICE started using DNA tests to screen asylum seekers at the border last year over similar concerns, claiming that the tests were designed to fight human traffickers. The tests will apply to those detained both temporarily and for longer periods of time, covering nearly all people held by immigration officials.

DHS announced the pilot program in a privacy assessment posted to its website on Monday. Per CNN, the pilot is a legal necessity before the agency revokes rules enacted in 2010 that exempt “migrants who weren’t facing criminal charges or those pending deportation proceedings” from the DNA Fingerprint Act of 2005, after which the program will apply nationally. The pilot will involve U.S. Border Patrol agents collecting DNA from individuals aged 14-79 who are arrested and processed, as well as customs officers collecting DNA from individuals subject to continued detention or further proceedings.

According to the privacy assessment, U.S. citizens and permanent residents “who are being arrested or facing criminal charges” may have DNA collected by CBP or ICE personnel. All collected DNA will be sent to the FBI and stored in its Combined DNA Index System (CODIS), a set of national genetic information databases that includes forensic data, missing persons, and convicts, where it would be held for as long as the government sees fit.

Those who refuse to submit to DNA testing could face class A misdemeanor charges, the DHS wrote.

DHS acknowledged that because it has to mail the DNA samples to the FBI for processing and comparison against CODIS entries, it is unlikely that agents will be able to use the DNA for “public safety or investigative purposes prior to either an individual’s removal to his or her home country, release into the interior of the United States, or transfer to another federal agency.” ACLU attorney Stephen Kang told the New York Times that DHS appeared to be creating a “DNA bank of immigrants that have come through custody for no clear reason,” raising “a lot of very serious, practical concerns, I think, and real questions about coercion.”

The Times noted that last year, Border Patrol law enforcement directorate chief Brian Hastings wrote that even after policies and procedures were implemented, Border Patrol agents remained “not currently trained on DNA collection measures, health and safety precautions, or the appropriate handling of DNA samples for processing.”

U.S. immigration authorities held a record number of children over the fiscal year that ended in September 2019, with some 76,020 minors without their parents present detained. According to ICE, over 41,000 people were in DHS custody at the end of 2019 (in mid-2019, the number shot to over 55,000).

“That kind of mass collection alters the purpose of DNA collection from one of criminal investigation basically to population surveillance, which is basically contrary to our basic notions of a free, trusting, autonomous society,” ACLU Speech, Privacy, and Technology Project staff attorney Vera Eidelman told the Times last year.

Source: DHS Plan to Collect DNA From Migrant Detainees Will Begin Soon

Twelve Million Phones, One Dataset (no, not your phone companies’), Zero Privacy – The New York Times

Every minute of every day, everywhere on the planet, dozens of companies — largely unregulated, little scrutinized — are logging the movements of tens of millions of people with mobile phones and storing the information in gigantic data files. The Times Privacy Project obtained one such file, by far the largest and most sensitive ever to be reviewed by journalists. It holds more than 50 billion location pings from the phones of more than 12 million Americans as they moved through several major cities, including Washington, New York, San Francisco and Los Angeles.

Each piece of information in this file represents the precise location of a single smartphone over a period of several months in 2016 and 2017. The data was provided to Times Opinion by sources who asked to remain anonymous because they were not authorized to share it and could face severe penalties for doing so. The sources of the information said they had grown alarmed about how it might be abused and urgently wanted to inform the public and lawmakers.

After spending months sifting through the data, tracking the movements of people across the country and speaking with dozens of data companies, technologists, lawyers and academics who study this field, we feel the same sense of alarm. In the cities that the data file covers, it tracks people from nearly every neighborhood and block, whether they live in mobile homes in Alexandria, Va., or luxury towers in Manhattan.

One search turned up more than a dozen people visiting the Playboy Mansion, some overnight. Without much effort we spotted visitors to the estates of Johnny Depp, Tiger Woods and Arnold Schwarzenegger, connecting the devices’ owners to the residences indefinitely.

If you lived in one of the cities the dataset covers and use apps that share your location — anything from weather apps to local news apps to coupon savers — you could be in there, too.

If you could see the full trove, you might never use your phone the same way again.

[Graphic: A typical day at Grand Central Terminal in New York City. Satellite imagery: Microsoft]

The data reviewed by Times Opinion didn’t come from a telecom or giant tech company, nor did it come from a governmental surveillance operation. It originated from a location data company, one of dozens quietly collecting precise movements using software slipped onto mobile phone apps. You’ve probably never heard of most of the companies — and yet to anyone who has access to this data, your life is an open book. They can see the places you go every moment of the day, whom you meet with or spend the night with, where you pray, whether you visit a methadone clinic, a psychiatrist’s office or a massage parlor.

[…]

The companies that collect all this information on your movements justify their business on the basis of three claims: People consent to be tracked, the data is anonymous and the data is secure.

None of those claims hold up, based on the file we’ve obtained and our review of company practices.

Yes, the location data contains billions of data points with no identifiable information like names or email addresses. But it’s child’s play to connect real names to the dots that appear on the maps.

[…]

In most cases, ascertaining a home location and an office location was enough to identify a person. Consider your daily commute: Would any other smartphone travel directly between your house and your office every day?
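
A minimal sketch of that re-identification step, assuming nothing more than timestamped pings keyed by an “anonymous” device ID: treat the most-visited overnight location as home and the most-visited business-hours location as work, and the (home, work) pair becomes a near-unique fingerprint that public records can put a name to. The coordinate rounding and hour cutoffs below are simplistic assumptions, not the Times’ method:

```python
# Sketch: inferring home/work pairs from "anonymous" location pings.
# Input rows: (device_id, unix_timestamp, lat, lon). Rounding coordinates
# to ~3 decimal places (~100 m cells) is an assumed, crude clustering step.
from collections import Counter, defaultdict
from datetime import datetime

def infer_home_work(pings):
    night = defaultdict(Counter)  # device -> overnight cell counts
    day = defaultdict(Counter)    # device -> business-hours cell counts
    for device_id, ts, lat, lon in pings:
        cell = (round(lat, 3), round(lon, 3))
        hour = datetime.fromtimestamp(ts).hour
        if hour >= 22 or hour < 6:
            night[device_id][cell] += 1
        elif 9 <= hour < 17:
            day[device_id][cell] += 1
    # modal overnight cell ~ home, modal business-hours cell ~ work;
    # together the pair is close to unique per person
    return {d: (night[d].most_common(1)[0][0], day[d].most_common(1)[0][0])
            for d in night if d in day}
```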

Describing location data as anonymous is “a completely false claim” that has been debunked in multiple studies, Paul Ohm, a law professor and privacy researcher at the Georgetown University Law Center, told us. “Really precise, longitudinal geolocation information is absolutely impossible to anonymize.”

“D.N.A.,” he added, “is probably the only thing that’s harder to anonymize than precise geolocation information.”

Yet companies continue to claim that the data are anonymous. In marketing materials and at trade conferences, anonymity is a major selling point — key to allaying concerns over such invasive monitoring.

To evaluate the companies’ claims, we turned most of our attention to identifying people in positions of power. With the help of publicly available information, like home addresses, we easily identified and then tracked scores of notables. We followed military officials with security clearances as they drove home at night. We tracked law enforcement officers as they took their kids to school. We watched high-powered lawyers (and their guests) as they traveled from private jets to vacation properties. We did not name any of the people we identified without their permission.

The data set is large enough that it surely points to scandal and crime, but our purpose wasn’t to dig up dirt. We wanted to document the risk of underregulated surveillance.

Watching dots move across a map sometimes revealed hints of faltering marriages, evidence of drug addiction, records of visits to psychological facilities.

Connecting a sanitized ping to an actual human in time and place could feel like reading someone else’s diary.

[…]

The inauguration weekend yielded a trove of personal stories and experiences: elite attendees at presidential ceremonies, religious observers at church services, supporters assembling across the National Mall — all surveilled and recorded permanently in rigorous detail.

Protesters were tracked just as rigorously. After the pings of Trump supporters, basking in victory, vanished from the National Mall on Friday evening, they were replaced hours later by those of participants in the Women’s March, as a crowd of nearly half a million descended on the capital. Examining just a photo from the event, you might be hard-pressed to tie a face to a name. But in our data, pings at the protest connected to clear trails through the data, documenting the lives of protesters in the months before and after the protest, including where they lived and worked.

[…]

Inauguration Day weekend was marked by other protests — and riots. Hundreds of protesters, some in black hoods and masks, gathered north of the National Mall that Friday, eventually setting fire to a limousine near Franklin Square. The data documented those rioters, too. Filtering the data to that precise time and location led us to the doorsteps of some who were there. Police were present as well, many with faces obscured by riot gear. The data led us to the homes of at least two police officers who had been at the scene.

As revealing as our searches of Washington were, we were relying on just one slice of data, sourced from one company, focused on one city, covering less than one year. Location data companies collect orders of magnitude more information every day than the totality of what Times Opinion received.

Data firms also typically draw on other sources of information that we didn’t use. We lacked the mobile advertising IDs or other identifiers that advertisers often combine with demographic information like home ZIP codes, age, gender, even phone numbers and emails to create detailed audience profiles used in targeted advertising. When datasets are combined, privacy risks can be amplified. Whatever protections existed in the location dataset can crumble with the addition of only one or two other sources.

There are dozens of companies profiting off such data daily across the world — by collecting it directly from smartphones, creating new technology to better capture the data or creating audience profiles for targeted advertising.

The full collection of companies can feel dizzying, as it’s constantly changing and seems impossible to pin down. Many use technical and nuanced language that may be confusing to average smartphone users.

While many of them have been involved in the business of tracking us for years, the companies themselves are unfamiliar to most Americans. (Companies can work with data derived from GPS sensors, Bluetooth beacons and other sources. Not all companies in the location data business collect, buy, sell or work with granular location data.)

[Graphic: A selection of companies working in the location data business. Sources: MightySignal, LUMA Partners and AppFigures.]

Location data companies generally downplay the risks of collecting such revealing information at scale. Many also say they’re not very concerned about potential regulation or software updates that could make it more difficult to collect location data.

[…]

Does it really matter that your information isn’t actually anonymous? Location data companies argue that your data is safe — that it poses no real risk because it’s stored on guarded servers. This assurance has been undermined by the parade of publicly reported data breaches — to say nothing of breaches that don’t make headlines. In truth, sensitive information can be easily transferred or leaked, as evidenced by this very story.

We’re constantly shedding data, for example, by surfing the internet or making credit card purchases. But location data is different. Our precise locations are used fleetingly in the moment for a targeted ad or notification, but then repurposed indefinitely for much more profitable ends, like tying your purchases to billboard ads you drove past on the freeway. Many apps that use your location, like weather services, work perfectly well without your precise location — but collecting your location feeds a lucrative secondary business of analyzing, licensing and transferring that information to third parties.

[Data sample: date, latitude and longitude fields – simple information that is easy to inspect, download and transfer. Values randomized to protect sources and device owners.]

For many Americans, the only real risk they face from having their information exposed would be embarrassment or inconvenience. But for others, like survivors of abuse, the risks could be substantial. And who can say what practices or relationships any given individual might want to keep private, to withhold from friends, family, employers or the government? We found hundreds of pings in mosques and churches, abortion clinics, queer spaces and other sensitive areas.

In one case, we observed a change in the regular movements of a Microsoft engineer. He made a visit one Tuesday afternoon to the main Seattle campus of a Microsoft competitor, Amazon. The following month, he started a new job at Amazon. It took minutes to identify him as Ben Broili, a manager now for Amazon Prime Air, a drone delivery service.

“I can’t say I’m surprised,” Mr. Broili told us in early December. “But knowing that you all can get ahold of it and comb through and place me to see where I work and live — that’s weird.” That we could so easily discern that Mr. Broili was out on a job interview raises some obvious questions, like: Could the internal location surveillance of executives and employees become standard corporate practice?

[…]

If this kind of location data makes it easy to keep tabs on employees, it makes it just as simple to stalk celebrities. Their private conduct — even in the dead of night, in residences and far from paparazzi — could come under even closer scrutiny.

Reporters hoping to evade other forms of surveillance by meeting in person with a source might want to rethink that practice. Every major newsroom covered by the data contained dozens of pings; we easily traced one Washington Post journalist through Arlington, Va.

In other cases, there were detours to hotels and late-night visits to the homes of prominent people. One person, plucked from the data in Los Angeles nearly at random, was found traveling to and from roadside motels multiple times, for visits of only a few hours each time.

While these pointillist pings don’t in themselves reveal a complete picture, a lot can be gleaned by examining the date, time and length of time at each point.

Large data companies like Foursquare — perhaps the most familiar name in the location data business — say they don’t sell detailed location data like the kind reviewed for this story but rather use it to inform analysis, such as measuring whether you entered a store after seeing an ad on your mobile phone.

But a number of companies do sell the detailed data. Buyers are typically data brokers and advertising companies. But some of them have little to do with consumer advertising, including financial institutions, geospatial analysis companies and real estate investment firms that can process and analyze such large quantities of information. They might pay more than $1 million for a tranche of data, according to a former location data company employee who agreed to speak anonymously.

Location data is also collected and shared alongside a mobile advertising ID, a supposedly anonymous identifier about 30 digits long that allows advertisers and other businesses to tie activity together across apps. The ID is also used to combine location trails with other information like your name, home address, email, phone number or even an identifier tied to your Wi-Fi network.
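
Once the same advertising ID shows up in two datasets, the linking step is an ordinary key lookup. A toy sketch, with every identifier and value invented:

```python
# Toy sketch: one shared column (the mobile advertising ID) ties an
# "anonymous" location trail to a profile holding name, address and email.
# All identifiers and values here are invented.
pings = [  # (ad_id, unix_timestamp, lat, lon) from a location data broker
    ("cdda802e-fb9c-47ad-9866-0794d394c912", 1484991000, 38.8977, -77.0365),
    ("cdda802e-fb9c-47ad-9866-0794d394c912", 1484994600, 38.9072, -77.0369),
]
profiles = {  # keyed by the same ad ID, from a marketing data broker
    "cdda802e-fb9c-47ad-9866-0794d394c912": {
        "name": "Jane Example", "home_zip": "22314",
        "email": "jane@example.com",
    },
}

for ad_id, ts, lat, lon in pings:
    who = profiles.get(ad_id)
    if who:  # the "anonymous" trail now has a name attached
        print(who["name"], ts, lat, lon)
```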

The data can change hands in almost real time, so fast that your location could be transferred from your smartphone to the app’s servers and exported to third parties in milliseconds. This is how, for example, you might see an ad for a new car some time after walking through a dealership.

That data can then be resold, copied, pirated and abused. There’s no way you can ever retrieve it.

Location data is about far more than consumers seeing a few more relevant ads. This information provides critical intelligence for big businesses. The Weather Channel app’s parent company, for example, analyzed users’ location data for hedge funds, according to a lawsuit filed in Los Angeles this year that was triggered by Times reporting. And Foursquare received much attention in 2016 after using its data trove to predict that after an E. coli crisis, Chipotle’s sales would drop by 30 percent in the coming months. Its same-store sales ultimately fell 29.7 percent.

Much of the concern over location data has focused on telecom giants like Verizon and AT&T, which have been selling location data to third parties for years. Last year, Motherboard, Vice’s technology website, found that once the data was sold, it was being shared to help bounty hunters find specific cellphones in real time. The resulting scandal forced the telecom giants to pledge they would stop selling location movements to data brokers.

Yet no law prohibits them from doing so.

[…]

If this information is so sensitive, why is it collected in the first place?

For brands, following someone’s precise movements is key to understanding the “customer journey” — every step of the process from seeing an ad to buying a product. It’s the Holy Grail of advertising, one marketer said, the complete picture that connects all of our interests and online activity with our real-world actions.

Once they have the complete customer journey, companies know a lot about what we want, what we buy and what made us buy it. Other groups have begun to find ways to use it too. Political campaigns could analyze the interests and demographics of rally attendees and use that information to shape their messages to try to manipulate particular groups. Governments around the world could have a new tool to identify protestors.

Pointillist location data also has some clear benefits to society. Researchers can use the raw data to provide key insights for transportation studies and government planners. The City Council of Portland, Ore., unanimously approved a deal to study traffic and transit by monitoring millions of cellphones. Unicef announced a plan to use aggregated mobile location data to study epidemics, natural disasters and demographics.

For individual consumers, the value of constant tracking is less tangible. And the lack of transparency from the advertising and tech industries raises still more concerns.

Does a coupon app need to sell second-by-second location data to other companies to be profitable? Does that really justify allowing companies to track millions and potentially expose our private lives?

Data companies say users consent to tracking when they agree to share their location. But those consent screens rarely make clear how the data is being packaged and sold. If companies were clearer about what they were doing with the data, would anyone agree to share it?

What about data collected years ago, before hacks and leaks made privacy a forefront issue? Should it still be used, or should it be deleted for good?

If it’s possible that data stored securely today can easily be hacked, leaked or stolen, is this kind of data worth that risk?

Is all of this surveillance and risk worth it merely so that we can be served slightly more relevant ads? Or so that hedge fund managers can get richer?

The companies profiting from our every move can’t be expected to voluntarily limit their practices. Congress has to step in to protect Americans’ needs as consumers and rights as citizens.

Until then, one thing is certain: We are living in the world’s most advanced surveillance system. This system wasn’t created deliberately. It was built through the interplay of technological advance and the profit motive. It was built to make money. The greatest trick technology companies ever played was persuading society to surveil itself.

Source: Opinion | Twelve Million Phones, One Dataset, Zero Privacy – The New York Times

Private equity buys Lastpass owner LogMeIn – will they start monetising your logins?

Remote access, collaboration and password manager provider LogMeIn has been sold to a private equity outfit for $4.3bn.

A consortium led by private equity firm Francisco Partners (along with Evergreen, the PE arm of tech activist investor Elliott Management), will pay $86.05 in cash for each LogMeIn share – a 25 per cent premium on prices before talk about the takeover surfaced in September.

LogMeIn’s board of directors is in favour of the buy. Chief executive Bill Wagner said the deal recognised the value of the firm and would provide for “both our core and growth assets”.

The sale should close in mid-2020, subject to the usual shareholder and regulatory hurdles. LogMeIn also has 45 days to consider alternative offers.

In 2018 LogMeIn made revenues of $1.2bn and profits of $446m.

The company runs a bunch of subsidiaries which offer collaboration software and web meetings products, virtual telephony services, remote technical support, and customer service bots as well as several identity and password manager products.

LogMeIn bought LastPass, which now claims 18.6 million users, for $110m in 2015. That purchase raised concerns about exactly how LastPass’s new owner would exploit the user data it held, and today’s news is unlikely to allay any of those fears.

The next year, LogMeIn merged with Citrix’s GoTo business, a year after that unit was spun off.

Source: Log us out: Private equity snaffles Lastpass owner LogMeIn • The Register