Internet Society told to halt .org sale to dodgy companies… by its own advisory council

The Internet Society’s own members are now opposing its sale of the .org internet registry to an unknown private equity firm.

The Chapters Advisory Council, the official voice of Internet Society (ISOC) members, will vote this month on whether to approve a formal recommendation that the society “not proceed [with the sale] unless a number of conditions are met.”

Those conditions largely comprise the publication of additional details and transparency regarding ISOC’s controversial sell-off of .org. Despite months of requests, neither the society nor the proposed purchaser, Ethos Capital, has disclosed critical elements of the deal, including who would actually own the registry if the sale went through.

Meanwhile, word has reached us that Ethos Capital attempted to broker a secret peace treaty by inviting key individuals to a closed-door meeting this coming weekend in Washington DC, with the goal of thrashing out an agreement all sides would be happy with. After Ethos insisted the meeting be kept brief, and a number of those opposed to the sale declined to attend, Ethos’s funding for attendees’ flights and accommodation was suddenly withdrawn, and the plan to hold a confab fell apart, we understand.

ISOC – and .org’s current operator, the ISOC-controlled Public Interest Registry (PIR) – are still hoping to push DNS overseer ICANN to make a decision on the .org sale before the end of the month. But that looks increasingly unlikely following an aggressive letter from ICANN’s external lawyers last week insisting ICANN will take as much time as it feels necessary to review the deal.

The overall lack of transparency around the $1.13bn deal has led California’s Attorney General to demand documents relating to the sale – and ISOC’s chapters are demanding the same information as a pre-condition to any sale in their proposed advice to the ISOC board.

That information includes: full details of the transaction; a financial breakdown of what Ethos Capital intends to do with .org’s 10 million internet addresses; binding commitments on limiting price increases and free speech protections; and publication of the bylaws and related corporate documents for both the replacement to the current registry operator, PIR, and the proposed “Stewardship Council” which Ethos claims will give .org users a say in future decisions.

Disregarded

“There is a feeling amongst chapters that ISOC seems to have disregarded community participation, failed to properly account for the potential community impact, and misread the community mindset around the .ORG TLD,” the Chapters Advisory Council’s proposed advice to the ISOC board – a copy of which The Register has seen – states.

Although the advisory council has no legal ability to stop ISOC, if the proposed advice is approved by vote, and the CEO and board of trustees push ahead with the sale regardless, it could have severe repercussions for the organization’s non-profit status, and would further undermine ISOC’s position that the sale will “support the Internet Society’s vision that the Internet is for everyone.”

[…]

That lack of transparency was never clearer than when the ISOC board claimed to have met for two weeks in November to discuss the Ethos Capital offer to buy .org, yet made no public mention of the proposal and only made ISOC members and chapters aware of the decision after it had been made.

With a spotlight on ISOC’s secretive deliberations – and with board members now claiming they are subject to a non-disclosure agreement over the sale – the organization has added skeleton minutes that provide little or no insight into deliberations. It is not clear when those minutes were added – no update date is provided.

“The primary purpose of the Chapters Advisory Council shall be to channel and facilitate advice and recommendations to and from the President and Board of Trustees of the Internet Society in a bottom up manner, on any matters of concern or interest to the Chapter AC and ISOC Chapters,” reads the official description of the council on ISOC’s website.

With Ethos having failed to broker a secret deal, and ICANN indicating that it will consider the public interest in deciding whether to approve the sale, if ISOC’s advisory council does vote to advise the board not to move forward with the sale, the Internet Society will face a stark choice: stick by the secretive billionaires funding the purchase of .org with the added risk of blowing up the entire organization; or walk away from the deal.

Source: Revolution, comrades: Internet Society told to halt .org sale… by its own advisory council • The Register

Google allows random company to DMCA sites with the word ‘Did’ in them, de-indexes (deletes) them without warning or recourse.

In 2018, Sinclair Target wrote an article about Ada Lovelace, the daughter of Lord Byron whom some credit as being the world’s first computer programmer, despite being born in 1815. Unfortunately, however, those who search for that article today using Google won’t find it.

As the image below shows, the original Tweet announcing the article is still present in Google’s indexes but the article itself has been removed, thanks to a copyright infringement complaint that also claimed several other victims.

While there could be dozens of reasons the article infringed someone’s copyrights, the facts are so absurd as to be almost unbelievable. Sinclair’s article was deleted because an anti-piracy company working on behalf of a TV company decided that since its title (What Did Ada Lovelace’s Program Actually Do?) contained the word ‘DID’, it must be illegal.

This monumental screw-up was announced on Twitter by Sinclair himself, who complained that “Computers are stupid folks. Too bad Google has decided they are in charge.”

At risk of running counter to Sinclair’s claim, in this case – as Lovelace herself would’ve hopefully agreed – it is people who are stupid, not computers. The proof for that can be found in the DMCA complaint sent to Google by RightsHero, an anti-piracy company working on behalf of Zee TV, an Indian pay-TV channel that airs Dance India Dance.

Now in its seventh season, Dance India Dance is a dance competition reality show that is often referred to as DID. And now, of course, you can see where this is going. Because Target and at least 11 other sites dared to use the word in its original context, RightsHero flagged the pages as infringing and asked Google to deindex them.

But things only get worse from here.

Look up the word ‘did’ in any dictionary and you will never find the definition listed as an acronym for Dance India Dance. Instead, you’ll find the explanation as “past of do” or something broadly along those lines. However, if the complaint sent to Google had achieved its intended effect, finding out that would’ve been more difficult too.
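To illustrate how a filter this careless produces such false positives, here is a minimal sketch. It is purely hypothetical — RightsHero’s actual tooling and keyword lists are unknown — but it shows how flagging any page title containing the bare word “did” sweeps up dictionary entries and history articles alongside actual pirate streams, and how even a slightly stricter whole-phrase match avoids the problem:

```python
import re

# Hypothetical keyword list: "did" is the show's acronym taken as a bare keyword.
SHOW_KEYWORDS = ["dance india dance", "did"]

def looks_infringing(page_title: str) -> bool:
    """Naive check: flag any page whose title contains a keyword as a substring."""
    title = page_title.lower()
    return any(keyword in title for keyword in SHOW_KEYWORDS)

def looks_infringing_stricter(page_title: str) -> bool:
    """Slightly saner check: require the full show name as a whole phrase."""
    return re.search(r"\bdance india dance\b", page_title.lower()) is not None

titles = [
    "What Did Ada Lovelace's Program Actually Do?",   # Sinclair Target's article
    "did - definition and meaning",                   # a dictionary page
    "Dance India Dance Season 7 Episode 3 full HD",   # an actual pirate stream
]

for title in titles:
    print(title, "->", looks_infringing(title))
```

The naive matcher flags all three titles; the stricter one flags only the last. That one-line difference is roughly the gap between a defensible takedown notice and the one RightsHero sent.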

Lo, here it is in its full glory.

As we can see, the notice not only claims Target’s article is infringing the copyrights of Dance India Dance (sorry, DID), but also no fewer than four online dictionaries explaining what the word ‘did’ actually means. (Spoiler: None say ‘Dance India Dance’).

Perhaps worse still, some of the other allegedly infringing articles were published by some pretty serious information resources including:

– USGS Earthquake Hazards Program of the U.S. Geological Survey (Did You Feel It? (DYFI) collects information from people who felt an earthquake and creates maps that show what people experienced and the extent of damage)

– The US Department of Education (Did (or will) you file a Schedule 1 with your 2018 tax return?)

– Nature.com (Did pangolins spread the China coronavirus to people?)

Considering the scale of the problem here, we tried to contact RightsHero for comment. However, the only anti-piracy company bearing that name has a next-to-useless website that provides no information on where the company is, who owns it, who runs it, or how those people can be contacted.

In the absence of any action by RightsHero, Sinclair Target was left with a single option – issue a counterclaim to Google in the hope of having his page restored.

“I’ve submitted a counter-claim, which seemed to be the only thing I could do,” Target told TorrentFreak.

“Got a cheery confirmation email from Google saying, ‘Thanks for contacting us!’ and that it might be a while until the issue is resolved. I assume that’s because this is the point where finally a decision has to be made by a human being. It is annoying indeed.”

Finally, it’s interesting to take a line from Target’s analysis of Lovelace’s program. “She thought carefully about how operations could be organized into groups that could be repeated, thereby inventing the loop,” he writes.

10 DELETE “DID”
20 PROFIT?
30 GOTO 10

Source: Don’t Use the Word ‘Did’ or a Dumb Anti-Piracy Company Will Delete You From Google – TorrentFreak

How Big Companies Spy on Your Emails

The popular Edison email app, which is in the top 100 productivity apps on the Apple app store, scrapes users’ email inboxes and sells products based on that information to clients in the finance, travel, and e-commerce sectors. The contents of Edison users’ inboxes are of particular interest to companies who can buy the data to make better investment decisions, according to a J.P. Morgan document obtained by Motherboard.

On its website Edison says that it does “process” users’ emails, but some users did not know that, by using the Edison app, they were letting the company scrape their inboxes for profit. Motherboard has also obtained documentation that provides more specifics about how two other popular apps—Cleanfox and Slice—sell products based on users’ emails to corporate clients.

Source: How Big Companies Spy on Your Emails – VICE

The advertising industry is systematically breaking the law says Norwegian consumer council

Based on the findings, more than 20 consumer and civil society organisations in Europe and from different parts of the world are urging their authorities to investigate the practices of the online advertising industry.

The report uncovers how every time we use apps, hundreds of shadowy entities are receiving personal data about our interests, habits, and behaviour. This information is used to profile consumers, which can be used for targeted advertising, but may also lead to discrimination, manipulation and exploitation.

– These practices are out of control and in breach of European data protection legislation. The extent of tracking makes it impossible for us to make informed choices about how our personal data is collected, shared and used, says Finn Myrstad, director of digital policy in the Norwegian Consumer Council.

The Norwegian Consumer Council is now filing formal complaints against Grindr, a dating app for gay, bi, trans, and queer people, and against companies that were receiving personal data through the app: Twitter’s MoPub, AT&T’s AppNexus, OpenX, AdColony, and Smaato. The complaints are directed to the Norwegian Data Protection Authority for breaches of the General Data Protection Regulation.

[…]

– Every time you open an app like Grindr, advertisement networks get your GPS location, device identifiers and even the fact that you use a gay dating app. This is an insane violation of users’ EU privacy rights, says Max Schrems, founder of the European privacy non-profit NGO noyb.

The harmful effects of profiling

Many actors in the online advertising industry collect information about us from a variety of places, including web browsing, connected devices, and social media. When combined, this data provides a complex picture of individuals, revealing what we do in our daily lives, our secret desires, and our most vulnerable moments.

– This massive commercial surveillance is systematically at odds with our fundamental rights and can be used to discriminate, manipulate and exploit us. The widespread tracking also has the potential to seriously degrade consumer trust in digital services, says Myrstad.

– Furthermore, in a recent Amnesty International report, Amnesty showed how these data-driven business models are a serious threat to human rights such as freedom of opinion and expression, freedom of thought, and the right to equality and non-discrimination.

[…]

– The situation is completely out of control. In order to shift the significant power imbalance between consumers and third party companies, the current practices of extensive tracking and profiling have to end, says Myrstad.

– There are very few actions consumers can take to limit or prevent the massive tracking and data sharing that is happening all across the internet. Authorities must take active enforcement measures to protect consumers against the illegal exploitation of personal data.

Source: New study: The advertising industry is systematically breaking the law : Forbrukerrådet

Netflix Loses Bid to Dismiss $25 Million Lawsuit Over ‘Black Mirror: Bandersnatch’ because someone feels they own the phrase: choose your own adventure

Chooseco LLC, a children’s book publisher, filed its complaint in January 2019. According to the plaintiff, it has been using the mark since the 1980s and has sold more than 265 million copies of its Choose Your Own Adventure books. 20th Century Fox holds options for movie versions, and Chooseco alleges that Netflix actively pursued a license. Instead of getting one, Netflix released Bandersnatch, which allows audiences to select the direction of the plot. Claiming $25 million in damages, Chooseco suggested that Bandersnatch viewers have been confused about association with its famous brand, particularly because of marketing around the movie as well as a scene where the main character — a video game developer — tells his father that the work he’s developing is based on a Choose Your Own Adventure book.

In reaction to the lawsuit, Netflix raised a First Amendment defense, particularly the balancing test in Rogers v. Grimaldi, whereby unless a work has no artistic relevance, the use of a mark must be misleading for it to be actionable.

U.S. District Court Judge William Sessions agrees that Bandersnatch is an artistic work even if Netflix derived profit from exploiting the Charlie Brooker film.

And the judge says that use of the trademark has artistic relevance.

“Here, the protagonist of Bandersnatch attempts to convert the fictional book ‘Bandersnatch’ into a videogame, placing the book at the center of the film’s plot,” states the ruling. “Netflix used Chooseco’s mark to describe the interactive narrative structure shared by the book, the videogame, and the film itself. Moreover, Netflix intended this narrative structure to comment on the mounting influence technology has in modern day life. In addition, the mental imagery associated with Chooseco’s mark adds to Bandersnatch’s 1980s aesthetic. Thus, Netflix’s use of Chooseco’s mark clears the purposely-low threshold of Rogers’ artistic relevance prong.”

Thus, the final question is whether Netflix’s film is explicitly misleading. Judge Sessions doesn’t believe it’s appropriate to dismiss the case prematurely without exploring factual issues in discovery.

“Here, Chooseco has sufficiently alleged that consumers associate its mark with interactive books and that the mark covers other forms of interactive media, including films,” continues the decision. “The protagonist in Bandersnatch explicitly stated that the fictitious book at the center of the film’s plot was a ‘Choose Your Own Adventure’ book. In addition, the book, the videogame, and the film itself all employ the same type of interactivity as Chooseco’s products. The similarity between Chooseco’s products, Netflix’s film, and the fictitious book Netflix described as a ‘Choose Your Own Adventure’ book increases the likelihood of consumer confusion.”

Netflix also attempted to defend its use of “Choose Your Own Adventure” as descriptive fair use. Here, too, the judge believes that factual exploration is appropriate.

Writes Sessions, “The physical characteristics and context of the use demonstrate that it is at least plausible Netflix used the term to attract public attention by associating the film with Chooseco’s book series.”

The decision adds that while Netflix contends that the phrase in question has been used by others to describe a branch of storytelling, that argument entails consideration of facts outside of Chooseco’s complaint, which at this stage must be accepted as true.

“Additionally, choose your own adventure arguably is not purely descriptive of narrative techniques — it requires at least some imagination to link the phrase to interactive plotlines,” writes Sessions. “Moreover, any descriptive aspects of the phrase may stem from Chooseco’s mark itself. In other words, the phrase may only have descriptive qualities because Chooseco attached it to its popular interactive book series. The Court lacks the facts necessary to determine whether consumers perceive the phrase in a descriptive sense or whether they simply associate it with Chooseco’s brand.”

Here’s the full decision allowing Chooseco’s Lanham Act and unfair competition claims to proceed.

The ruling may be surprising to some, particularly as there’s a line of cases where studios have escaped trademark claims over content. For example, see Warner Bros.’ win a few years ago over “Clean Slate” in The Dark Knight Rises. If Netflix and Chooseco can’t come to a settlement, many of these issues may be re-explored at the summary judgment round.

Source: Netflix Loses Bid to Dismiss $25 Million Lawsuit Over ‘Black Mirror: Bandersnatch’ | Hollywood Reporter

Wow, trademark law is beyond strange.

Data Protection Authority Investigates Avast for Selling Users’ Browsing and Maps History

On Tuesday, the Czech data protection authority announced an investigation into antivirus company Avast, which was harvesting the browsing history of over 100 million users and then selling products based on that data to a slew of different companies including Google, Microsoft, and Home Depot. The move comes after a joint Motherboard and PCMag investigation uncovered details of the data collection through a series of leaked documents.

“On the basis of the information revealed describing the practices of Avast Software s.r.o., which was supposed to sell data on the activities of anti-virus users through its ‘Jumpshot division’ the Office initiated a preliminary investigation of the case,” a statement from the Czech national data protection authority on its website reads. Under the EU’s General Data Protection Regulation (GDPR) and national laws, the Czech Republic, like other EU states, has a data protection authority to take enforcement action over things like the mishandling of personal data. Under the GDPR, companies can be fined for data abuses.

“At the moment we are collecting information on the whole case. There is a suspicion of a serious and extensive breach of the protection of users’ personal data. Based on the findings, further steps will be taken and general public will be informed in due time,” added Ms Ivana Janů, President of the Czech Office for Personal Data Protection, in the statement. Avast is a Czech company.

Motherboard and PCMag’s investigation found that the data sold included Avast users’ Google searches and Google Maps lookups, particular YouTube videos, and people visiting specific porn videos. The data was anonymized, but multiple experts said it could be possible to unmask the identity of users, especially when that data, sold by Avast’s subsidiary Jumpshot, was combined with other data that its clients may possess.

Days after the investigation, Avast bought back a 35 percent stake in Jumpshot worth $61 million and shuttered the subsidiary. Avast’s valuation fell by a quarter; the company will incur costs of between $15 million and $25 million, and the closure of Jumpshot will cut annual revenues by around $36 million and underlying profits by $7 million, The Times reported.

Source: Data Protection Authority Investigates Avast for Selling Users’ Browsing History – VICE

Super-leaker Snowden punts free PDF* of tell-all NSA book with censored parts about China restored, underlined

Snowden’s bestseller Permanent Record is now available as a free download in Chinese after Communist Party censors cut out all the parts of the former IT admin’s memoir referring to China’s Great Firewall censorship system. The Great Firewall is one of the main means, in the digital era, by which the party maintains its iron grip on the world’s most populous nation’s internet viewing.

Thumbing his nose at the communists, Snowden has today released a 400-page PDF of the entire book – complete with the deleted sections restored and underlined so ordinary Chinese can see precisely what their ruling class doesn’t want them to read about.

In case Snowden’s embedded tweet above disappears at some point in the future, the PDF is hosted at a.temporaryrecord.com. Readers not fluent in Simplified Chinese will be disappointed to learn that they’ll have to pay for the book – even though doing so will end up enriching the US government and the NSA rather than Snowden himself. Although he’s banked his advance, royalties will go to Uncle Sam.

Source: Super-leaker Snowden punts free PDF* of tell-all NSA book with censored parts about China restored, underlined • The Register

Instagram-Scraping Clearview AI Wants To Sell Its Facial Recognition Software To Authoritarian Regimes

As legal pressures and US lawmaker scrutiny mount, Clearview AI, the facial recognition company that claims to have a database of more than 3 billion photos scraped from websites and social media, is looking to grow around the world.

A document obtained via a public records request reveals that Clearview has been touting a “rapid international expansion” to prospective clients using a map that highlights how it either has expanded, or plans to expand, to at least 22 more countries, some of which have committed human rights abuses.

The document, part of a presentation given to the North Miami Police Department in November 2019, includes the United Arab Emirates, a country historically hostile to political dissidents, and Qatar and Singapore, the penal codes of which criminalize homosexuality.

Clearview CEO Hoan Ton-That declined to explain whether Clearview is currently working in these countries or hopes to work in them. He did confirm that the company, which had previously claimed that it was working with 600 law enforcement agencies, has relationships with two countries on the map.

Source: Instagram-Scraping Clearview AI Wants To Sell Its Facial Recognition Software To Authoritarian Regimes

Almost Every Website You Visit Records Exactly How Your Mouse Moves

When you visit any website, its owner will know where you click, what you type, and how you move your mouse. That’s how websites work: In order to perform actions based on user input, they have to know what that input is.

On its own, that information isn’t all that useful, but many websites today use a service that pulls all of this data together to create session replays of a user’s every move. The result is a video that feels like standing over a user’s shoulder and watching them use the site directly — and what sites can glean from these sorts of tracking tools may surprise you.

Session replay services have been around for over a decade and are widely used. One service, called FullStory, lists popular sites like Zillow, TeeSpring, and Jane as clients on its website. Another, called LogRocket, boasts Airbnb, Reddit, and CarFax, and a third called Inspectlet lists Shopify, ABC, and eBay among its users. They bill themselves as tools for designing sites that are easy to use and increase desired user behavior, such as buying an item. If many users add items to their cart, but then abandon the purchase at a certain rough part of the checkout process, for instance, the service helps site owners figure out how to change the site’s design to nudge users over the checkout line.
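The mechanics behind these services are straightforward: a script on the page streams raw input events (mouse coordinates, clicks, keystrokes) to the vendor, which orders them into a per-session timeline that can be played back like a video. Below is a minimal server-side sketch of that reassembly step — the event fields and session IDs are invented for illustration and don’t reflect any particular vendor’s format:

```python
from collections import defaultdict

# Hypothetical raw events as a session-replay backend might receive them,
# possibly out of order. "t" is seconds since the page loaded.
events = [
    {"session": "abc123", "t": 0.97, "type": "click",     "x": 182, "y": 96},
    {"session": "abc123", "t": 0.00, "type": "mousemove", "x": 10,  "y": 20},
    {"session": "abc123", "t": 2.10, "type": "keypress",  "key": "j"},
    {"session": "abc123", "t": 0.42, "type": "mousemove", "x": 180, "y": 95},
    {"session": "xyz789", "t": 0.30, "type": "click",     "x": 40,  "y": 40},
]

def build_timelines(raw_events):
    """Group events by session and sort each group by timestamp, ready for replay."""
    sessions = defaultdict(list)
    for ev in raw_events:
        sessions[ev["session"]].append(ev)
    for evs in sessions.values():
        evs.sort(key=lambda ev: ev["t"])
    return dict(sessions)

timelines = build_timelines(events)
for ev in timelines["abc123"]:
    print(ev["t"], ev["type"])
```

The keypress events are the privacy-sensitive part: unless the vendor redacts form fields before capture, whatever a visitor types — including into password or payment fields — travels to a third party as part of the replay.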

Source: Almost Every Website You Visit Records Exactly How Your Mouse Moves

Block these kinds of sites using things like uBlock Origin, Privacy Badger, Ghostery, Facebook Container, Chameleon, and NoScript.

US gov buys all US cell phone location data, wants to use it for deportations

The American Civil Liberties Union plans to fight newly revealed practices by the Department of Homeland Security which used commercially available cell phone location data to track suspected illegal immigrants.

“DHS should not be accessing our location information without a warrant, regardless whether they obtain it by paying or for free. The failure to get a warrant undermines Supreme Court precedent establishing that the government must demonstrate probable cause to a judge before getting some of our most sensitive information, especially our cell phone location history,” said Nathan Freed Wessler, a staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

Earlier today, The Wall Street Journal reported that Homeland Security, through its Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) agencies, was buying geolocation data from commercial entities to investigate suspects of alleged immigration violations.

The location data, which aggregators acquire from cellphone apps, including games, weather, shopping and search services, is being used by Homeland Security to detect undocumented immigrants and others entering the U.S. unlawfully, the Journal reported.

According to privacy experts interviewed by the Journal, because the data is publicly available for purchase, the government practices don’t appear to violate the law — despite being what may be the largest dragnet ever conducted by the U.S. government using the aggregated data of its citizens.

It’s also an example of how the commercial surveillance apparatus put in place by private corporations in democratic societies can be legally accessed by state agencies to create the same kind of surveillance networks used in more authoritarian countries like China, India and Russia.

“This is a classic situation where creeping commercial surveillance in the private sector is now bleeding directly over into government,” Alan Butler, general counsel of the Electronic Privacy Information Center, a think tank that pushes for stronger privacy laws, told the newspaper.

Source: ACLU says it’ll fight DHS efforts to use app locations for deportations | TechCrunch

How to Remove Windows 10’s Annoying Ads Masquerading as ‘Suggestions’

In a perfect world, every new computer with Windows 10 on it—or every new installation of Windows 10—would arrive free of annoying applications and other bloatware that few people need. (Sorry, Candy Crush Saga.) It would also be free of annoying advertising. While that’s not to say that Microsoft is dropping big banners for Coke or something in your OS, it is frustrating to see it shilling for its Edge browser in your Start Menu.

[…]

To disable these silly suggestions, pull up your Windows 10 Settings menu. From there, click on Personalization, and then click on the Start option in the left-hand sidebar. Look for the following option and disable it: “Show suggestions occasionally in Start”

And while you’re in the Settings app, click on Lock screen. If you aren’t already using a picture or a slideshow as the background, select that, and then deselect the option to “Get fun facts, tips, and more from Windows and Cortana on your lock screen.” In other words, you don’t want to get spammed with suggestions or ads.

Finally, head back to the main Settings screen and click on System. From there, click on “Notifications & actions” in the left-hand sidebar. Because Windows can sometimes get a little spammy and/or advertise Microsoft products to you via notifications, you’ll want to uncheck “Get tips, tricks, and suggestions as you use Windows” to cut that out of your digital life.

Source: How to Remove Windows 10’s Annoying Ads Masquerading as ‘Suggestions’

Apple’s Independent Repair Program Is Invasive to Shops and Their Customers, Contract Shows

Last August, in what was widely hailed a victory for the right-to-repair movement, Apple announced it would begin selling parts, tools, and diagnostic services to independent repair shops in addition to its “authorized” repair partners. Apple’s so-called Independent Repair Provider (IRP) program had its limitations, but was still seen as a step forward for a company that’s fought independent repair for years.

Recently, Motherboard obtained a copy of the contract businesses are required to sign before being admitted to Apple’s IRP Program. The contract, which has not previously been made public, sheds new light on a program Apple initially touted as increasing access to repair but has been remarkably silent on ever since. It contains terms that lawyers and repair advocates described as “onerous” and “crazy”; terms that could give Apple significant control over businesses that choose to participate. Concerningly, the contract is also invasive from a consumer privacy standpoint.

In order to join the program, the contract states independent repair shops must agree to unannounced audits and inspections by Apple, which are intended, at least in part, to search for and identify the use of “prohibited” repair parts, for which Apple can impose fines. Apple reserves the right to continue inspecting repair shops for up to five years after they leave the program. Apple also requires repair shops in the program to share information about their customers at Apple’s request, including names, phone numbers, and home addresses.

[…]

Participating repair shops must allow Apple to audit their facilities “at any time,” including during normal business hours. According to the contract, Apple may continue conducting audits, which can involve interviewing the repair shop’s employees, for five years following termination of the contract.

These audits go beyond Apple dropping in on businesses to interrogate workers. The contract requires that IRPs “maintain an electronic service database and/or written documentation” of customer information to assist Apple in its investigations. According to the contract, that database must include the names, phone numbers, email addresses and physical addresses of customers, stipulations that gave Perzanowski “serious misgivings.” As he noted, “some consumers may prefer an independent repair shop, in part, to reduce the data Apple maintains about them.”

[…]

The one-sidedness of Apple’s terms is evident from the outset, when it defines its “agreement” with independent repair businesses to include any additional documents Apple chooses to release in the future.

“Like Darth Vader, they can alter the deal and you can only pray they don’t alter it any further,” Walsh said.

Source: Apple’s Independent Repair Program Is Invasive to Shops and Their Customers, Contract Shows – VICE

Wacom tablet drivers phone home with names, times of every app opened on your computer

Wacom’s official tablet drivers leak to the manufacturer the names of every application opened, and when, on the computers they are connected to.

Software engineer Robert Heaton made this discovery after noticing his drawing tablet’s fine print included a privacy policy that gave Wacom permission to, effectively, snoop on him.

Looking deeper, he found that the tablet’s driver logged each app he opened on his Apple Mac and transmitted the data to Google to analyze. To be clear, we’re talking about Wacom’s macOS drivers here: the open-source Linux ones aren’t affected, though it would seem the Windows counterparts are.

[…]

“Wacom’s request made me pause. Why does a device that is essentially a mouse need a privacy policy?”

Source: Sketchy behavior? Wacom tablet drivers phone home with names, times of every app opened on your computer • The Register

Google’s Takeout App Leaked Videos To Unrelated Users

In a new privacy-related fuckup, Google told users today that it might’ve accidentally exported your personal videos into another Google user’s account. Whoopsie!

First flagged by Duo Security CTO Jon Oberheide, Google appears to be emailing users who used the company’s Takeout service to back up their videos, warning that a bug resulted in some of those (hopefully G-rated) videos being backed up to an unrelated user’s account.

For those who used the “download your data” service between November 21 and November 25 of last year, some videos were “incorrectly exported,” the note reads. “If you downloaded your data, it may be incomplete, and it may contain videos that are not yours.”

Source: Google’s Takeout App Leaked Videos To Unrelated Users

Researchers Find ‘Anonymized’ Data Is Even Less Anonymous Than We Thought

Dasha Metropolitansky and Kian Attari, two students at the Harvard John A. Paulson School of Engineering and Applied Sciences, recently built a tool that combs through vast troves of consumer datasets exposed in breaches, for a class paper they’ve yet to publish.

“The program takes in a list of personally identifiable information, such as a list of emails or usernames, and searches across the leaks for all the credential data it can find for each person,” Attari said in a press release.

They told Motherboard their tool analyzed thousands of datasets from data scandals ranging from the 2015 hack of Experian, to the hacks and breaches that have plagued services from MyHeritage to porn websites. Despite many of these datasets containing “anonymized” data, the students say that identifying actual users wasn’t all that difficult.

“An individual leak is like a puzzle piece,” Harvard researcher Dasha Metropolitansky told Motherboard. “On its own, it isn’t particularly powerful, but when multiple leaks are brought together, they form a surprisingly clear picture of our identities. People may move on from these leaks, but hackers have long memories.”

For example, while one company might only store usernames, passwords, email addresses, and other basic account information, another company may have stored information on your browsing or location data. Independently they may not identify you, but collectively they reveal numerous intimate details even your closest friends and family may not know.

“We showed that an ‘anonymized’ dataset from one place can easily be linked to a non-anonymized dataset from somewhere else via a column that appears in both datasets,” Metropolitansky said. “So we shouldn’t assume that our personal information is safe just because a company claims to limit how much they collect and store.”
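
The linkage Metropolitansky describes can be sketched in a few lines of Python. This is a hypothetical illustration, not the students’ actual tool: all datasets, usernames, and field names below are invented. The point is only that one shared quasi-identifier column is enough to re-identify an “anonymized” record.

```python
# Hypothetical sketch of the linkage attack described above: an "anonymized"
# dataset that shares a quasi-identifier (here, a username) with a
# non-anonymized leak can be re-identified with a trivial join.
# All data below is invented.

anonymized = [  # breach A: no real names, but usernames survive
    {"username": "quietfox", "browsing_topic": "medical"},
    {"username": "bluejay42", "browsing_topic": "finance"},
]

identified = [  # breach B: ties the same usernames to real identities
    {"username": "bluejay42", "real_name": "Jane Doe"},
]

# Index breach B by the shared column, then join: sensitive attributes
# from breach A now carry a real name.
by_user = {row["username"]: row for row in identified}
linked = [
    {**a, **by_user[a["username"]]}
    for a in anonymized
    if a["username"] in by_user
]
print(linked)
# [{'username': 'bluejay42', 'browsing_topic': 'finance', 'real_name': 'Jane Doe'}]
```

A real attacker does the same thing at scale, with messier join keys (emails, phone numbers, device IDs) across thousands of leaks.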

The students told Motherboard they were “astonished” by the sheer volume of total data now available online and on the dark web. Metropolitansky and Attari said that even with privacy scandals now a weekly occurrence, the public is dramatically underestimating the impact on privacy and security these leaks, hacks, and breaches have in total.

Previous studies have shown that even within individual anonymized datasets, identifying users isn’t all that difficult.

In one 2019 UK study, researchers were able to develop a machine learning model capable of correctly identifying 99.98 percent of Americans in any anonymized dataset using just 15 characteristics. A different MIT study of anonymized credit card data found that users could be identified 90 percent of the time using just four relatively vague points of information.

Another German study looking at anonymized user vehicle data found that 15 minutes’ worth of data from brake pedal use could let researchers identify the right driver, out of 15 options, roughly 90 percent of the time. Another 2017 Stanford and Princeton study showed that deanonymizing user social networking data was also relatively simple.

Individually these data breaches are problematic—cumulatively they’re a bit of a nightmare.

Metropolitansky and Attari also found that despite repeated warnings, the public still isn’t using unique passwords or password managers. Of the 96,000 passwords contained in one of the program’s output datasets, just 26,000 were unique.
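
That uniqueness figure is the kind of statistic that falls straight out of a leaked password list. A minimal sketch, with an invented sample list (the real dumps are obviously far larger):

```python
from collections import Counter

# Hypothetical sketch of the uniqueness count reported above: given a flat
# list of leaked passwords, count distinct values and how concentrated
# reuse is. The sample list is invented.
leaked = ["123456", "password", "123456", "hunter2", "123456", "password"]

counts = Counter(leaked)
total = len(leaked)        # 6 passwords in the dump
unique = len(counts)       # only 3 distinct values
most_common, n = counts.most_common(1)[0]

print(f"{unique} unique out of {total}")      # 3 unique out of 6
print(f"most reused: {most_common!r} x{n}")   # most reused: '123456' x3
```

Run against a 96,000-entry dump yielding 26,000 unique values, the same arithmetic says nearly three in four passwords are repeats.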

The problem is compounded by the fact that the United States still doesn’t have even a basic privacy law for the internet era, thanks in part to relentless lobbying from a cross-industry coalition of corporations eager to keep this profitable status quo intact. As a result, penalties for data breaches and lax security are often too pathetic to drive meaningful change.

Harvard’s researchers told Motherboard there are several restrictions a meaningful U.S. privacy law could implement to potentially mitigate the harm, including restricting data access to authorized employees only, maintaining better records on data collection and retention, and decentralizing data storage (not keeping corporate and consumer data on the same server).

Until then, we’re left relying on the word of corporations that have repeatedly proven their privacy promises aren’t worth all that much.

Source: Researchers Find ‘Anonymized’ Data Is Even Less Anonymous Than We Thought – VICE

Firefox now shows what telemetry data it’s collecting about you (if any)

There is now a special page in the Firefox browser where users can see what telemetry data Mozilla is collecting from their browser.

Accessible by typing about:telemetry in the browser’s URL address bar, this new section is a recent addition to Firefox.

The page shows deeply technical information about browser settings, installed add-ons, OS/hardware information, browser session details, and running processes.

The information is what you’d expect a software vendor to collect about users in order to fix bugs and keep a statistical track of its userbase.

A Firefox engineer told ZDNet the page was primarily created for selfish reasons, in order to help engineers debug Firefox test installs. However, it was allowed to ship to the stable branch also as a PR move, to put users’ minds at ease about what type of data the browser maker collects from its users.

The move is in tune with what Mozilla has been doing over the past two years, pushing for increased privacy controls in its browser and opening up about its practices, in stark contrast with what other browser makers have been doing in the past decade.

Source: Firefox now shows what telemetry data it’s collecting about you | ZDNet

CIA Employee Accused Of Leaking Vault 7 cyber security tooling To WikiLeaks in 2017 Goes On Trial

The trial of a former Central Intelligence Agency software engineer who allegedly leaked thousands of pages of documents to WikiLeaks was set to begin Monday in federal court in New York. The leak has been described as one of the largest in the CIA’s history.

Joshua Schulte has pleaded not guilty to 11 criminal counts, including illegal transmission of unlawfully possessed national defense information and theft of government property.

WikiLeaks started publishing the documents, which it called “Vault 7,” in March 2017. Many of the documents are highly technical, and appear to describe agency practices for hacking a number of different targets.

As NPR’s Camila Domonoske and Greg Myre reported at the time, the documents are said to be internal guides to creating and using many kinds of hacking tools, “from turning smart TVs into bugs to designing customized USB drives to extract information from computers.”

Schulte’s lawyers did not respond to NPR’s requests for comment about the case.

In court filings ahead of the trial, they have expressed frustration at the pace with which they are required to review materials surfaced during the discovery process.

Some of the charges against Schulte stem from the Espionage Act, and defense lawyers say they are unconstitutionally overbroad and vague. They also said the law was intended to be used to prosecute those who transmit government secrets to foreign governments, and that it shouldn’t apply to leaking to WikiLeaks. The judge rejected those arguments.

“As alleged, Schulte utterly betrayed this nation and downright violated his victims,” William F. Sweeney Jr., the assistant director-in-charge of the FBI’s New York Field Office, said in a statement when the charges were announced. “As an employee of the CIA, Schulte took an oath to protect this country, but he blatantly endangered it by the transmission of Classified Information.”

Prosecutors have said that when Schulte was working at the CIA, he developed classified cyber tools, including tools to covertly gather data from computers.

The leak allegedly happened during a time of rising tension between Schulte and his CIA colleagues.

In the summer of 2015, according to prosecutors, Schulte started having “significant problems” in his group that stemmed from a feud with one of his colleagues. The feud deepened after the colleague reportedly complained about Schulte to management. Prosecutors say Schulte accused the employee of making a death threat against him and eventually filed for a protective order against that person. They were reassigned to different teams.

Because of his reassignment, Schulte’s access to previous projects was revoked. But prosecutors say he reinstated his own administrative privileges. Management at the Center for Cyber Intelligence discovered it, and they attempted to revoke privileges and change passwords. But they missed credentials for one computer network, according to prosecutors, and in April 2016, Schulte allegedly stole vast quantities of information from the network and passed the data along to WikiLeaks.

The judge has granted measures to protect the anonymity of certain witnesses from the CIA who are expected to testify. During those sessions, the courtroom will be closed to press, except for two pool reporters who have agreed not to disclose the physical characteristics of these witnesses. Other reporters in an adjoining courtroom will be able to see a video feed that won’t show images of the witnesses.

Federal prosecutors originally indicted Schulte in 2017 on charges of receiving and possessing child pornography. They said they discovered more than 10,000 images and videos of child pornography encrypted on Schulte’s personal computer.

One of the prosecutors, Matthew Laroche, said at a hearing in 2017 that Schulte is “someone who’s shown himself to condone sexually dangerous behavior and has shown a proclivity to collect thousands of images of child pornography.”

In July 2019, the court severed the child pornography-related charges from the rest of the case, meaning that those accusations will be addressed at a separate trial.

Source: Ex-CIA Employee Accused Of Leaking Documents To WikiLeaks Goes On Trial : NPR

Alias Privacy “Parasite” 2.0 Adds a Layer of Security to Your Home Assistant

Alias is a teachable “parasite” that gives you more control over your smart assistant’s customization and privacy. Through a simple app, you can train Alias to react to a self-chosen wake-word; once trained, Alias takes control over your home assistant by activating it for you. When you’re not using it, Alias makes sure the assistant is paralyzed and unable to listen to your conversations.

When placed on top of your home assistant, Alias uses two small speakers to interrupt the assistant’s listening with a constant low noise that feeds directly into the microphone of the assistant. When Alias recognizes your user-created wake-word (e.g., “Hey Alias” or “Jarvis” or whatever), it stops the noise and quietly activates the assistant by speaking the original wake-word (e.g., “Alexa” or “Hey Google”).

From here the assistant can be used as normal. Your wake-word is detected by a small neural network program that runs locally on Alias, so the sounds of your home are not uploaded to anyone’s cloud.

Source: Alias Privacy “Parasite” 2.0 Adds a Layer of Security to Your Home Assistant | Make:

Don’t use online DNA tests! If You Ever Used Promethease, Your DNA Data Might Be on MyHeritage – and so will your family’s

When it comes to ways to learn about your DNA, Promethease’s service seemed like one of the safest. They promised anonymity, and to delete your report after 45 days. But now that MyHeritage has bought the company, users are being notified that their DNA data is now on MyHeritage. Wait, what?

It turns out that even though Promethease deleted reports as promised after 45 days, if you created an account, the service held onto your raw data. You now have a MyHeritage account, which you can delete if you like. Check your email. That’s how I found out about mine.

What Promethease does

A while back, I downloaded my raw data from 23andme and gave it to Promethease to find out what interesting things might be in my DNA. Ever since 23andme stopped providing detailed health-related results in 2013, Promethease was a sensible alternative. They used to charge $5 (now up to $12, but that’s still a steal) and they didn’t attempt to explain your results to you. Instead, you could just see what SNPs you had—those are spots where your DNA differs from other people’s—and read on SNPedia, a sort of genetics wikipedia, about what those SNPs might mean.

So this means Promethease had access to the raw file you gave it (which you would have gotten from 23andme, Ancestry, or another service), and to the report of SNPs that it created for you. You had the option of paying your fee, downloading your report, and never dealing with the company again; or you could create an account so that you could “regenerate” your report in the future without having to pay again. That means they stored your raw DNA file.

Source: If You Ever Used Promethease, Your DNA Data Might Be on MyHeritage Now

Because your DNA contains information about your whole family, by uploading your DNA you also upload their DNA, making it a whole lot easier to de-anonymise their DNA. It’s a bit like uploading a picture of your family to Facebook with the public settings on and then tagging them, even though the other family members on your picture aren’t on Facebook.

Social media scraper Clearview Lied About Its Crime-Solving Power In Pitches To Law Enforcement Agencies

A very questionable facial recognition tool being offered to law enforcement was recently exposed by Kashmir Hill for the New York Times. Clearview — created by a developer previously best known for an app that let people put Trump’s “hair” on their own photos — is being pitched to law enforcement agencies as a better AI solution for all their “who TF is this guy” problems.

Clearview doesn’t limit itself to law enforcement databases — ones (partially) filled with known criminals and arrestees. Instead of using known quantities, Clearview scrapes the internet for people’s photos. With the click of an app button, officers are connected to Clearview’s stash of 3 billion photos pulled from public feeds on Twitter, LinkedIn, and Facebook.

Most of the scrapees have already objected to being scraped. While this may violate terms of service, it’s not completely settled that scraping content from public feeds is actually illegal. However, peeved companies can attempt to shut off their firehoses, which is what Twitter is in the process of doing.

Clearview has made some bold statements about its effectiveness — statements that haven’t been independently confirmed. Clearview did not submit its software to NIST’s recent roundup of facial recognition AI, but it most likely would not have fared well. Even more established software performed poorly, misidentifying minorities almost 100 times more often than it did white males.

The company claims it finds matches 75% of the time. That doesn’t actually mean it finds the right person 75% of the time. It only means the software finds someone that matches submitted photos three-quarters of the time. Clearview has provided no stats on its false positive rate. That hasn’t stopped it from lying about its software and its use by law enforcement agencies.
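
The gap between “match rate” and accuracy is worth making concrete. The numbers below are purely hypothetical (Clearview has published no false-positive rate); they only show that a 75% match rate is compatible with low precision.

```python
# Hypothetical numbers only: a "75% match rate" says how often the system
# returns *some* candidate, not how often that candidate is the right person.
queries = 1000
matches_returned = 750          # the claimed 75% "match rate"

# Without a published false-positive rate, precision is unconstrained.
# Suppose (purely for illustration) 40% of returned matches are wrong:
false_positives = int(matches_returned * 0.40)
correct = matches_returned - false_positives

match_rate = matches_returned / queries        # 0.75
precision = correct / matches_returned         # 0.60
print(match_rate, precision)
```

Under these invented assumptions the tool "matches" three-quarters of queries yet points at the wrong person 40% of the time it answers at all, which is exactly the statistic Clearview hasn't disclosed.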

A BuzzFeed report based on public records requests and conversations with the law enforcement agencies says the company’s sales pitches are about 75% bullshit.

Clearview AI, a facial recognition company that says it’s amassed a database of billions of photos, has a fantastic selling point it offers up to police departments nationwide: It cracked a case of alleged terrorism in a New York City subway station last August in a matter of seconds. “How a Terrorism Suspect Was Instantly Identified With Clearview,” read the subject line of a November email sent to law enforcement agencies across all 50 states through a crime alert service, suggesting its technology was integral to the arrest.

Here’s what the NYPD had to say about Clearview’s claims in its marketing materials:

“The NYPD did not use Clearview technology to identify the suspect in the August 16th rice cooker incident,” a department spokesperson told BuzzFeed News. “The NYPD identified the suspect using the Department’s facial recognition practice where a still image from a surveillance video was compared to a pool of lawfully possessed arrest photos.”

The NYPD also said it had no “institutional relationship” with Clearview, contradicting the company’s sales pitch insinuations. The NYPD was not alone in its rejection of Clearview’s claims.

Clearview also claimed to be instrumental in apprehending a suspect wanted for assault. In reality, the suspect turned himself in to the NYPD. The PD again pointed out Clearview played no role in this investigation. It also had nothing to do with solving a subway groping case (the tip that resulted in an arrest was provided to the NYPD by the Guardian Angels) or an alleged “40 cold cases solved” by the NYPD.

The company says it is “working with” over 600 police departments. But BuzzFeed’s investigation has uncovered at least two cases where “working with” simply meant submitting a lead to a PD tip line. Most likely, this is only the tip of the iceberg. As more requested documents roll in, there’s a very good chance this “working with” BS won’t just be a two-off.

Clearview’s background appears to be as shady as its public claims. In addition to its founder’s links to far-right groups (first uncovered by Kashmir Hill), the company pumped up its reputation by deploying a bunch of sock puppets.

Ton-That set up fake LinkedIn profiles to run ads about Clearview, boasting that police officers could search over 1 billion faces in less than a second.

These are definitely not the ethics you want to see from a company pitching dubious facial recognition software to law enforcement agencies. Some agencies may perform enough due diligence to move forward with a more trustworthy company, but others will be impressed with the lower cost and the massive amount of photos in Clearview’s database and move forward with unproven software created by a company that appears to be willing to exaggerate its ability to help cops catch crooks.

If it can’t tell the truth about its contribution to law enforcement agencies, it’s probably not telling the truth about the software’s effectiveness. If cops buy into Clearview’s PR pitches, the collateral damage will be innocent people’s freedom.

Source: Facial Recognition Company Clearview Lied About Its Crime-Solving Power In Pitches To Law Enforcement Agencies | Techdirt

Clearview AI Told Cops To “Run Wild” With Its Creepy Face database, access given away without checks and sold to private firms despite claiming otherwise

Clearview AI, the facial recognition company that claims to have amassed a database of more than 3 billion photos scraped from Facebook, YouTube, and millions of other websites, is scrambling to deal with calls for bans from advocacy groups and legal threats. These troubles come after news reports exposed its questionable data practices and misleading statements about working with law enforcement.

Following stories published in the New York Times and BuzzFeed News, the Manhattan-based startup received cease-and-desist letters from Twitter and the New Jersey attorney general. It was also sued in Illinois in a case seeking class-action status.

Despite its legal woes, Clearview continues to contradict itself, according to documents obtained by BuzzFeed News that are inconsistent with what the company has told the public. In one example, the company, whose code of conduct states that law enforcement should only use its software for criminal investigations, encouraged officers to use it on their friends and family members.

“To have these technologies rolled out by police departments without civilian oversight really raises fundamental questions about democratic accountability,” Albert Fox Cahn, a fellow at New York University and the executive director of the Surveillance Technology Oversight Project, told BuzzFeed News.

In the aftermath of revelations about its technology, Clearview has tried to clean up its image by posting informational webpages, creating a blog, and trotting out surrogates for media interviews, including one in which an investor claimed Clearview was working with “over a thousand independent law enforcement agencies.” Previously, Clearview had stated that the number was around 600.

Clearview has also tried to allay concerns that its technology could be abused or used outside the scope of police investigations. In a code of conduct that the company published on its site earlier this month, it said its users should “only use the Services for law enforcement or security purposes that are authorized by their employer and conducted pursuant to their employment.”

It bolstered that idea with a blog post on Jan. 23, which stated, “While many people have advised us that a public version would be more profitable, we have rejected the idea.”

“Clearview exists to help law enforcement agencies solve the toughest cases, and our technology comes with strict guidelines and safeguards to ensure investigators use it for its intended purpose only,” the post stated.

But in a November email to a police lieutenant in Green Bay, Wisconsin, a company representative encouraged the officer to use the software on himself and his acquaintances.

“Have you tried taking a selfie with Clearview yet?” the email read. “It’s the best way to quickly see the power of Clearview in real time. Try your friends or family. Or a celebrity like Joe Montana or George Clooney.

“Your Clearview account has unlimited searches. So feel free to run wild with your searches,” the email continued. The city of Green Bay would later agree to a $3,000 license with Clearview.

[Image: An email from Clearview to an officer in Green Bay, Wisconsin, from November 2019. Obtained by BuzzFeed News.]

Hoan Ton-That, the CEO of Clearview, claimed in an email that the company has safeguards on its product.

“As as [sic] safeguard we have an administrative tool for Law Enforcement supervisors and administrators to monitor the searches of a particular department,” Ton-That said. “An administrator can revoke access to an account at any time for any inappropriate use.”

Clearview’s previous correspondence with Green Bay police appeared to contradict what Ton-That told BuzzFeed News. In emails obtained by BuzzFeed News, the company told officers that searches “are always private and never stored in our proprietary database, which is totally separate from the photos you search.”

“It’s certainly inconsistent to, on the one hand, claim that this is a law enforcement tool and that there are safeguards — and then to, on the other hand, recommend it being used on friends and family,” Clare Garvie, a senior associate at the Georgetown Law’s Center on Privacy and Technology, told BuzzFeed News.

Clearview has also previously instructed police to act in direct violation of the company’s code of conduct, which was outlined in a blog post on Monday. The post stated that law enforcement agencies were “required” to receive permission from a supervisor before creating accounts.

But in a September email sent to police in Green Bay, the company said there was an “Invite User” button in the Clearview app that can be used to give any officer access to the software. The email encouraged police officers to invite as many people as possible, noting that Clearview would give them a demo account “immediately.”

“Feel free to refer as many officers and investigators as you want,” the email said. “No limits. The more people searching, the more successes.”

“Rewarding loyal customers”

Despite its claim last week that it “exists to help law enforcement agencies,” Clearview has also been working with entities outside of law enforcement. Ton-That told BuzzFeed News on Jan. 23 that Clearview was working with “a handful of private companies who use it for security purposes.” Marketing emails from late last year obtained by BuzzFeed News via a public records request showed the startup aided a Georgia-based bank in a case involving the cashing of fraudulent checks.

Earlier this year, a company representative was slated to speak at a Las Vegas gambling conference about casinos’ use of facial recognition as a way of “rewarding loyal customers and enforcing necessary bans.” Initially, Jessica Medeiros Garrison, whose title was stated on the conference website as Clearview’s vice president of public affairs, was listed on a panel that included the head of surveillance for Las Vegas’ Cosmopolitan hotel. Later versions of the conference schedule and Garrison’s bio removed all mentions of Clearview AI. It is unclear if she actually appeared on the panel.

A company spokesperson said Garrison is “a valued member of the Clearview team” but declined to answer questions on any possible work with casinos.

Cease and desist

Clearview has also faced legal threats from private and government entities. Last week, Twitter sent the company a cease-and-desist letter, noting that its collection of photos from the site violated the social network’s terms of service.

“This type of use (scraping Twitter for people’s images/likeness) is not allowed,” a Twitter spokesperson told BuzzFeed News. Twitter, which asked Clearview to cease scraping and delete all data collected from the site, pointed BuzzFeed News to a part of its developer policy, which states its data may not be used for facial recognition.

On Friday, Clearview received a similar note from the New Jersey attorney general, who called on state law enforcement agencies to stop using the software. The letter also told Clearview to stop using clips of New Jersey Attorney General Gurbir Grewal in a promotional video on its site that claimed that a New Jersey police department used the software in a child predator sting late last year.

[…]

Clearview declined to provide a list of the law enforcement agencies on free trials or paid contracts, stating only that the number was more than 600.

“We do not have to be hidden”

That number is lower than what one of Clearview’s investors bragged about on Saturday. David Scalzo, an early investor in Clearview through his firm, Kirenaga Partners, claimed in an interview with Dilbert creator and podcaster Scott Adams that “over a thousand independent law enforcement agencies” were using the software. The investor went on to contradict the company’s public statement that it would not make its tool available to the public, stating “it is inevitable that this digital information will be out there” and “the best thing we can do is get this technology out to everyone.”

[…]

EPIC’s letter came after an Illinois resident sued Clearview in a state district court last Wednesday, alleging the software violated the Illinois Biometric Information Privacy Act by collecting the “identifiers and information” — like facial data gathered from photos accumulated from social media — without permission. Under the law, private companies are not allowed to “collect, capture, purchase,” or receive biometric information about a person without their consent.

The complaint, which also alleged that Clearview violated the constitutional rights of all Americans, asked for class-action recognition on behalf of all US citizens, as well as all Illinois residents whose biometric information was collected. When asked, Ton-That did not comment on the lawsuit.

In legal documents given to police, obtained by BuzzFeed News through a public records request, Clearview argued that it was not subject to states’ biometric data laws including those in Illinois. In a memo to the Atlanta Police Department, a lawyer for Clearview argued that because the company’s clients are public agencies, the use of the startup’s technology could not be regulated by state law, which only governs private entities.

Cahn, the executive director of the Surveillance Technology Oversight Project, said that it was “problematic” for Clearview AI to argue it wasn’t beholden to state biometric laws.

“Those laws regulate the commercial use of these sorts of tools, and the idea that somehow this isn’t a commercial application, simply because the customer is the government, makes no sense,” he said. “This is a company with private funders that will be profiting from the use of our information.”

Under this scrutiny, Clearview added explanations to its site to address privacy concerns. It added an email link for people to ask questions about its privacy policy, saying that all requests will go to its data protection officer. When asked by BuzzFeed News, the company declined to name that official.

To process a request, however, Clearview is requesting more personal information: “Please submit name, a headshot and a photo of a government-issued ID to facilitate the processing of your request.” The company declined to say how it would use that information.

Source: Clearview AI Once Told Cops To “Run Wild” With Its Facial Recognition Tool

Brave, Google, Microsoft, Mozilla gather together to talk web privacy… and why we all shouldn’t get too much of it. Only FF and Brave will give you some.

At the USENIX Enigma conference on Tuesday, representatives of four browser makers, Brave, Google, Microsoft, and Mozilla, gathered to banter about their respective approaches to online privacy, while urging people not to ask for too much of it.

Apple, which has advanced browser privacy standards but was recently informed that its tracking defenses can be used for, er, tracking, was conspicuously absent, though it had a tongue-tied representative recruiting for privacy-oriented job positions at the show.

The browser-focused back-and-forth was mostly cordial as the software engineers representing their companies discussed notable privacy features in the various web browsers they worked on. They stressed the benefit of collaboration on web standards and the mutually beneficial effects of competition.

Eric Lawrence, program manager on the Microsoft Edge team, touched on how Microsoft has just jettisoned 25 years of Internet Explorer code to replatform Edge on the open source Chromium project, now the common foundation for 20 or so browsers.

Beside a slide that declared “Microsoft loves the Web,” Lawrence made the case for the new Edge as a modern browser with some well-designed privacy features, including Microsoft’s take on tracking protection, which blocks most trackers in its default setting and can be made more strict, at the potential cost of site compatibility.

[Image: A slide at Enigma 2020 declaring “Microsoft loves the Web.”]

Edge comes across as a reliable alternative to Chrome and should become more distinct as it evolves. It occupies a difficult space on the privacy continuum, in that it has some nice privacy features but not as many as Brave or Firefox. But Edge may find fans on the strength of the Microsoft brand since, as Lawrence emphasized, Microsoft is not new to privacy concerns.

That said, Microsoft is not far from Google in advocating not biting the hand that feeds the web ecosystem – advertising.

“The web doesn’t exist in a vacuum,” Lawrence warned. “People who are building sites and services have choices for what platforms they target. They can build a mobile application. They can take their content off the open web and put it into a walled garden. And so if we do things with privacy that hurt the open web, we could end up pushing people to less privacy for certain ecosystems.”

Lawrence pointed to a recent report about a popular Android app found to be leaking data. It took time to figure that out, he said, because mobile platforms are less transparent than the web, where it’s easier to scour source code and analyze network behavior.

Justin Schuh, engineering director on Google Chrome for trust and safety, reprised an argument he’s made previously that too much privacy would be harmful to ad-supported businesses.

“Most of the media that we consume is actually funded by advertising today,” Schuh explained. “It has been for a very long time. Now, I’m not here to make the argument that advertising is the best or only way to fund these things. But the truth is that print, radio, and TV – all these are funded primarily through advertising.”

And so too is the web, he insisted, arguing that advertising is what has made so much online content available to people who otherwise wouldn’t have access to it across the globe.

Schuh said in the context of the web, two trends concern him. One, he claimed, is that content is leaving because it’s easier to monetize in apps – but he didn’t cite a basis for that assertion.

The other is the rise of covert tracking, which arose, as Schuh tells it, because advertisers wanted to track people across multiple devices. So they turned to looking at IP-based fingerprinting and metadata tracking, and the joining of data sets to identify people as they shift between phone, computer, and tablet.

Covert tracking also became more popular, he said, because advertisers wanted to bypass anti-tracking mechanisms. Thus, we have privacy-invading practices like CNAME cloaking, site fingerprinting, hostname rotation, and the like because browser users sought privacy.
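CNAME cloaking works by pointing a first-party subdomain at a third-party tracker, so blockers that only inspect the requested hostname never see the tracker's real domain. The sketch below illustrates the idea with a stubbed DNS table; the domain names and resolver mapping are hypothetical, not real services.

```python
# Illustrative sketch of how CNAME cloaking defeats hostname-based blocklists.
# All domains and the CNAME_RECORDS mapping are made up for this example.

# A blocklist of known third-party tracker domains.
BLOCKLIST = {"tracker.example-ads.net"}

# Stubbed CNAME records: the publisher aliases a first-party subdomain
# to the tracker, so a naive check sees only "metrics.news-site.com".
CNAME_RECORDS = {"metrics.news-site.com": "tracker.example-ads.net"}

def is_blocked_naive(hostname: str) -> bool:
    """Blocks only if the requested hostname itself is on the list."""
    return hostname in BLOCKLIST

def is_blocked_cname_aware(hostname: str) -> bool:
    """Follows the CNAME chain to its end before checking the blocklist."""
    seen = set()
    while hostname in CNAME_RECORDS and hostname not in seen:
        seen.add(hostname)
        hostname = CNAME_RECORDS[hostname]
    return hostname in BLOCKLIST

print(is_blocked_naive("metrics.news-site.com"))        # False: cloak works
print(is_blocked_cname_aware("metrics.news-site.com"))  # True: cloak detected
```

This is why CNAME-aware blocking (as some browsers and extensions now do) requires resolving the alias chain rather than trusting the hostname in the page's markup.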

Schuh made the case for Google’s Privacy Sandbox proposal, a set of controversial specs being developed ostensibly to enhance privacy by reducing data available for tracking and browser fingerprinting while also giving advertisers the ability to target ads.

“Broadly speaking, advertisers don’t actually need your data,” said Schuh. “All that they really want is to monetize efficiently.”

But given the willingness of advertisers to circumvent user privacy choices, the ad industry’s consistent failure to police bad behavior, and the persistence of ad fraud and malicious ads, it’s difficult to accept that advertisers can be trusted to behave.

Tanvi Vyas, principal engineer at Mozilla, focused on the consequences of the current web ecosystem, where data is gathered to target and manipulate people. She reeled off a list of social harms arising from the status quo.

“Democracies are compromised and elections around the world are being tampered with,” she said. “Populations are manipulated and micro-targeted. Fake news is delivered to just the right audience at the right time. Discrimination flourishes, and emotional harm is inflicted on specific individuals when our algorithms go wrong.”

Thanks, Facebook, Google, and Twitter.

Worse still, Vyas said, the hostile ecosystem has a chilling effect on sophisticated users who understand online tracking and prevents them from taking action. “At Mozilla, we think this is an unacceptable cost for society to pay,” she said.

Vyas described various pro-privacy technologies implemented in Firefox, including Facebook Container, which sandboxes Facebook trackers so they can’t track users on third-party websites. She also argued for legislation to improve online privacy, though Lawrence, drawing on his days working on Internet Explorer, recalled how privacy rules tied to the P3P scheme two decades ago had proved ineffective.

Speaking for Brave, CISO Yan Zhu argued for a slightly different approach, though it still involves engaging with the ad industry to some extent.

“The main goal of Brave is we want to repair the privacy problems in the existing ad ecosystem in a way that no other browser has really tried, while giving publishers a revenue stream,” she said. “Basically, we have options to set micropayments to publishers, and also an option to see privacy preserving ads.”

Micropayments have been tried before but they’ve largely failed, assuming you don’t consider in-app payments to be micropayments.

Faced with a plea from an attendee for more of the browser makers to support micropayments instead of relying on ads, Schuh said, “I would absolutely love to see micropayments succeed. I know there have been a bunch of efforts at Google and various other companies to do it. It turns out that the payment industry itself is really, really complicated. And there are players in there that expect a fairly large cut. And so long as that exists, I don’t know if there’s a path forward.”

It now falls to Brave to prove otherwise.

Shortly thereafter, Gabriel DeWitt, VP of product at global ad marketplace Index Exchange, took a turn at the audience mic, introduced himself, and then lightheartedly asked other attendees not to throw anything at him.

Insisting that his company also cares about user privacy, despite opinions to the contrary, he asked the panelists how he could better collaborate with them.

It’s worth noting that next week, when Chrome 80 debuts, Google intends to introduce changes in the way it handles cookies that will affect advertisers. What’s more, the company has said it plans to phase out cookies entirely in a few years.

Schuh, from Google, elicited a laugh when he said, “I guess I can take this one, because that’s what everyone is expecting.”

We were expecting privacy. We got surveillance capitalism instead.

Source: Brave, Google, Microsoft, Mozilla gather together to talk web privacy… and why we all shouldn’t get too much of it • The Register

Ubiquiti says UniFi routers will beam performance data back to mothership without consent automatically, no opt-out.

Ubiquiti Networks is once again under fire, this time for quietly rewriting its telemetry policy after changing how its UniFi routers collect data without telling anyone.

The changes were identified in a new help document published on the US manufacturer’s website. The document differentiates between “personal data”, which includes everything that identifies a specific individual, and “other data”, which is everything else.

The document says that while users can continue to opt out of having their “personal data” collected, their “other data” – anonymous performance and crash information – will be “automatically reported”. In other words, you ain’t got no choice.

This is a shift from Ubiquiti’s last statement on data collection three months ago, which promised an opt-out button for all data collection in upcoming versions of its firmware.

A Ubiquiti representative confirmed in a forum post that the changes will automatically affect all firmware beyond 4.1.0, and that users can stop “other data” being collected by manually editing the software’s config file.
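Since the only opt-out on offer is a manual config-file edit, the workaround amounts to something like the sketch below: stripping any existing analytics line and pinning it to `false`. The key name `unifi.analytics.enabled` and the properties-file format are assumptions for illustration; Ubiquiti's forum post did not document the actual setting, so check your own installation before editing anything.

```python
# Hypothetical sketch of disabling analytics in a UniFi-style properties
# file. The key name "unifi.analytics.enabled" is an assumption for
# illustration, not a documented Ubiquiti setting.
def disable_analytics(config_text: str,
                      key: str = "unifi.analytics.enabled") -> str:
    # Drop any existing line for this key, then append it forced to false.
    lines = [l for l in config_text.splitlines()
             if not l.startswith(key + "=")]
    lines.append(key + "=false")
    return "\n".join(lines) + "\n"

sample = "debug.setting=1\nunifi.analytics.enabled=true\n"
print(disable_analytics(sample))
```

As one user's report below suggests, even an edit like this may not stop all outbound telemetry, which is rather the point of the complaints.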

“Yes, it should be updated when we go to public release, it’s on our radar,” the rep wrote. “But I can’t guarantee it will be updated in time.”

The drama unfolded when netizens grabbed their pitchforks and headed for the company’s forums to air their grievances. “Come on UBNT,” said user leonardogyn. “PLEASE do not insist on making it hard (or impossible) to fully and easily disable sending of Analytics data. I understand it’s a great tool for you, but PLEASE consider that’s [sic] ultimately us, the users, that *must* have the option to choose to participate on it.”

The same user also pointed out that, even when the “Analytics” opt-out button is selected in the 5.13.9 beta controller software, Ubiquiti is still collecting some data. The person called the opt-out option “a misleading one, not to say a complete lie”.

Other users were similarly outraged. “This was pretty much the straw that broke the camel’s back, to be honest,” said elcid89. “I only use Unifi here at the house, but between the ongoing development instability, frenetic product range, and lack of responsiveness from staff, I’ve been considering junking it for a while now. This made the decision for me – switching over to Cisco.”

One user said that the firmware was still sending their data to two addresses even after they modified the config file.

Source: You spoke, we didn’t listen: Ubiquiti says UniFi routers will beam performance data back to mothership automatically • The Register

Facebook Enables Confusing ‘Off-Facebook Activity’ Privacy Tool, which won’t stop any tracking whatsoever

In a blog post earlier today, the famously privacy-conscious Mark Zuckerberg announced—in honor of Data Privacy Day, which is apparently a thing—the official rollout of a long-awaited Off-Facebook Activity tool that allows Facebook users to monitor and manage the connections between Facebook profiles and their off-platform activity.

“To help shed more light on these practices that are common yet not always well understood, today we’re introducing a new way to view and control your off-Facebook activity,” Zuckerberg said in the post. “Off-Facebook Activity lets you see a summary of the apps and websites that send us information about your activity, and clear this information from your account if you want to.”

Zuck’s use of the phrases “control your off-Facebook activity” and “clear this information from your account” is kinda misleading—you’re not really controlling or clearing much of anything. By using this tool, you’re just telling Facebook to put the data it has on you into two separate buckets that are otherwise mixed together. Put another way, Facebook is offering a one-stop-shop to opt out of any ties between the sites and services you peruse daily that have some sort of Facebook software installed and your on-platform activity on Facebook or Instagram.

The only thing you’re clearing is a connection Facebook made between its data and the data it gets from third parties, not the data itself.

Image: Facebook

As an ad-tech reporter, my bread and butter involves downloading shit that does god-knows-what with your data, which is why I shouldn’t’ve been surprised that Facebook hoovered data from more than 520 partners across the internet—either sites I’d visited or apps I’d downloaded. For Gizmodo alone, Facebook tracked “252 interactions” drawn from the handful of plug-ins our blog has installed. (To be clear, you’re going to run into these kinds of trackers e.v.e.r.y.w.h.e.r.e.—not just on our site.)

These plug-ins—or “business tools,” as Facebook describes them—are the pipeline that the company uses to ascertain your off-platform activity and tie it to your on-platform identity. As Facebook describes it:

– Jane buys a pair of shoes from an online clothing and shoe store.

– The store shares Jane’s activity with us using our business tools.

– We receive Jane’s off-Facebook activity and we save it with her Facebook account. The activity is saved as “visited the Clothes and Shoes website” and “made a purchase”.

– Jane sees an ad on Facebook for a 10% off coupon on her next shoe or clothing purchase from the online store.
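Facebook's Jane walkthrough above boils down to a small event payload flowing from the store's "business tool" back to Facebook, where it gets matched to her account. A minimal sketch, with field names that are illustrative assumptions rather than Facebook's actual pixel API:

```python
# Illustrative model of the kind of event a "business tool" (e.g. a
# tracking pixel) might report, per the Jane example. Field names are
# assumptions for clarity, not Facebook's real API.
def build_offsite_event(user_id: str, site: str, actions: list) -> dict:
    return {
        "matched_user": user_id,   # tied back to Jane's Facebook account
        "source_site": site,
        "actions": actions,        # e.g. page view, purchase
    }

event = build_offsite_event(
    "jane-123",
    "Clothes and Shoes website",
    ["visited website", "made a purchase"],
)
print(event["actions"])  # ['visited website', 'made a purchase']
```

The ad for the 10-percent-off coupon then follows from matching `matched_user` against Jane's on-platform profile.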

Here’s the catch, though: When I hit the handy “clear history” button that Facebook now provides, it won’t do jack shit to stop a given shoe store from sharing my data with Facebook—which explicitly laid this out for me when I hit that button:

Your activity history will be disconnected from your account. We’ll continue to receive your activity from the businesses and organizations you visit in the future.

Yes, it’s confusing. Baffling, really. But basically, Facebook has profiles on users and non-users alike. Those of you who have Facebook profiles can use the new tool to disconnect your Facebook data from the data the company receives from third parties. Facebook will still have that third-party-collected data and it will continue to collect more data, but that bucket of data won’t be connected to your Facebook identity.
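The two-bucket arrangement described above can be modeled in a few lines: "clearing history" removes the link between the buckets, not the off-platform bucket itself. The class below is an assumed structure for clarity, not Facebook's actual data model.

```python
# Illustrative model of what "Clear History" appears to do, per the
# article: the link between profile and third-party data is severed,
# but the third-party bucket itself survives and keeps growing.
class AdProfile:
    def __init__(self):
        self.on_platform = []    # likes, groups, comments
        self.off_platform = []   # events from "business tools"
        self.linked = True       # are the two buckets connected?

    def clear_history(self):
        # Only the connection is cleared -- not the data.
        self.linked = False

profile = AdProfile()
profile.off_platform.append("visited shoe store")
profile.clear_history()
print(profile.linked)        # False: buckets disconnected
print(profile.off_platform)  # ['visited shoe store']: data still there
```

Which is exactly the distinction between "disconnect" and "delete" that the tool's wording glosses over.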

Screenshot: Gizmodo (Facebook)

The data third parties collect about you technically isn’t Facebook’s responsibility to begin with. If I buy a pair of new sneakers from Steve Madden, where that purchase or browsing data goes is ultimately in Steve Madden’s metaphorical hands. And thanks to the wonders of targeted advertising, even the sneakers I’m purchasing in-store aren’t safe from being added as a data point that can be tied to the collective profile Facebook’s gathered on me as a consumer. Naturally, it behooves whoever runs marketing at Steve Madden—or anywhere, really—to plug in as many of those data points as they possibly can.

For the record, I also tried toggling my off-Facebook activity to keep it from being linked to my account, but was told that, while the company would still be getting this information from third parties, it would just be “disconnected from [my] account.”

Put another way: The way I browse any number of sites and apps will ultimately still make its way to Facebook, and still be used for targeted advertising across… those sites and apps. Only now, my on-Facebook life—the cat groups I join, the statuses I comment on, the concerts I’m “interested” in (but never actually attend)—won’t be a part of that profile.

Or put another way: Facebook just announced that it still has its tentacles in every part of your life in a way that’s impossible to untangle yourself from. Now, it just doesn’t need the social network to do it.

Source: Facebook Enables Confusing ‘Off-Facebook Activity’ Privacy Tool