The US Senate has voted to give law enforcement agencies access to web browsing data without a warrant, dramatically expanding the government’s surveillance powers in the midst of the COVID-19 pandemic.
The power grab was led by Senate majority leader Mitch McConnell as part of a reauthorization of the Patriot Act, which gives federal agencies broad domestic surveillance powers. Sens. Ron Wyden (D-OR) and Steve Daines (R-MT) attempted to remove the expanded powers from the bill with a bipartisan amendment.
But in a shock upset, the privacy-preserving amendment fell short by a single vote after several senators who would have voted “Yes” failed to show up to the session, including Bernie Sanders. Nine Democratic senators also voted “No,” leaving the amendment short of the 60-vote threshold it needed to pass.
“The Patriot Act should be repealed in its entirety, set on fire and buried in the ground,” Evan Greer, the deputy director of Fight For The Future, told Motherboard. “It’s one of the worst laws passed in the last century, and there is zero evidence that the mass surveillance programs it enables have ever saved a single human life.”
Privacy Enhancements for Android (PE for Android) is a platform for exploring concepts in regulating access to private information on mobile devices. The goal is to create an extensible privacy system that abstracts away the details of various privacy-preserving technologies. PE for Android allows app developers to safely leverage state-of-the-art privacy techniques without knowledge of esoteric underlying technologies. Further, PE for Android helps users to take ownership of their private information by presenting them with more intuitive controls and permission enforcement. The platform was developed as a fork of the Android Open Source Project (AOSP) release for Android 9 “Pie” and can be installed as a Generic System Image (GSI) on a Project Treble-compliant device.
Under DARPA’s Brandeis program, a team of researchers led by Two Six Labs and Raytheon BBN Technologies have developed a platform called Privacy Enhancements for Android (PE for Android) to explore more expressive concepts in regulating access to private information on mobile devices. PE for Android seeks to create an extensible privacy system that abstracts away the details of various privacy-preserving technologies, allowing application developers to utilize state-of-the-art privacy techniques, such as secure multi-party computation and differential privacy, without knowledge of their underlying esoteric technologies. Importantly, PE for Android allows mobile device users to take ownership of their private information by presenting them with more intuitive controls and permission enforcement options.
You can’t make access to your website’s content dependent on a visitor agreeing that you can process their data — aka a ‘consent cookie wall’. Not if you need to be compliant with European data protection law.
That’s the unambiguous message from the European Data Protection Board (EDPB), which has published updated guidelines on the rules around online consent to process people’s data.
Under pan-EU law, consent is one of six lawful bases that data controllers can use when processing people’s personal data.
But in order for consent to be legally valid under Europe’s General Data Protection Regulation (GDPR) there are specific standards to meet: It must be clear and informed, specific and freely given.
Hence cookie walls that demand ‘consent’ as the price for getting inside the club are not only an oxymoron but run into a legal brick wall.
No consent behind a cookie wall
The regional cookie wall has been crumbling for some time, as we reported last year — when the Dutch DPA clarified its guidance to ban cookie walls.
The updated guidelines from the EDPB look intended to hammer the point home. The steering body’s role is to provide guidance to national data protection agencies to encourage a more consistent application of data protection rules.
The EDPB’s intervention should — should! — remove any inconsistencies of interpretation on the updated points by national agencies of the bloc’s 27 Member States. (Though compliance with EU data protection law tends to be a process; aka it’s a marathon not a sprint, and on the cookie wall issue the ‘runners’ have been going around the track for a considerable time now.)
As we noted in our report on the Dutch clarification last year, the Internet Advertising Bureau Europe was operating a full cookie wall — instructing visitors to ‘agree’ to its data processing terms if they wished to view the content.
The problem that we pointed out is that that wasn’t a free choice. Yet EU law requires a free choice for consent to be legally valid. So it’s interesting to note the IAB Europe has, at some point since, updated its cookie consent implementation — removing the cookie wall and offering a fairly clear (if nudged) choice to visitors to either accept or deny cookies for “aggregated statistics”…
As we said at the time the writing was on the wall for consent cookie walls.
The EDPB document includes the below example to illustrate the salient point that consent cookie walls do not “constitute valid consent, as the provision of the service relies on the data subject clicking the ‘Accept cookies’ button. It is not presented with a genuine choice.”
It’s hard to get clearer than that, really.
Scrolling never means ‘take my data’
A second area to get attention in the updated guidance, as a result of the EDPB deciding there was a need for additional clarification, is the issue of scrolling and consent.
Simply put: Scrolling on a website or digital service can not — in any way — be interpreted as consent.
Or, as the EDPB puts it, “actions such as scrolling or swiping through a webpage or similar user activity will not under any circumstances satisfy the requirement of a clear and affirmative action” [emphasis ours].
The question of whether you own your digital purchases, or whether you’re simply licensing that content from whatever tech giant du jour hosts it, has always been a bit of a black box for consumers. Recently, this lack of transparency has prompted one California user to file a lawsuit against Amazon for saying customers can “purchase” movies on Prime Video when, in actuality, the company can cut off access to that content at its discretion.
Yeah, in case you didn’t know, you don’t really own what you buy on Prime Video. Even though the service bills this content as “Your Video Purchases”, Prime Video’s terms of service outline how all purchases are really just long-term rentals that can disappear from your library at any time:
“Purchased Digital Content will generally continue to be available to you for download or streaming from the Service, as applicable, but may become unavailable due to potential content provider licensing restrictions or for other reasons, and Amazon will not be liable to you if Purchased Digital Content becomes unavailable for further download or streaming.”
None of this is made apparent unless you go digging into Prime Video’s ToS pages, though, which lawyers for the suit’s plaintiff, Amanda Caudel, argue is Amazon’s attempt to “deceive, mislead and defraud consumers.” Per the class action complaint, as first spotted by TechDirt:
“Reasonable consumers will expect that the use of a “Buy” button and the representation that their Video Content is a “Purchase” means that the consumer has paid for full access to the Video Content and, like any bought product, that access cannot be revoked.
Unfortunately for consumers who chose the “Buy” option, this is deceptive and untrue. Rather, the ugly truth is that Defendant secretly reserves the right to terminate the consumers’ access and use of the Video Content at any time, and has done so on numerous occasions, leaving the consumer without the ability to enjoy their already-bought Video Content.”
Defendant’s representations are misleading because they give the impression that the Video Content is purchased – i.e. the person owns it – when in fact that is not true because Defendant or others may revoke access to the Video Content at any time and for any reason.
And since renting a movie for 30 days also costs significantly less than purchasing it on Prime Video, usually around $5 compared to $14.99-19.99, the lawsuit argues that Amazon uses this deceptive distinction to profit at consumers’ expense. Particularly since no user agreement pops up upon purchase to explain to customers that they won’t actually own the video content after hitting “Buy”. There’s no such disclaimer on the movie’s purchase page either.
This Guide has been developed by experts from IAB Europe’s Programmatic Trading Committee (PTC) to prepare brands, agencies, publishers and tech intermediaries for the much-anticipated post third-party cookie advertising ecosystem.
It provides background to the current use of cookies in digital advertising today and an overview of the alternative solutions being developed. As solutions evolve, the PTC will be updating this Guide on a regular basis to provide the latest information and guidance on market alternatives to third-party cookies.
The Guide, available below as an e-book or PDF, helps to answer the following questions:
What factors have contributed to the depletion of the third-party cookie?
How will the depletion of third-party cookies impact stakeholders and the wider industry including proprietary platforms?
How will the absence of third-party cookies affect the execution of digital advertising campaigns?
What solutions currently exist to replace the usage of third-party cookies?
What industry solutions are currently being developed and by whom?
How can I get involved in contributing to the different solutions?
Last year, Apple accused a cybersecurity startup based in Florida of infringing its copyright by developing and selling software that allows customers to create virtual iPhone replicas. Critics have called Apple’s lawsuit against the company, called Corellium, “dangerous” as it may shape how security researchers and software makers can tinker with Apple’s products and code.
The lawsuit, however, has already produced a tangible outcome: very few people, especially current and former customers and users, want to talk about Corellium, which sells the eponymous software that virtualizes iPhones and Android devices. During the lawsuit’s proceedings, Apple has sought information from companies that have used the tool, which emulates iOS on a computer, allowing researchers to probe potential iPhone vulnerabilities in a forgiving and easy-to-use environment.
[…]
“I don’t know if they intended it but when they name individuals at companies that have spoken in favor [of Corellium], I definitely believe retribution is possible,” the researcher added, referring to Apple’s subpoena to the Spanish finance giant Santander Bank, which named an employee who had tweeted about Corellium.
[…]
A security researcher, who specializes in offensive security and asked to remain anonymous, said that he would definitely “have legal look into it beforehand if I needed [Corellium’s] stuff,” arguing that he’d be wary of Apple getting involved.
Three other researchers who specialize in hacking Apple software declined to comment citing the risk of some sort of retaliation from Apple.
[…]
In January, Apple subpoenaed the defense contractor L3Harris and Santander Bank, requesting information on how they use Corellium, all communications they’ve had with the startup, internal communications about their products, and any contracts they’ve signed with the company, among other information.
Mark Dowd, the founder of Azimuth Security, a cybersecurity startup that specializes in developing hacking tools for governments that’s now part of L3Harris, said last year that he couldn’t comment about Corellium “because [Apple] mention[ed] us in the original filing.” (Dowd did not respond to a request for comment this week.)
[…]
Some researchers, however, are not afraid of Apple. Elias Naur uses Corellium to test code written in the Go language for mobile operating systems. Before Corellium, Naur said he had to test code on two busted old phones plugged in under his couch. Naur said he’s “not worried Apple will come after Corellium’s customers” and is still using the software.
[…]
In this David v. Goliath battle, as Forbes called it, many people are choosing to stay away from David even before seeing who wins.
Researchers have created a new system that helps Internet users ensure their online data is secure.
The software-based system, called Mitigator, includes a plugin users can install in their browser that will give them a secure signal when they visit a website verified to process its data in compliance with the site’s privacy policy.
“Privacy policies are really hard to read and understand,” said Miti Mazmudar, a PhD candidate in Waterloo’s David R. Cheriton School of Computer Science. “What we try to do is have a compliance system that takes a simplified model of the privacy policy and checks the code on the website’s end to see if it does what the privacy policy claims to do.
“If a website requires you to enter your email address, Mitigator will notify you if the privacy policy stated that this wouldn’t be needed or if the privacy policy did not mention the requirement at all.”
Mitigator can work on any computer, but the companies that own the website servers must have machines with a trusted execution environment (TEE). TEE, a secure area of modern server-class processors, guarantees the protection of code and data loaded in it with respect to confidentiality and integrity.
“The big difference between Mitigator and prior systems that had similar goals is that Mitigator’s primary focus is on the signal it gives to the user,” said Ian Goldberg, a professor in Waterloo’s Faculty of Mathematics. “The important thing is not just that the company knows their software is running correctly; we want the user to get this assurance that the company’s software is running correctly and is processing their data properly and not just leaving it lying around on disk to be stolen.
“Users of Mitigator will know whether their data is being properly protected, managed, and processed while the companies will benefit in that their customers are happier and more confident that nothing untoward is being done with their data.”
Britons will not be able to ask NHS admins to delete their COVID-19 tracking data from government servers, digital arm NHSX’s chief exec Matthew Gould admitted to MPs this afternoon.
Gould also told Parliament’s Human Rights Committee that data harvested from Britons through NHSX’s COVID-19 contact tracing app would be “pseudonymised” – and appeared to leave the door open for that data to be sold on for “research”.
The government’s contact-tracing app will be rolled out in Britain this week. A demo seen by The Register showed its basic consumer-facing functions. Key to those is a big green button that the user presses to send 28 days’ worth of contact data to the NHS.
Screenshot of the NHSX COVID-19 contact tracing app
Written by tech arm NHSX, Britain’s contact-tracing app breaks with international convention by opting for a centralised model of data collection, rather than storing data only locally on users’ phones.
In response to questions from Scottish Nationalist MP Joanna Cherry this afternoon, Gould told MPs: “The data can be deleted for as long as it’s on your own device. Once uploaded all the data will be deleted or fully anonymised with the law, so it can be used for research purposes.”
Law professor Brian Frye has spent the last month or so making a really important point regarding the never-ending “is copyright property” debate — saying that if copyright is property, then copyright holders should be seen and treated as landlords. This whole approach can be summed up in the slightly snarky and trollish phrase: “OK, Landlord” used to respond to all sorts of nonsensical takes in support of more egregious copyright policies:
Like everyone, the copyright cops want to have their cake and eat it too. They claim that copyright is a kind of property, so the law should protect it just like any other kind of property. But they also claim that authors are morally entitled to copyright ownership because of their special contribution to society. I find both claims uncompelling, but in any case, they can’t have it both ways. If copyright is a property right, they have to own it and can’t claim the moral high ground.
What’s been most telling about this useful analogy is just how angry it seems to make copyright holders and copyright-system supporters. They react very negatively to the suggestion that they are “landlords” and any money they make from copyright licensing is a form of “rent.” But if you’re going to claim that your copyright is property, then, well, the landlord moniker fits.
But the copyright cops persist, insisting that copyright is property, so copyright owners are entitled to the entire value of the works they create because that’s what property means. Accordingly, copying a work of authorship without permission is theft, even though it only increases the number of copies, because the copyright owner didn’t profit. And even consuming a work of authorship without permission is wrong because copyright owners are entitled to profit from every use of the work they own.
The circularity of these claims should be obvious: copyright is property because copyright owners receive exclusive rights, and copyright owners receive exclusive rights because copyright is property. But let’s run with it. Okay, copyright is property and copyright owners are property owners. Why are copyright owners entitled to profit from the use of their property?
Because they’re landlords. Copyright owners want to own the property metaphor? Then, let ‘em own it. If copyright is property, then they are landlords and copyright profits are rent. Just like landlords, copyright owners simply make a capital investment in creating or acquiring a property, then sit back and wait for the profits to roll in.
As Frye notes, the whole idea that copyright holders are landlords (even as they claim that they are holding property that you need to pay them to use), shows the sort of emotional trickery that copyright holders use in also claiming some sort of moral right to their works as “creators.” They’re picking and choosing which arguments to use when — and, have long tried to imbue some sort of magical mystical status on holding the copyright to creativity (which is often quite different than creating itself).
Of course, the real issue at play is that many of the most vocal copyright system supporters want to believe that they’re “artists” who are fighting the system and speaking for the oppressed… and being a “landlord” who is renting out their property goes against that self-image. But as Frye notes, they can’t really have it both ways. If they want to declare that they have property rights, they should be perfectly fine with recognizing that they are the current landlords for that “property.”
When I began writing about the dot-org sale, it was out of concern for the loss of what I felt strongly was long understood to be a unique place in the Internet’s landscape. Like a national park, dot-org deserved special protection. It turns out lots of people and organizations agreed.
On April 30th, 2020, the ICANN Board upheld these values. It unanimously withheld consent for a change of control of the Public Interest Registry to a private equity firm. There were real questions about public support, financial stability and ultimately about whether the proposal was in the best interest of those most affected, dot-org domain owners.
Ethos, PIR and ISOC failed to respond to any in a convincing manner. They failed to gather any material support for their approach. As of today, the #savedotorg campaign has nearly 27,000 supporters and 2,000 nonprofits behind it. It dwarfs any campaign Internet governance has ever seen. There’s no way to de-legitimize such an outpouring of concern.
[…]
ISOC and PIR’s announcements seem to imply that things will simply go back to the way they were. PIR will continue to run dot-org and ISOC will continue to do what it does. This is the same kind of magical thinking that led to the idea that dot-org could be sold to a private equity firm. It is not grounded in the reality of how decisions that impact massive global communities are made.
Here’s what needs to be done:
First, ISOC and PIR leadership must recognize and apologize for the harm and uncertainty that they have caused both nonprofits and Internet governance. There never should have needed to be a #savedotorg campaign, because dot-org should never have been put at risk.
Second, The ISOC board should invite the leadership of the organizations that led the #SaveDotOrg campaign to an open dialogue to understand their concerns and priorities for the future of dot-org. This dialogue should recognize that it may be agreed that ISOC and PIR may no longer be the appropriate stewards for dot-org.
Third, the leadership of the #SaveDotOrg campaign needs to recognize that this was a closed-door decision taken in secret by a few actors. There are many skilled professionals that work at both PIR and ISOC. While ISOC and PIR may have to change dramatically, solutions must be sought that consider the value and future of these organizations, their staff, and their members.
Fourth, all parties should agree to work together with ICANN to chart a course of action that builds confidence and faith in the multi-stakeholder model of Internet governance. While there are many challenges with this model, one being how messy it seems, in the end the right decisions were taken. We must all come together to defend the model that has built and will continue to sustain a single global Internet.
Browser maker Mozilla is working on a new service called Private Relay that generates unique aliases to hide a user’s email address from advertisers and spam operators when filling in online forms.
The service entered testing last month and is currently in a closed beta, with a public beta currently scheduled for later this year, ZDNet has learned.
Private Relay will be available as a Firefox add-on that lets users generate a unique email address — an email alias — with one click.
The user can then enter this email address in web forms to send contact requests, subscribe to newsletters, and register new accounts.
“We will forward emails from the alias to your real inbox,” Mozilla says on the Firefox Private Relay website.
“If any alias starts to receive emails you don’t want, you can disable it or delete it completely,” the browser maker said.
The concept of an email alias has existed for decades, but managing them has always been a chore, or email providers didn’t allow users access to such a feature.
Through Firefox Private Relay, Mozilla hopes to provide an easy to use solution that can let users create and destroy email aliases with a few button clicks.
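The alias idea is simple enough to sketch. The following is a minimal, hypothetical illustration of how an alias relay could work in principle (it is not Mozilla’s actual implementation; the class, domain, and method names are invented for the example): generate a random address, map it to the real inbox, forward mail through the mapping, and drop mail to any alias the user disables.

```python
import secrets

class AliasRelay:
    """Toy model of an email-alias relay (illustrative only)."""

    def __init__(self, relay_domain: str):
        self.relay_domain = relay_domain
        # alias address -> (real address, enabled flag)
        self.aliases = {}

    def create_alias(self, real_address: str) -> str:
        # A random local part means the alias reveals nothing
        # about the user's real address.
        alias = f"{secrets.token_hex(8)}@{self.relay_domain}"
        self.aliases[alias] = (real_address, True)
        return alias

    def disable_alias(self, alias: str) -> None:
        real, _ = self.aliases[alias]
        self.aliases[alias] = (real, False)

    def route(self, alias: str):
        """Return the forwarding target, or None if disabled/unknown."""
        real, enabled = self.aliases.get(alias, (None, False))
        return real if enabled else None

relay = AliasRelay("relay.example")
alias = relay.create_alias("user@example.com")
print(relay.route(alias))   # forwards to the real inbox
relay.disable_alias(alias)
print(relay.route(alias))   # None: mail to this alias is now dropped
```

The privacy property falls out of the indirection: websites only ever see the disposable alias, so the user can cut off a spamming sender without touching their real address.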
Brave, a maker of a pro-privacy browser, has lodged complaints with the European Commission against 27 EU Member States for under resourcing their national data protection watchdogs.
It’s asking the European Union’s executive body to launch an infringement procedure against Member State governments, and even refer them to the bloc’s top court, the European Court of Justice, if necessary.
“Article 52(4) of the GDPR [General Data Protection Regulation] requires that national governments give DPAs the human and financial resources necessary to perform their tasks,” it notes in a press release.
Brave has compiled a report to back up the complaints — in which it chronicles a drastic shortage of tech expertise and budget resource among Europe’s privacy agencies to enforce the region’s data protection framework.
Lack of proper resources to ensure the regulation’s teeth are able to clamp down on bad behavior, as the law’s drafters intended, has been a long-standing concern.
In the Irish data watchdog’s annual report in February (the agency regulates most of big tech in Europe), the lack of any decisions in major cross-border cases against a roll-call of tech giants loomed large, despite plenty of worthy filler, with reams of stats included to illustrate the massive caseload of complaints the agency is now dealing with.
Ireland’s decelerating budget and headcount in the face of rising numbers of GDPR complaints is a key concern highlighted by Brave’s report.
Per the report, half of EU data protection agencies have what it dubs a small budget (sub €5M), while only five of Europe’s 28 national GDPR enforcers have more than 10 “tech specialists”, as it describes them.
“Almost a third of the EU’s tech specialists work for one of Germany’s Länder (regional) or federal DPAs,” it warns. “All other EU countries are far behind Germany.”
“Europe’s GDPR enforcers do not have the capacity to investigate Big Tech,” is its top-line conclusion.
“If the GDPR is at risk of failing, the fault lies with national governments, not with the data protection authorities,” said Dr Johnny Ryan, Brave’s chief policy & industry relations officer, in a statement. “Robust, adversarial enforcement is essential. GDPR enforcers must be able to properly investigate ‘big tech’, and act without fear of vexatious appeals. But the national governments of European countries have not given them the resources to do so. The European Commission must intervene.”
It’s worth noting that Brave is not without its own commercial interest here. It absolutely has skin in the game, as a provider of privacy-sensitive adtech.
When he looked around the Web on the device’s default Xiaomi browser, it recorded all the websites he visited, including search engine queries whether with Google or the privacy-focused DuckDuckGo, and every item viewed on a news feed feature of the Xiaomi software. That tracking appeared to be happening even if he used the supposedly private “incognito” mode.
The device was also recording what folders he opened and to which screens he swiped, including the status bar and the settings page. All of the data was being packaged up and sent to remote servers in Singapore and Russia, though the Web domains they hosted were registered in Beijing.
Meanwhile, at Forbes’ request, cybersecurity researcher Andrew Tierney investigated further. He also found browsers shipped by Xiaomi on Google Play—Mi Browser Pro and the Mint Browser—were collecting the same data. Together, they have more than 15 million downloads, according to Google Play statistics.
[…]
And there appear to be issues with how Xiaomi is transferring the data to its servers. Though the Chinese company claimed the data was being encrypted when transferred in an attempt to protect user privacy, Cirlig found he was able to quickly see just what was being taken from his device by decoding a chunk of information that was hidden with a form of easily crackable encoding, known as base64. It took Cirlig just a few seconds to change the garbled data into readable chunks of information.
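To see why base64 offers no protection, here is a quick demonstration (with a made-up payload, not actual Xiaomi data): encoding and decoding are both one standard-library call, and anyone intercepting the traffic can reverse the encoding instantly.

```python
import base64

# base64 is an encoding, not encryption: there is no key, so
# "decoding" it requires no secret at all.
# Hypothetical payload for illustration only.
payload = base64.b64encode(b"visited=duckduckgo.com/?q=private+search")

decoded = base64.b64decode(payload).decode()
print(decoded)  # prints visited=duckduckgo.com/?q=private+search
```

This is presumably why Cirlig needed only seconds: turning the garbled-looking data back into readable text is a mechanical transformation, not codebreaking.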
“My main concern for privacy is that the data sent to their servers can be very easily correlated with a specific user,” warned Cirlig.
[…]
But, as pointed out by Cirlig and Tierney, it wasn’t just the website or Web search that was sent to the server. Xiaomi was also collecting data about the phone, including unique numbers for identifying the specific device and Android version. Cirlig said such “metadata” could “easily be correlated with an actual human behind the screen.”
Xiaomi’s spokesperson also denied that browsing data was being recorded under incognito mode. Both Cirlig and Tierney, however, found in their independent tests that their web habits were sent off to remote servers regardless of what mode the browser was set to, providing both photos and videos as proof.
[…]
Both Cirlig and Tierney said Xiaomi’s behavior was more invasive than other browsers like Google Chrome or Apple Safari. “It’s a lot worse than any of the mainstream browsers I have seen,” Tierney said. “Many of them take analytics, but it’s about usage and crashing. Taking browser behavior, including URLs, without explicit consent and in private browsing mode, is about as bad as it gets.”
[…]
Cirlig also suspected that his app use was being monitored by Xiaomi, as every time he opened an app, a chunk of information would be sent to a remote server. Another researcher who’d tested Xiaomi devices, but was under an NDA that prevented him from discussing the matter openly, said he’d seen the manufacturer’s phones collect such data. Xiaomi didn’t respond to questions on that issue.
[…]
Late in his research, Cirlig also discovered that Xiaomi’s music player app on his phone was collecting information on his listening habits: what songs were played and when.
It’s a bit of a puff piece, as American software also records all this data and sends it home. The article also seems to suggest that the whole phone is always sending data home, but it only really covers the browser and a music player app. So yes, you should have installed Firefox and used it as your browser as soon as you got the phone, but that goes for any phone that ships with Safari or Chrome as a browser too. A bit of an anti-Chinese storm in a teacup.
The design of Australia’s COVIDSafe contact-tracing app creates some unintended surveillance opportunities, according to a group of four security pros who unpacked its .APK file.
Penned by independent security researcher Chris Culnane; University of Melbourne tutor, cryptography researcher and masters student Eleanor McMurtry; developer Robert Merkel; and Australian National University associate professor and Thinking Security CEO Vanessa Teague, and posted to GitHub, the analysis notes three concerning design choices.
The first-addressed is the decision to change UniqueIDs – the identifier the app shares with other users – once every two hours and for devices to only accept a new UniqueID if the app is running. The four researchers say this will make it possible for the government to understand if users are running the app.
“This means that a person who chooses to download the app, but prefers to turn it off at certain times of the day, is informing the Data Store of this choice,” they write.
The authors also suggest that persisting with a UniqueID for two hours “greatly increases the opportunities for third-party tracking.”
“The difference between 15 minutes’ and two hours’ worth of tracking opportunities is substantial. Suppose for example that the person has a home tracking device such as a Google home mini or Amazon Alexa, or even a cheap Bluetooth-enabled IoT device, which records the person’s UniqueID at home before they leave. Then consider that if the person goes to a shopping mall or other public space, every device that cooperates with their home device can share the information about where they went.”
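The linkability concern above can be sketched concretely. In this hypothetical illustration (invented IDs and timings, not COVIDSafe data), two cooperating observers, one at home and one at a mall, can correlate their logs whenever they both saw the same UniqueID; a 2-hour rotation leaves the ID unchanged across the 90-minute trip, while a 15-minute rotation would not.

```python
from datetime import datetime, timedelta

def linkable(sightings_a, sightings_b):
    """IDs logged by both observers, i.e. sightings they can correlate."""
    return {uid for _, uid in sightings_a} & {uid for _, uid in sightings_b}

t0 = datetime(2020, 4, 27, 8, 0)

# With a 2-hour rotation, the ID broadcast at home at 08:00 is still
# being broadcast at the shopping mall at 09:30, so the two logs link.
home_2h = [(t0, "id-A")]
mall_2h = [(t0 + timedelta(minutes=90), "id-A")]
print(linkable(home_2h, mall_2h))    # prints {'id-A'}: trip is linked

# With a 15-minute rotation, the mall observer sees a fresh ID,
# so intersecting the logs yields nothing.
home_15m = [(t0, "id-A")]
mall_15m = [(t0 + timedelta(minutes=90), "id-G")]
print(linkable(home_15m, mall_15m))  # prints set(): no link
```

The shorter the rotation interval, the smaller the window in which independent observers can stitch a person’s movements together, which is the researchers’ point about 15 minutes versus two hours.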
The analysis also notes that “It is not true that all the data shared and stored by COVIDSafe is encrypted. It shares the phone’s exact model in plaintext with other users, who store it alongside the corresponding Unique ID.”
That’s worrisome as:
“The exact phone model of a person’s contacts could be extremely revealing information. Suppose for example that a person wishes to understand whether another person whose phone they have access to has visited some particular mutual acquaintance. The controlling person could read the (plaintext) logs of COVIDSafe and detect whether the phone models matched their hypothesis. This becomes even easier if there are multiple people at the same meeting. This sort of group re-identification could be possible in any situation in which one person had control over another’s phone. Although not very useful for suggesting a particular identity, it would be very valuable in confirming or refuting a theory of having met with a particular person.”
The authors also worry that the app shares all UniqueIDs when users choose to report a positive COVID-19 test.
“COVIDSafe does not give them the option of deleting or omitting some IDs before upload,” they write. “This means that users consent to an all-or-nothing communication to the authorities about their contacts. We do not see why this was necessary. If they wish to help defeat COVID-19 by notifying strangers in a train or supermarket that they may be at risk, then they also need to share with government a detailed picture of their day’s close contacts with family and friends, unless they have remembered to stop the app at those times.”
The analysis also calls out some instances of UniqueIDs persisting for up to eight hours, for unknown reasons.
The authors conclude the app is not an immediate danger to users. But they do say it presents “serious privacy problems if we consider the central authority to be an adversary.”
None of which seems to be bothering Australians, who have downloaded it more than two million times in 48 hours and blown away adoption expectations.
Atlassian co-founder Mike Cannon-Brookes may well have helped things along by suggesting it’s time to “turn the … angry mob mode off.” He also offered the following advice:
When asked by non technical people “Should I install this app? Is my data / privacy safe? Is it true it doesn’t track my location?” – say “Yes” and help them understand. Fight the misinformation. Remind them how little time they think before they download dozens of free, adware crap games that are likely far worse for their data & privacy than this ever would be!
Yes, we’ve seen lots of folks using COVID-19 to push their specific agendas forward, but this one is just bizarre. UNESCO (the United Nations Educational, Scientific and Cultural Organization) is an organization that is supposed to be focused on developing education and culture around the globe. From any objective standpoint, you’d think it would be in favor of things like more open licensing and sharing of culture, but, in practice, the organization has long been hijacked by copyright maximalist interests. Almost exactly a decade ago, we were perplexed at the organization’s decision to launch an anti-piracy organization. After all, “piracy” (or sharing of culture) is actually how culture and ideas frequently spread in the developing countries where UNESCO focuses.
In our #ResiliArt launch debate on how to support culture during #COVID19, #UNESCO’s Goodwill Ambassador @jeanmicheljarre suggested eternal copyright. What do you think?
We’ve started the conversation, now we count on you to join it.
They phrase this as “just started the conversation,” but that’s a trollish setup for a terrible, terrible idea. In case you can’t see the video, it’s electronic music creator Jean-Michel Jarre suggesting eternal copyright as a way to support future artists:
Why not going to the other way around, and to create the concept of eternal copyright. And I mean by this that after a certain period of time, the rights of movies, of music, of everything, would go to a global fund to help artists, and especially artists in emerging countries.
First, we can all agree that helping to enable and support artists in emerging countries is a good general idea. I’ve seen a former RIAA executive screaming about how everyone criticizing this idea is showing their true colors in how they don’t want to support artists. But that’s just silly. The criticism of this idea is that it doesn’t “support” artists at all, and will almost certainly make creativity and supporting artists more difficult. And that’s because art and creativity have always relied on building upon the works of those who came before — and locking up everything for eternity would make that cost prohibitive for all but the wealthiest of creators. Indeed, the idea that we need copyright and copyright alone to support artists shows (yet again) just how uncreative the people who claim to support copyright can be.
It has been called the “most extreme surveillance in the history of Western democracy.” It has not once but twice been found to be illegal. It sparked the largest ever protest of senior lawyers who called it “not fit for purpose.”
And now the UK’s Investigatory Powers Act of 2016 – better known as the Snooper’s Charter – is set to expand to allow government agencies you may never have heard of to trawl through your web histories, emails, or mobile phone records.
In a memorandum [PDF] first spotted by The Guardian, the British government is asking that five more public authorities be added to the list of bodies that can access data scooped up under the nation’s mass-surveillance laws: the Civil Nuclear Constabulary, the Environment Agency, the Insolvency Service, the UK National Authority for Counter Eavesdropping (UKNACE), and the Pensions Regulator.
The memo explains why each should be given the extraordinary powers, in general and specifically. In general, the five agencies “are increasingly unable to rely on local police forces to investigate crimes on their behalf,” and so should be given direct access to the data pipe itself.
Five Whys
The Civil Nuclear Constabulary (CNC) is a special armed police force that does security at the UK’s nuclear sites and when nuclear materials are being moved. It should be given access even though “the current threat to nuclear sites in the UK is assessed as low” because “it can also be difficult to accurately assess risk without the full information needed.”
The Environment Agency investigates “over 40,000 suspected offences each year,” the memo stated, which is why it should also be able to ask ISPs to hand over people’s most sensitive communications information, in order “to tackle serious and organised waste crime.”
The Insolvency Service investigates breaches of company director disqualification orders. Some of those it investigates get put in jail, so it is essential that the service be allowed “to attribute subscribers to telephone numbers and analyse itemised billings” as well as be able to see what IP addresses are accessing specific email accounts.
UKNACE, a little-known agency that we have taken a look at in the past, is home of the real-life Qs, and one of its jobs is to detect attempts to eavesdrop on UK government offices. It needs access to the nation’s communications data “in order to identify and locate an attacker or an illegal transmitting device”, the memo claimed.
And lastly, the Pensions Regulator, which checks that companies have added their employees to their pension schemes, needs to be able to delve into anyone’s emails so it can “secure compliance and punish wrongdoing.”
Taken together, the requests reflect exactly what critics of the Investigatory Powers Act feared would happen: that a once-shocking power that was granted on the back of terrorism fears is being slowly extended to even the most obscure government agency for no reason other than that it will make bureaucrats’ lives easier.
None of the agencies would be required to apply for warrants to access people’s internet connection data, and they would be added to another 50-plus agencies that already have access, including the Food Standards Agency, Gambling Commission, and NHS Business Services Authority.
Safeguards
One of the biggest concerns remains that there are insufficient safeguards in place to prevent the system being abused; concerns that only grow as the number of people that have access to the country’s electronic communications grows.
It is also still not known precisely how all these agencies access the data that is accumulated, or what restrictions are in place beyond a broad-brush “double lock” authorization process that requires a former judge (a judicial commissioner, or JC) to approve a minister’s approval.
Among startups and tech companies, Stripe seems to be the near-universal favorite for payment processing. When I needed paid subscription functionality for my new web app, Stripe felt like the natural choice. After integration, however, I discovered that Stripe’s official JavaScript library records all browsing activity on my site and reports it back to Stripe. This data includes:
Every URL the user visits on my site, including pages that never display Stripe payment forms
Telemetry about how the user moves their mouse cursor while browsing my site
Unique identifiers that allow Stripe to correlate visitors to my site against other sites that accept payment via Stripe
This post shares what I found, who else it affects, and how you can limit Stripe’s data collection in your web applications.
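One pragmatic way to limit this kind of collection, sketched below for a plain browser app, is to include Stripe.js only on routes that actually render a payment form, rather than site-wide. The route list and helper names here are hypothetical illustrations, not part of Stripe's API; only the script URL is Stripe's official loader.

```javascript
// Hedged sketch: load Stripe.js only on pages that need a payment form,
// so the script cannot observe browsing on the rest of the site.
// needsStripe(), maybeLoadStripe() and the route list are hypothetical names.

// Pure helper: does this path actually need Stripe?
function needsStripe(pathname, checkoutRoutes = ['/checkout', '/subscribe']) {
  return checkoutRoutes.some((route) => pathname.startsWith(route));
}

// Browser-side: inject the official script tag only when needed.
function maybeLoadStripe(pathname) {
  if (!needsStripe(pathname)) return null;
  const script = document.createElement('script');
  script.src = 'https://js.stripe.com/v3/'; // Stripe's official loader URL
  script.async = true;
  document.head.appendChild(script);
  return script;
}
```

The trade-off is that Stripe documents loading its script on every page for fraud detection, so restricting it this way is a deliberate choice to favor visitor privacy over Stripe's fraud signals.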
As Rolling Stone reported, the app is now playing host to virtual sex parties, “play parties,” and group check-ins which have become, as one host said, “the mutual appreciation jerk-off society.”
According to Zoom’s “acceptable use” policy, users may not use the technology to “engage in any activity that is harmful, obscene, or indecent, particularly as such would be understood in the context of business usage.” The policy specifies that this includes “displays of nudity, violence, pornography, sexually explicit material, or criminal activity.”
Zoom says that the platform uses ‘machine learning’ to identify accounts in violation of its policies — though it has remained vague about its methods for identifying offending users and content.
“We encourage users to report suspected violations of our policies, and we use a mix of tools, including machine learning, to proactively identify accounts that may be in violation,” a spokesperson for Zoom told Rolling Stone.
While Zoom executives did not respond to the outlet’s questions about the specifics of the machine-learning tools or how the platform might be alerted to nudity and pornographic content, a spokesperson did add that the company will take a “number of actions” against people found to be in violation of the specified acceptable use.
When reached for comment, a spokesperson for Zoom referred Insider to the “acceptable use” policy as well as the platform’s privacy policy which states that Zoom “does not monitor your meetings or its contents.”
The spokesperson also pointed to Yuan’s message in which he addressed how the company has “fallen short” of users’ “privacy and security expectations,” referencing instances of harassment and Zoom-bombing, and laid out the platform’s action plan going forward.
TalkTalk broadband users are complaining they can’t opt out of its Error Replacement Service, which swaps NXDomain DNS results with an IP address. And if that sounds familiar, it should. Users of the budget ISP complained about the very same issue back in 2014.
The Error Replacement Service redirects lookups for domains that don’t exist, like those created by fat-fingered address bar typos, to a TalkTalk-run webpage. El Reg reader Louis described it thusly:
“If I type a non-existing domain in the browser, instead of getting the proper ‘Hmm. We’re having trouble finding that site’ message, I get a list of ‘search results’ vaguely linked to the non-existing domain. This is mildly annoying, as I’d rather not send my typos to some random advertiser,” he said.
His woes don’t stop there – the “service” also prevents him from logging into his work VPN. “During connection, instead of seeing the login window, I see a TalkTalk-branded page with ‘search results’ and I can’t complete the login process,” he complained.
This isn’t an isolated problem. The TalkTalk support forum is flooded with similar complaints, no doubt partially thanks to the rise in home working caused by the COVID-19 epidemic.
TalkTalk offers a way to opt out of the service, requiring users to visit a specific web page and then restart their router. But this appears to be somewhat ineffective, with both Twitter and the TalkTalk forum filled with complaints.
Six years ago, Twitter sued the US government in an attempt to detail surveillance requests the company had received, but a federal judge on Friday ruled in favor of the government’s case that detailing the requests would jeopardize the country’s safety.
If Twitter revealed the number of surveillance requests it received each calendar quarter, it “would be likely to lead to grave or imminent harm to the national security,” US District Judge Yvonne Gonzalez Rogers concluded after reviewing classified information from the government. See below for the full ruling.
“While we are disappointed with the court’s decision, we will continue to fight for transparency,” Twitter said in a statement Saturday.
The ruling shows the difficulties of balancing privacy and security on the internet. Public posts and private communications have opened up a treasure trove of information that law enforcement and intelligence services can investigate, and people may not suspect the government is listening in. On the other hand, encryption technology also has opened up communication conduits that are fundamentally impenetrable to government and law enforcement.
In Twitter’s transparency report, now updated for six-month periods, the company publishes numbers on law enforcement information requests, copyright infringement allegations, attempts to spread disinformation, reports of abuse, and other goings-on. The company argued in its 2014 lawsuit it shouldn’t be barred from revealing detailed tallies of national security-related information requests.
“We think the government’s restriction on our speech not only unfairly impacts our users’ privacy, but also violates our First Amendment right to free expression and open discussion of government affairs,” Twitter argued at the time.
Six years later, Twitter says transparency is still important to show how it interacts with governments.
There’s a scene in Touchstone Pictures’ 1984 movie Splash where a young Tom Hanks watches a beautiful naked mermaid run off into the ocean from which she came. In the original version, the camera follows Hanks’ gaze, showing a brief glimpse of a naked butt. Splash received a PG rating because of the shot (and the insinuation that came with it), but people watching the movie on Disney Plus are greeted with an entirely different version of the scene.
In the re-edited version, which went viral, thanks to the tweet below, Disney used CGI hair to cover actress Daryl Hannah’s body. A Disney representative confirmed to The Verge that a “few scenes” from Splash were “slightly edited to remove nudity,” but they did not specify when the edits were made.
The representative also confirmed that Splash’s rating would revert from PG-13 on Disney Plus (different from the original) back to PG. It’s likely that the original film (with its brief nudity) would have been rated PG-13 if it came out a few months later, but Splash was released in March 1984, and the PG-13 rating didn’t exist until July 1984.
Disney+ didn’t want butts on their platform so they edited Splash with digital fur technology pic.twitter.com/df8XE0G9om
The change has bewildered social media users. If nudity was the issue, why not bring Splash to Hulu, Disney’s other streaming service geared toward older adults? Others have asked why Disney felt the need to re-edit the scene at all; Disney Plus allows movies up to a PG-13 rating on its service, and Splash was only rated PG. Another person pointed out that a scene in Thor: Ragnarok that includes Hulk’s naked butt wasn’t censored when it was brought to Disney Plus. (Although, there’s likely a difference in perception between actual nudity and nudity as it pertains to a completely CGI character.)
Splash is the most egregious, albeit hysterical example of movies being re-edited for Disney Plus, but it’s not a unique case. A new version of Star Wars: A New Hope appeared on Disney Plus the day the streaming service launched, one that was “made by George” prior to the Disney acquisition, the company confirmed to The Verge at the time.
Disney has also instituted pre-roll messages that play before certain movies to inform viewers that scenes have been edited for specific reasons. The company removed the word “fuck” from movies like Adventures in Babysitting and Free Solo, took out racial slurs that appeared in older titles like The Adventures of Bullwhip Griffin, and edited other material in movies like Empire of Dreams that Disney no longer found suitable.
Splash has found itself in the middle of an ongoing debate over media being altered in digital spaces. It’s a debate that’s raged for decades; fans were upset when George Lucas edited A New Hope, making it so Greedo shot first instead of Han. People bemoaned Lucas and 20th Century Fox for not releasing the original version of the film anywhere, either. The only legal versions of A New Hope that exist for people to buy, download, or stream today feature Greedo shooting first. It wasn’t just that Lucas and Fox replaced the original scene with a slightly altered one, but the original also wasn’t available to purchase when reprints were made.
Last March, Simpsons producer James L. Brooks announced that future syndication packages, streaming, and future DVD releases will not include the season 3 premiere episode, “Stark Raving Dad.” The episode includes voice acting from Michael Jackson, and after renewed allegations against Jackson surfaced, The Simpsons’ team and Fox decided to effectively erase the episode. “This is our book, and we’re allowed to take out a chapter,” Brooks told The Wall Street Journal at the time.
“As physical media gives way to streaming, large corporations have greater and greater control over what we can and cannot see,” Slate’s Isaac Butler wrote on the issue. “This gives them unprecedented power to disappear bothersome work. Whether we agree with a particular instance of memory-holing or not, this practice is deeply troubling, its history even more so.”
Disney is more than just a large corporation. It is arguably the monolith. Disney bought 21st Century Fox, the same corporation that Butler wrote his concerns about. Disney also built an entire sales campaign around the idea of restricting access to physical versions of its films — something it referred to for years as “The Vault.” Now, scenes are being edited for its streaming service, and all people are getting is a message explaining why. Subscribers can’t watch the original films the way they were intended.
It’s an effort from companies to be better or more appropriate, but it doesn’t always work. There are better alternatives. Take Tom and Jerry, for example. The Warner Bros. cartoon series from the 1940s came with a disclaimer about the context of certain scenes when it was originally released on DVD by Warner Home Video and then again in 2014 when the episodes were made available digitally on iTunes and Amazon Prime. Warner Bros. didn’t erase or edit the show; instead, the company decided to give it a critical examination. History can’t be erased, but people can learn from it.
Retroactively editing films to suit a certain narrative or niche is an ongoing problem that’s caused concern in movie, television, and music circles. And as more people turn to streaming services, where files can be edited on the fly, concerns over the original presentation continue to grow. What may just be bad CGI hair over a butt in an old Tom Hanks movie today could be more elaborate edits and alterations tomorrow.
India has effectively banned videoconferencing service Zoom for government users and repeated warnings that consumers need to be careful when using the tool.
The nation’s Cyber Coordination Centre has issued advice (PDF) titled “Advisory on Secure use of Zoom meeting platform by private individuals (not for use by government offices/officials for official purpose)”.
ICANN has been accused by its founding CEO and original chair of abandoning the organization’s core principles and accepting commitments it knows it cannot enforce in order to push through the sale of the .org registry later this week.
In a furious letter [PDF] from Mike Roberts and Esther Dyson to the attorneys general of California and Pennsylvania, the DNS overseer is also accused of circumventing its own decision-making processes and using the coronavirus pandemic to push through the $1.13bn sale.
The two internet veterans ask the states’ top legal representatives to step in and suspend any sale for another six months “to permit your offices, ICANN and the US Congress, to revisit the questions of ICANN’s process and public-interest regulatory duty at a point when the pandemic is no longer the public’s principal concern”.
ICANN is due to decide at a board meeting on Thursday whether to approve or block the sale of the registry from the Internet Society to private equity firm Ethos Capital.
But despite five months of discussions and repeat efforts by Ethos to tackle concerns, many in the internet community remain extremely skeptical of the deal, particularly its financing and the unusual corporate structure of Ethos, which comprises no less than six different companies, all of which were registered on the same day in 2019.
“We write to express our deep dismay at ICANN’s rejection of its defining public-interest regulatory purpose as demonstrated in the totally inappropriate proposed sale of the .ORG delegation,” the letter begins. “ICANN is failing to deliver on the purpose it was created to serve, and is abandoning its core duty to protect the public interest.”
Accountability fail
Roberts was ICANN’s first CEO and was in charge of the organization for its first three years as it attempted to put a structure around the domain name system (DNS).
Dyson was its chair for the first two years. Back then, ICANN was a semi-autonomous body overseen by the US government. That oversight ended in January 2017 after a number of new accountability measures were introduced to ensure ICANN would remain answerable to the internet community rather than itself.
The most important of those new measures is called “Empowered Community” and, in theory, allows the internet community to force the organization to hand over documents and pause decisions. It has failed on its first use, Roberts and Dyson note, referencing a letter from ICANN’s general counsel in February that rejected an effort to use the oversight.
The oversight request [PDF] asked for records covering ICANN’s consideration of the .org sale as well as details on the process it would use to gain the internet community’s approval of its decision. ICANN responded [PDF] by claiming the request “exceeded the permissible scope” of the mechanism and refused to hand over any documents.
Apple has released a set of “Mobility Trends Reports” – a trove of anonymised and aggregated data that describes how people have moved around the world in the three months from 13 January to 13 April.
The data measures walking, driving and public transport use. And as you’d expect and as depicted in the image atop this story, human movement dropped off markedly as national coronavirus lockdowns came into effect.
Apple has explained the source of the data as follows:
This data is generated by counting the number of requests made to Apple Maps for directions in select countries/regions and cities. Data that is sent from users’ devices to the Maps service is associated with random, rotating identifiers so Apple doesn’t have a profile of your movements and searches. Data availability in a particular country/region or city is subject to a number of factors, including minimum thresholds for direction requests made per day.
Apple justified the release by saying it thinks it’ll help governments understand what its citizens are up to in these viral times. The company has also said this is a limited offer – it won’t be sharing this kind of analysis once the crisis passes.
But the data is also a peek at what Apple is capable of. And presumably also what Google, Microsoft, Waze, Mapquest and other spatial services providers can do too. Let’s not even imagine what Facebook could produce. ®
The EFF’s staff technologist, also an engineer on Privacy Badger and HTTPS Everywhere, writes: Twitter greeted its users with a confusing notification this week. “The control you have over what information Twitter shares with its business partners has changed,” it said. The changes will “help Twitter continue operating as a free service,” it assured. But at what cost?
Twitter has changed what happens when users opt out of the “Allow additional information sharing with business partners” setting in the “Personalization and Data” part of its site. The changes affect two types of data sharing that Twitter does… Previously, anyone in the world could opt out of Twitter’s conversion tracking (type 1), and people in GDPR-compliant regions had to opt in. Now, people outside of Europe have lost that option. Instead, users in the U.S. and most of the rest of the world can only opt out of Twitter sharing data with Google and Facebook (type 2).
The article explains how last August Twitter discovered that its option for opting out of device-level targeting and conversion tracking “did not actually opt users out.” But after fixing that bug, “advertisers were unhappy. And Twitter announced a substantial hit to its revenue… Now, Twitter has removed the ability to opt out of conversion tracking altogether.”
While users in Europe are protected by GDPR, “users in the United States and everywhere else, who don’t have the protection of a comprehensive privacy law, are only protected by companies’ self-interest…” BoingBoing argues that Twitter “has just unilaterally obliterated all its users’ privacy choices, announcing the change with a dialog box whose only button is ‘OK.’”