The Linkielist

Linking ideas with the world

Hong Kong Protests Show Dangers of a Cashless Society

Allowing cash to die would be a grave mistake. A cashless society is a surveillance society. The recent round of protests in Hong Kong highlights exactly what we have to lose.

The current unrest concerns a proposed change to Hong Kong’s extradition laws that would allow island fugitives to be transferred to Taiwan, Macau, and mainland China. The proposal sparked mass outrage, as many Hongkongers saw it as little more than a new way for the People’s Republic of China to erode the legal sovereignty of Hong Kong.

[…]

So tens of thousands of Hongkongers took to the streets to protest what they saw as creeping tyranny from a powerful threat. But they did it in a very particular way.

In Hong Kong, most people use a contactless smart card called an “Octopus card” to pay for everything from transit and parking to retail purchases. It’s pretty handy: Just wave your tentacular card over the sensor and make your way to the platform.

But no one used their Octopus card to get around Hong Kong during the protests. The risk was that a government could view the central database of Octopus transactions to unmask these democratic ne’er-do-wells. Traveling downtown during the height of the protests? You could get put on a list, even if you just happened to be in the area.

So the savvy subversives turned to cash instead. Normally, the lines for the single-ticket machines that accept cash are populated only by a few confused tourists, while locals whiz through the turnstiles with their fintech wizardry.

But on protest days, the queues teemed with young activists clutching old school paper notes. As one protestor told Quartz: “We’re afraid of having our data tracked.”

Using cash to purchase single tickets meant that governments couldn’t connect activists’ activities with their Octopus accounts. It was instant anonymity. Sure, it was less convenient. And one-off physical tickets cost a little more than the Octopus equivalent. But the trade-off of avoiding persecution and jail time was well worth it.

What could protestors do in a cashless world? Maybe they would have to grit their teeth and hope for the best. But relying on the benevolence or incompetence of a motivated entity like China is not a great plan. Or perhaps public transit would be off-limits altogether. This could limit the protests to fit people within walking or biking distance, or people who have access to a private car—a rarity in expensive dense cities.

If some of our eggheads had their way, the protestors would have had no choice. A chorus of commentators call for an end to cash, whether because it frustrates central bank schemes, fuels black and grey markets, or is simply inefficient. We have plenty of newfangled payment options, they say. Why should modern first world economies hew to such primordial human institutions?

The answer is that there is simply no substitute for the privacy that cash, including digitized versions like cryptocurrencies, provides. Even if all of the alleged downsides that critics bemoan were true, cash would still be worth defending and celebrating for its core privacy-preserving functions. As Jerry Brito of Coin Center points out, cash protects our autonomy and indeed our human dignity.

[…]

Coin Center’s Peter Van Valkenburgh calls apps like WeChat Pay “tools for totalitarianism” for good reason: Each transaction is linked to your identity for possible viewing by Communist Party zealots. No wonder less than 8 percent of Hongkongers bother with hyper-popular WeChat Pay.

Of course, Western offerings like Apple Pay and Venmo also maintain user databases that can be mined. Users may feel protected by the legal limits that countries like the United States place on what consumer data the government can extract from private business. But as research by Van Valkenburgh points out, US anti-money laundering laws afford less Fourth Amendment protection than you might expect. Besides, we still need to trust government and businesses to do the right thing. As the Edward Snowden revelations proved, this trust can be misplaced.

Hong Kong is about as first world as you can get. Yet even in such a developed economy, power’s jealous hold is but an ill-worded reform away. We should not allow today’s relative freedom to obscure the threat that a cashless world poses to our sovereignty. Not only can “it happen here,” for some of your fellow citizens, it might already have.

Source: Hong Kong Protests Show Dangers of a Cashless Society – Reason.com

Amazon Confirms It Keeps Alexa Transcripts You Can’t Delete

Next time you use Amazon Alexa to message a friend or order a pizza, know that the record could be stored indefinitely, even if you ask to delete it.

In May, Delaware Senator Chris Coons sent Amazon CEO Jeff Bezos a letter asking why Amazon keeps transcripts of voices captured by Echo devices, citing privacy concerns over the practice. He was prompted by reports that Amazon stores the text.

“Unfortunately, recent reporting suggests that Amazon’s customers may not have as much control over their privacy as Amazon had indicated,” Coons wrote in the letter. “While I am encouraged that Amazon allows users to delete audio recordings linked to their accounts, I am very concerned by reports that suggest that text transcriptions of these audio records are preserved indefinitely on Amazon’s servers, and users are not given the option to delete these text transcripts.”

CNET first reported that Amazon’s vice president of public policy, Brian Huseman, responded to the senator on June 28, informing him that Amazon keeps the transcripts until users manually delete the information. The letter states that Amazon works “to ensure those transcripts do not remain in any of Alexa’s other storage systems.”

However, there are some Alexa-captured conversations that Amazon retains, regardless of customers’ requests to delete the recordings and transcripts, according to the letter.

As an example of records that Amazon may choose to keep despite deletion requests, Huseman mentioned instances when customers use Alexa to subscribe to Amazon’s music or delivery service, request a rideshare, order pizza, buy media, set alarms, schedule calendar events, or message friends. Huseman writes that it keeps these recordings because “customers would not want or expect deletion of the voice recording to delete the underlying data or prevent Alexa from performing the requested task.”

The letter says Amazon generally stores recordings and transcripts so users can understand what Alexa “thought it heard” and to train its machine learning systems to better understand the variations of speech “based on region, dialect, context, environment, and the individual speaker, including their age.” Such transcripts are not anonymized, according to the letter, though Huseman told Coons in his letter, “When a customer deletes a voice recording, we delete the transcripts associated with the customer’s account of both the customer’s request and Alexa’s response.”

Amazon declined to provide a comment to Gizmodo beyond what was included in Huseman’s letter.

In his public response to the letter, Coons expressed concern that it shed light on the ways Amazon is keeping some recordings.

“Amazon’s response leaves open the possibility that transcripts of user voice interactions with Alexa are not deleted from all of Amazon’s servers, even after a user has deleted a recording of his or her voice,” Coons said. “What’s more, the extent to which this data is shared with third parties, and how those third parties use and control that information, is still unclear.”

Source: Amazon Confirms It Keeps Alexa Transcripts You Can’t Delete

YouTube mystery ban on hacking videos has content creators puzzled, looks like they want you to not learn about cybersecurity

YouTube, under fire since inception for building a business on other people’s copyrights and in recent years for its vacillating policies on irredeemable content, recently decided it no longer wants to host instructional hacking videos.

The written policy first appears in the Internet Wayback Machine’s archive of web history in an April 5, 2019 snapshot. It forbids: “Instructional hacking and phishing: Showing users how to bypass secure computer systems or steal user credentials and personal data.”

Lack of clarity about the permissibility of cybersecurity-related content has been an issue for years. In the past, hacking videos could be removed if enough viewers submitted reports objecting to them or if moderators found the videos violated other articulated policies.

Now that there’s a written rule, there’s renewed concern about how the policy is being applied.

Kody Kinzie, a security researcher and educator who posts hacking videos to YouTube’s Null Byte channel, on Tuesday said a video created for the US July 4th holiday to demonstrate launching fireworks over Wi-Fi couldn’t be uploaded because of the rule.

“I’m worried for everyone that teaches about infosec and tries to fill in the gaps for people who are learning,” he said via Twitter. “It is hard, often boring, and expensive to learn cybersecurity.”

In an email to The Register, Kinzie clarified that YouTube had problems with three previous videos, which got flagged and are either in the process of review or have already been appealed and restored. They involved Wi-Fi hacking. One of the Wi-Fi hacking videos got a strike on Tuesday and that disabled uploading for the account, preventing the fireworks video from going up.

The Register asked Google’s YouTube for comment but we’ve not heard back.

Security professionals find the policy questionable. “Very simply, hacking is not a derogatory term and shouldn’t be used in a policy about what content is acceptable,” said Tim Erlin, VP of product management and strategy at cybersecurity biz Tripwire, in an email to The Register.

“Google’s intention here might be laudable, but the result is likely to stifle valuable information sharing in the information security community.”

Source: YouTube mystery ban on hacking videos has content creators puzzled • The Register

Spotify shuts down direct music uploading for independent artists, forces them to use 3rd parties and also allows these 3rd parties into your personal account

Spotify has changed the way artists can upload music, now prohibiting individual musicians from putting their songs on the streaming service directly.

The new move requires a third party to be involved in the business of uploads.

The company announced the change on Monday, saying it will close the beta program and stop accepting direct uploads by the end of July.

“The most impactful way we can improve the experience of delivering music to Spotify for as many artists and labels as possible is to lean into the great work our distribution partners are already doing to serve the artist community,” Spotify said in a statement on its blog. “Over the past year, we’ve vastly improved our work with distribution partners to ensure metadata quality, protect artists from infringement, provide their users with instant access to Spotify for Artists, and more.”

“The best way for us to serve artists and labels is to focus our resources on developing tools in areas where Spotify can uniquely benefit them — like Spotify for Artists (which more than 300,000 creators use to gain new insight into their audience) and our playlist submission tool (which more than 36,000 artists have used to get playlisted for the very first time since it launched a year ago). We have a lot more planned here in the coming months,” the post continued.

The direct upload function began last September, allowing independent artists to utilize the streaming site without going through a distributor.

Smaller artists will now need to return to sites like Bandcamp, SoundCloud and others to upload their material.

Many people, especially artists, were upset about the decision.
More Spotify news

Pre-saving an upcoming release from your favorite artists on Spotify could be causing you to share more personal data than you realize.

In a recent report from Billboard, it was revealed that Spotify users were giving a band’s label data use permissions that were much broader than typical permissions.

When a user pre-saves a track, it adds it to the user’s library the moment it comes out. In order to do this, Spotify users have to click through and approve certain permissions.

These permissions give the label more access to your account than Spotify normally gives. They allow it to track your listening habits, change the artists you follow and potentially control your streaming remotely.

Source: Spotify shuts down direct music uploading for independent artists

What. The. Fuck.

Dutch ING Bank wants to use customer payment data for direct marketing, privacy watchdog says NO! whilst Dutch Gov wants more banking data sharing with everyone!

The Dutch Data Protection Authority (Autoriteit Persoonsgegevens) has reprimanded ING Bank over plans to use payment data for advertising, and has told other banks to examine their own direct marketing policies. ING recently changed its privacy statement to say that the bank will use payment data for direct marketing offers, giving as an example the ability to make specific product offers after child support payments have come in. Many ING customers noticed this and angrily emailed and called the authority about it.

This is the second time ING has tried this: in 2014 the bank attempted something similar, but then it also planned to share the payment data with third parties.

Source: AP: Banken mogen betaalgegevens niet zomaar gebruiken voor reclame – Emerce

In the meantime, the Dutch government is trying to find a way to prohibit cash payments of over EUR 3,000 and, insidiously, in the same law to allow banks and government to share client banking data more easily.

Source: Kabinet gaat contante betaling boven de 3000 euro verbieden

Silicon Valley’s Hottest Email App Superhuman sends emails that track you and your location without your knowledge

Superhuman is one of the most talked about new apps in Silicon Valley. Why? The product — a $30 per month email app for power users hoping for greater productivity — is a good alternative to many popular and stale email apps; nearly everyone who has used it says so. Even better is the company’s publicity strategy: the service is invite-only, and posting on social media is the quickest way to get in the door. So it gets some local buzz, a $33 million investment, bigger blog write-ups and then a New York Times article to top it all off last month.

After a peak, a roller coaster hits a downward slope.

Superhuman was criticized sharply on Tuesday when a blog post by Mike Davidson, previously the VP of design at Twitter, spread widely across social media. The post goes into detail about how one of Superhuman’s powerful features was actually just a run-of-the-mill privacy-violating tracking pixel, without an option to turn it off or any notification for the recipient on the other end. If you use Superhuman, you’ll be able to see when someone opened your email, how many times they did it, what device they were using and what location they’re in.

Here’s Davidson:

It is disappointing then that one of the most hyped new email clients, Superhuman, has decided to embed hidden tracking pixels inside of the emails its customers send out. Superhuman calls this feature “Read Receipts” and turns it on by default for its customers, without the consent of its recipients.

Tracking pixels are not new. If you get an email newsletter, for instance, it’s probably got a tracking pixel feeding this kind of data back to advertisers, senders, and a whole host of other trackers interested in collecting everything they can about you.

Let me put it this way: I send an email to your mother. She opens it. Now I know a ton of information about her, including her whereabouts, without her ever being informed of or consenting to this tracking. What does this kind of behavior mean for nosy advertisers? What about abusive spouses? A stalker? Pushy salespeople? Intrusive co-workers and bosses?

Davidson sums it up in his blog:

They’ve identified a feature that provides value to some of their customers (i.e. seeing if someone has opened your email yet) and they’ve trampled the privacy of every single person they send email to in order to achieve that. Superhuman never asks the person on the other end if they are OK with sending a read receipt (complete with timestamp and geolocation). Superhuman never offers a way to opt out. Just as troublingly, Superhuman teaches its user to surveil by default. I imagine many users sign up for this, see the feature, and say to themselves “Cool! Read receipts! I guess that’s one of the things my $30 a month buys me.”

Tracking emails is a tried-and-true tactic used by a ton of companies. That doesn’t make it ethical or irreversible. There has been plenty of criticism of the strategy — and there is a technical workaround that we’ll talk about momentarily — but since the tech has been, until now, mainly visible to businesses, the conversation has paled in comparison to some of the other big privacy issues arising in recent years.

Superhuman is a consumer app. It’s targeted at power users, yes, but the potential audience is big and the buzz is real. Combined with the increasing public distaste for privacy violations in the name of building a more powerful app, Twitter has been awash this week and especially on Tuesday with criticism of Superhuman: Why does it need to take so much information without an option or notification?

We emailed Superhuman but did not get a response.

A tracking pixel works by embedding a small and hidden image in an email. The image is able to report back information including when the email is opened and where the reader is located. It’s hidden for a reason: The spy is not trying to ask permission.
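
For a concrete sense of how little the recipient has to do, here is a minimal sketch of the receiving end of a tracking pixel. It is not Superhuman’s actual code (which isn’t public); the domain, endpoint and names are hypothetical. The sender embeds a tiny per-message image URL in the HTML email, and every fetch of that image tells the server when the mail was opened, from what IP address (and thus roughly where), and on what device.

```python
# Minimal sketch of a tracking-pixel server (hypothetical names, not
# Superhuman's actual implementation). The sender embeds
# <img src="https://tracker.example.com/pixel/<message_id>.gif"> in the
# HTML email; each fetch of that image is an "open" event.
import base64
import datetime
from flask import Flask, Response, request

app = Flask(__name__)

# The classic 1x1 transparent GIF.
PIXEL = base64.b64decode("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")

@app.route("/pixel/<message_id>.gif")
def pixel(message_id):
    # The "read receipt": time, IP (rough location) and device of the reader.
    print(f"{datetime.datetime.utcnow().isoformat()} msg={message_id} "
          f"ip={request.remote_addr} ua={request.headers.get('User-Agent', '')!r}")
    return Response(PIXEL, mimetype="image/gif")

if __name__ == "__main__":
    app.run(port=8080)
```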

If you’re willing to put in a little work, you can spot who among your contacts is using Superhuman by following these instructions.

The workaround is to disable images by default in email. The method varies in different email apps but will typically be located somewhere in the settings.

Apps like Gmail have tried for years to scrub tracking pixels. Marketers and other users sending these tracking tools out have been battling, sometimes successfully, to continue to track Gmail’s billion users without their permission.

In that case, disabling images by default is the only sure-fire way to go. When you do allow images in an email, know that you may be instantly giving up a small fortune of information to the sender — and whoever they’re working with — without even realizing it.

Source: Silicon Valley’s Hottest Email App Raises Ethical Questions About the Future of Email

We are shocked to learn that China, an oppressive surveillance state, injects spyware into visitors’ phones

The New York Times reported today that guards working the border with Kyrgyzstan in the Xinjiang region have insisted on putting an app called Fengcai on the Android devices of visitors – including tourists, journalists, and other foreigners.

The Android app is said to harvest details from the handset ranging from text messages and call records to contacts and calendar entries. It also apparently checks to see if the device contains any of 73,000 proscribed documents, including missives from terrorist groups, such as ISIS recruitment fliers and bomb-making instructions. China being China, it also looks for information on the Dalai Lama and – bizarrely – mentions of a Japanese grindcore band.

Visitors using iPhones had their mobes connected to a different, hardware-based device that is believed to install similar spyware.

This is not the first report of Chinese authorities using spyware to keep tabs on people in the Xinjiang region, though it is the first time tourists are believed to have been the primary target. The app doesn’t appear to be used at any other border crossings into the Middle Kingdom.

In May, researchers with German security company Cure53 described a similar app known as BXAG that was not only collecting data from Android phones, but also sending that harvested information via an insecure HTTP connection, putting visitors in even more danger from third parties who might be eavesdropping.

The remote region in northwest China has for decades seen conflict between the government and local Muslim and ethnic Uighur communities, with reports of massive reeducation camps being set up in the area. Beijing has also become increasingly reliant on digital surveillance tools to maintain control over its population, and use of intrusive software in Xinjiang to monitor the locals has become more common.

Human Rights Watch also reported that those living in the region sometimes had their phones spied on by a police-installed app called IJOP, while in 2018 word emerged that a mandatory spyware tool called Jing Wang was being pushed to citizens in the region.

Source: We are shocked to learn that China, an oppressive surveillance state, injects spyware into visitors’ phones • The Register

The Americans just force you to unlock the phone for them…

Boeing falsified records for 787 jet sold to Air Canada. It developed a fuel leak

Boeing staff falsified records for a 787 jet built for Air Canada which developed a fuel leak ten months into service in 2015.

In a statement to CBC News, Boeing said it self-disclosed the problem to the U.S. Federal Aviation Administration after Air Canada notified them of the fuel leak.

The records stated that manufacturing work had been completed when it had not.

Boeing said an audit concluded it was an isolated event and “immediate corrective action was initiated for both the Boeing mechanic and the Boeing inspector involved.”

Boeing is under increasing scrutiny in the U.S. and abroad following two deadly crashes that claimed 346 lives and the global grounding of its 737 Max jets.

On the latest revelations related to falsifying records for the Air Canada jet, Mike Doiron of Moncton-based Doiron Aviation Consulting said: “Any falsification of those documents which could basically cover up a safety issue is a major problem.”

In the aviation industry, these sorts of documents are crucial for ensuring the safety of aircraft and the passengers onboard, he said.

Source: Boeing falsified records for 787 jet sold to Air Canada. It developed a fuel leak | CBC News

Does this mean we need to avoid 787s too?

This weekend all Microsoft e-books will stop working. A gentle reminder that through DRM you don’t own what you think you own.

If you bought an ebook through Microsoft’s online store, now’s the time to give it a read, or reread, because it will stop working in early July.

That’s right, the books you paid for will be literally removed from your electronic bookshelf because, um, Microsoft decided in April it no longer wanted to sell books. It will turn off the servers that check whether your copy was bought legitimately – using the usual anti-piracy digital-rights-management (DRM) tech – and that means your book can’t be verified as being in the hands of its purchaser, and so won’t be displayed.

Even the free-to-download ebooks will fail. According to Redmond, “You can continue to read free books you’ve downloaded until July 2019 when they will no longer be accessible.” And the paid-for ones? “You can continue to read books you’ve purchased until July 2019 when they will no longer be available, and you will receive a full refund of the original purchase price.”

Why has Microsoft done this? We don’t know. All the Windows giant said was that it was “streamlining the strategic focus” of its store. But how much can a DRM server possibly cost? And why is that cost too high for an American corporation with $110bn in annual revenue that makes $16.5bn in profit?

Source: This weekend you better read those ebooks you bought from Microsoft – because they’ll be dead come next week • The Register

Google’s new reCaptcha forces page admins to put it on EVERY page so Google can track you everywhere

According to tech statistics website BuiltWith, more than 650,000 websites are already using reCaptcha v3; overall, at least 4.5 million websites use reCaptcha, including 25% of the top 10,000 sites. Google is also now testing an enterprise version of reCaptcha v3, where Google creates a customized reCaptcha for enterprises that are looking for more granular data about users’ risk levels to protect their site algorithms from malicious users and bots.

But this new, risk-score based system comes with a serious trade-off: users’ privacy.

According to two security researchers who’ve studied reCaptcha, one of the ways that Google determines whether you’re a malicious user or not is whether you already have a Google cookie installed on your browser. It’s the same cookie that allows you to open new tabs in your browser and not have to re-log in to your Google account every time. But according to Mohamed Akrout, a computer science PhD student at the University of Toronto who has studied reCaptcha, it appears that Google is also using its cookies to determine whether someone is a human in reCaptcha v3 tests. In an April paper, Akrout described how reCaptcha v3 simulations run on a browser with a connected Google account received lower risk scores than those run on browsers without a connected Google account. “If you have a Google account it’s more likely you are human,” he says. Google did not respond to questions about the role that Google cookies play in reCaptcha.

With reCaptcha v3, technology consultant Marcos Perona and Akrout’s tests both found that their reCaptcha scores were always low risk when they visited a test website on a browser where they were already logged into a Google account. Alternatively, if they went to the test website from a private browser like Tor or a VPN, their scores were high risk.

To make this risk-score system work accurately, website administrators are supposed to embed reCaptcha v3 code on all of the pages of their website, not just on forms or log-in pages. Then, reCaptcha learns over time how their website’s users typically act, helping the machine learning algorithm underlying it to generate more accurate risk scores. Because reCaptcha v3 is likely to be on every page of a website, if you’re signed into your Google account there’s a chance Google is getting data about every single webpage you go to that is embedded with reCaptcha v3—and there may be no visual indication on the site that it’s happening, beyond a small reCaptcha logo hidden in the corner.
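
For context, this is roughly what the site-owner’s half of that arrangement looks like, based on Google’s public “siteverify” endpoint for reCaptcha v3: a small script on each page produces a token, and the site’s backend posts that token to Google and gets back a risk score between 0.0 (likely a bot) and 1.0 (likely human). This is a minimal sketch; the secret key and thresholds below are placeholders, not recommendations.

```python
# Rough sketch of how a site consumes reCaptcha v3 risk scores, using
# Google's public "siteverify" API. Keys and thresholds are placeholders.
import requests

RECAPTCHA_SECRET = "your-secret-key"  # placeholder

def risk_score(token, remote_ip=None):
    """Return Google's 0.0-1.0 'how human is this visitor' score for a v3 token."""
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": RECAPTCHA_SECRET, "response": token, "remoteip": remote_ip},
        timeout=5,
    )
    result = resp.json()
    if not result.get("success"):
        return 0.0  # invalid or expired token
    return result.get("score", 0.0)

def handle_request(token):
    # Example policy only: allow, challenge or block based on the score.
    score = risk_score(token)
    if score >= 0.7:
        return "allow"
    if score >= 0.3:
        return "ask for extra verification"
    return "block"
```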

Source: Google’s new reCaptcha has a dark side

Mozilla Has a New Tool for Tricking Advertisers Into Believing You’re Filthy Rich

If you notice the ads being served to you are eerily similar to stuff you were just browsing online, it’s not all in your head, and it’s the insidious truth of existing online without installing a bunch of browser extensions. But there’s now a tool that, while comically absurd in execution, can stick it to the man (advertisers) by effectively disguising your true interests. Hope you like tabs.

The tool, called Track THIS, was developed by the Mozilla Firefox folks and lets you pick one of four profiles—Hypebeast, Filthy Rich, Doomsday, or Influencer. You’ll then allow the tool to open 100 tabs based on the associated profile type. Data brokers and advertisers build a profile on you based on how you navigate the internet, which includes the webpages you visit. So whichever one of these personalities you choose will, theoretically, be how advertisers view you, which in turn will influence the type of ads you see.

I tried out both the Filthy Rich and Doomsday Prepper profiles. It took a few minutes for all 100 tabs to open up for each on Chrome. (If you’re on a computer that doesn’t have much RAM, just know that you might have to restart after everything freezes.) For the former, there were a lot of yacht sites, luxury designers, stock market sites, expensive watches, some equestrian real estate brokers, a page to sign up for a Mastercard Gold Card, and a page to book a room at the MGM Grand. For the latter, links to survival supplies and checklists, tents, mylar blankets, doomsday movies, and a lot (a lot) of conspiracy theories. I’m about to get served some ads for some luxury-ass Hazmat suits.

As Mozilla noted in a blog post announcing the tool, it’ll likely only work as intended for a few days and then will revert back to showing you ads more in tune with your actual preferences. “This will show you ads for products you might not be interested in at all, so it’s really just throwing off brands who want to advertise to a very specific type of person,” the company wrote. “You’ll still be seeing ads. And eventually, if you just use the internet as you typically would day to day, you’ll start seeing ads again that align more closely to your normal browsing habits.”

Of course, you’re probably not going to fire up 100 tabs routinely to trick advertisers—the tool is more of a brilliantly ridiculous nod to the lengths we have to go to only temporarily be just a little less intimately targeted.

Source: Mozilla Has a New Tool for Tricking Advertisers Into Believing You’re Filthy Rich

And this is how monopolies take advantage of Open Source: Google’s plan to fork curl for no reason other than to have their own version

Google is planning to reimplement parts of libcurl, a widely used open-source file transfer library, as a wrapper for Chromium’s networking API – but curl’s lead developer does not welcome the “competition”.

Issue 973603 in the Chromium bug tracker describes libcrurl, “a wrapper library for the libcurl easy interface implemented via Cronet API”.

Cronet is the Chromium network stack, used not only by Google’s browser but also available to Android applications.

The rationale is that:

Implementing libcurl using Cronet would allow developers to take advantage of the utility of the Chrome Network Stack, without having to learn a new interface and its corresponding workflow. This would ideally increase ease of accessibility of Cronet, and overall improve adoption of Cronet by first-party or third-party applications.

The Google engineer also believes that “it may also be desirable to develop a ‘crurl’ tool, which would potentially function as a substitute for the curl command in terminal or similar processes. This would be useful to troubleshoot connection issues or test the functionality of the Chrome Network Stack in a easily [sic] reproducible manner.”

Daniel Stenberg, lead developer of curl, has his doubts:

Getting basic functionality for a small set of use cases should be simple and straight forward. But even if they limit the subset to number of functions and libcurl options, making them work exactly as we have them documented will be hard and time consuming.

I don’t think applications will be able to arbitrarily use either library for a very long time, if ever. libcurl has 80 public functions and curl_easy_setopt alone takes 268 different options!
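
To make that concrete, here is a small taste of the libcurl “easy” interface Stenberg is talking about, shown via the pycurl Python binding rather than C. The point is only to illustrate the setopt-driven workflow that a compatible libcrurl would have to reproduce option for option.

```python
# A taste of the libcurl "easy" interface that libcrurl would have to
# reimplement faithfully, shown via the pycurl binding. curl_easy_setopt
# alone accepts hundreds of CURLOPT_* options; matching their documented
# behaviour exactly is the hard part Stenberg describes.
import io
import pycurl

buf = io.BytesIO()
c = pycurl.Curl()                               # curl_easy_init()
c.setopt(pycurl.URL, "https://example.com/")    # CURLOPT_URL
c.setopt(pycurl.FOLLOWLOCATION, True)           # CURLOPT_FOLLOWLOCATION
c.setopt(pycurl.WRITEDATA, buf)                 # CURLOPT_WRITEDATA
c.setopt(pycurl.TIMEOUT, 10)                    # CURLOPT_TIMEOUT
c.perform()                                     # curl_easy_perform()
print("HTTP status:", c.getinfo(pycurl.RESPONSE_CODE))
print(buf.getvalue()[:200])
c.close()                                       # curl_easy_cleanup()
```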

The real issue, though, is not so much Google’s ability to do this – after all, as Stenberg noted: “If they just put two paid engineers on their project they already have more dedicated man power than the original libcurl project does.”

Rather, it is why Google is reimplementing libcurl as a wrapper for its own APIs rather than simply using libcurl and potentially improving it for everyone.

“I think introducing half-baked implementations of the API will cause users grief since it will be hard for users to understand what API it is and how they differ,” Stenberg wrote. He also feels that naming the Chromium versions “libcrurl” and “crurl” will cause confusion as they “look like typos of the original names”.

Stenberg is clear that the Google team is morally and legally allowed to do this, since curl is free and open source under the MIT licence. But he added:

We are determined to keep libcurl the transfer library for the internet. We support the full API and we offer full backwards compatibility while working the same way on a vast amount of different platforms and architectures. Why use a copy when the original is free, proven and battle-tested since years?

Over to you, Google.®

Source: Kids can be so crurl: Lead dev unchuffed with Google’s plan to remake curl in its own image • The Register

Chrome is the biggest snoop of all on your computer or cell phone – so switch browser before there is no alternative any more

You open your browser to look at the Web. Do you know who is looking back at you?

Over a recent week of Web surfing, I peered under the hood of Google Chrome and found it brought along a few thousand friends. Shopping, news and even government sites quietly tagged my browser to let ad and data companies ride shotgun while I clicked around the Web.

This was made possible by the Web’s biggest snoop of all: Google. Seen from the inside, its Chrome browser looks a lot like surveillance software.

Lately I’ve been investigating the secret life of my data, running experiments to see what technology really gets up to under the cover of privacy policies that nobody reads. It turns out, having the world’s biggest advertising company make the most popular Web browser was about as smart as letting kids run a candy shop.

It made me decide to ditch Chrome for a new version of nonprofit Mozilla’s Firefox, which has default privacy protections. Switching involved less inconvenience than you might imagine.

My tests of Chrome vs. Firefox unearthed a personal data caper of absurd proportions. In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker “cookies” that Chrome would have ushered right onto my computer but were automatically blocked by Firefox. These little files are the hooks that data firms, including Google itself, use to follow what websites you visit so they can build profiles of your interests, income and personality.

Chrome welcomed trackers even at websites you would think would be private. I watched Aetna and the Federal Student Aid website set cookies for Facebook and Google. They surreptitiously told the data giants every time I pulled up the insurance and loan service’s log-in pages.

And that’s not the half of it.

Look in the upper right corner of your Chrome browser. See a picture or a name in the circle? If so, you’re logged in to the browser, and Google might be tapping into your Web activity to target ads. Don’t recall signing in? I didn’t, either. Chrome recently started doing that automatically when you use Gmail.

Chrome is even sneakier on your phone. If you use Android, Chrome sends Google your location every time you conduct a search. (If you turn off location sharing it still sends your coordinates out, just with less accuracy.)

Firefox isn’t perfect — it still defaults searches to Google and permits some other tracking. But it doesn’t share browsing data with Mozilla, which isn’t in the data-collection business.

At a minimum, Web snooping can be annoying. Cookies are how a pair of pants you look at in one site end up following you around in ads elsewhere. More fundamentally, your Web history — like the color of your underpants — ain’t nobody’s business but your own. Letting anyone collect that data leaves it ripe for abuse by bullies, spies and hackers.

[…]

Choosing a browser is no longer just about speed and convenience — it’s also about data defaults.

It’s true that Google usually obtains consent before gathering data, and offers a lot of knobs you can adjust to opt out of tracking and targeted advertising. But its controls often feel like a shell game that results in us sharing more personal data.

I felt hoodwinked when Google quietly began signing Gmail users into Chrome last fall. Google says the Chrome shift didn’t cause anybody’s browsing history to be “synced” unless they specifically opted in — but I found mine was being sent to Google and don’t recall ever asking for extra surveillance. (You can turn off the Gmail auto-login by searching “Gmail” in Chrome settings and switching off “Allow Chrome sign-in.”)

After the sign-in shift, Johns Hopkins associate professor Matthew Green made waves in the computer science world when he blogged he was done with Chrome. “I lost faith,” he told me. “It only takes a few tiny changes to make it very privacy unfriendly.”

When you use Chrome, signing into Gmail automatically logs in the browser to your Google account. When “sync” is also on, Google receives your browsing history.

There are ways to defang Chrome, but doing so is much more complicated than just using “Incognito Mode.” It’s much easier to switch to a browser not owned by an advertising company.

Like Green, I’ve chosen Firefox, which works across phones, tablets, PCs and Macs. Apple’s Safari is also a good option on Macs, iPhones and iPads, and the niche Brave browser goes even further in trying to jam the ad-tech industry.

What does switching to Firefox cost you? It’s free, and downloading a different browser is much simpler than changing phones.

[…]

And as a nonprofit, it earns money when people make searches in the browser and click on ads — which means its biggest source of income is Google. Mozilla’s chief executive says the company is exploring new paid privacy services to diversify its income.

Its biggest risk is that Firefox might someday run out of steam in its battle with the Chrome behemoth. Even though it’s the No. 2 desktop browser, with about 10 percent of the market, major sites could decide to drop support, leaving Firefox scrambling.

If you care about privacy, let’s hope for another David and Goliath outcome.

Source: Google is the biggest snoop of all on your computer or cell phone

FYI: Your Venmo transfers with those edgy emojis aren’t private by default. And someone’s put 7m of them into a public DB

Graduate student Dan Salmon has released online seven million Venmo transfers, scraped from the social payment biz in recent months, to call attention to the privacy risks of public transaction data.

Venmo, for the uninitiated, is an app that allows friends to pay each other money for stuff. El Reg‘s Bay Area vultures primarily use it for settling restaurant and bar bills that we have no hope of expensing; one person pays on their personal credit card, and their pals transfer their share via Venmo. It makes picking up the check a lot easier.

Because it’s the 2010s, by default, Venmo makes those transactions public along with attached messages and emojis, sorta like Twitter but for payments, allowing people to pry into strangers’ spending and interactions. Who went out with whom for drinks, who owed someone a sizable debt, who went on vacation, and so on.

“I am releasing this dataset in order to bring attention to Venmo users that all of this data is publicly available for anyone to grab without even an API key,” said Salmon in a post to GitHub. “There is some very valuable data here for any attacker conducting [open-source intelligence] research.”

[…]

Despite past criticism from privacy advocates and a settlement with the US Federal Trade Commission, Venmo has kept person-to-person purchases public by default.

[…]

Last July, Berlin-based researcher Hang Do Thi Duc explored some 200m Venmo transactions from 2017 and set up a website, PublicByDefault.fyi, to peruse the e-commerce data. Her stated goal was to change people’s attitudes about sharing data unnecessarily.

When The Register asked about transaction privacy last year, after a developer created a bot that tweeted Venmo purchases mentioning drugs, a company spokesperson said, “Like on other social networks, Venmo users can choose what they want to share on the Venmo public feed. There are a number of different settings that users can customize when it comes to sharing payments on Venmo.”

The current message from the company is not much different: “Venmo was designed for sharing experiences with your friends in today’s social world, and the newsfeed has always been a big part of this,” a Venmo spokesperson told The Register in an email. “Our users trust us with their money and personal information, and we take this responsibility very seriously.”

“I think Venmo is resisting calls to make their data private because it would go against the entire pitch of the app,” said Salmon. “Venmo is designed to be a ‘social’ app and the more open and social you make things, the more you open yourself to problems.”

Venmo’s privacy policy details all the ways in which customer data is not private.

Source: FYI: Your Venmo transfers with those edgy emojis aren’t private by default. And someone’s put 7m of them into a public DB • The Register

Readability of privacy policies for big tech companies visualised

For The New York Times, Kevin Litman-Navarro plotted the length and readability of privacy policies for large companies:

To see exactly how inscrutable they have become, I analyzed the length and readability of privacy policies from nearly 150 popular websites and apps. Facebook’s privacy policy, for example, takes around 18 minutes to read in its entirety – slightly above average for the policies I tested.

The comparison covers many websites, with a focus on Facebook and Google, but the main takeaway, I think, is that almost all privacy policies are complex, because they’re not written for the users.

Source: Readability of privacy policies for big tech companies | FlowingData

British Official Signs U.S. Extradition Order For Julian Assange Despite Hostility Between UK Home Secretary and Trump Regime

Britain’s Home Secretary Sajid Javid told BBC Radio today that he has signed the extradition order for Julian Assange, paving the way for the WikiLeaks founder to be sent to the U.S. to face charges of computer hacking and espionage.

“There’s an extradition request from the U.S. that is before the courts tomorrow, but yesterday I signed the extradition order, certified it, and that will be going in front of the courts tomorrow,” Javid said according to Australia’s public broadcaster, the ABC.

Assange is scheduled to appear in a UK court on Friday, though it’s not clear whether he’ll appear by video link or in person.

“It’s a decision ultimately for the courts but there is a very important part of it for the Home Secretary and I want to see justice done at all times, and we’ve got a legitimate extradition request so I’ve signed it, but the final decision is now with the courts,” Javid continued.

Curiously, Home Secretary Javid signed the extradition paperwork despite not being on the best terms with the U.S. government right now. Javid wasn’t invited to attend formal ceremonies when President Donald Trump recently visited the UK and some believe it’s because Javid criticized Trump’s treatment of Muslims in 2017 as well as the American president’s retweets of the far right group Britain First. Javid has a Muslim background, though he insists he doesn’t know why he wasn’t invited to the recent U.S.-focused events in Britain.

Assange is currently being held in Belmarsh prison in southern London and is serving a 50-week sentence for jumping bail in 2012. Assange sought asylum during the summer of 2012 at Ecuador’s embassy in London, where he lived for almost seven years until this past April. Ecuador revoked Assange’s asylum and the WikiLeaks founder was physically dragged out of the embassy by British police.

WikiLeaks founder Julian Assange, a 47-year-old Australian national, appears to be one step closer to being sent to the United States, but the deal is not done, as Javid notes. Not only does the extradition order need final approval by the UK court, there’s still the question of whether Assange could be sent to Sweden to face sexual assault charges.

The statute of limitations has expired for one of the sexual assault claims made against Assange in Sweden, but a rape claim could still be pursued if Swedish prosecutors decide to push the case. A Swedish court ruled earlier this month that Assange should not be detained in absentia, the first move under Swedish law that would have paved the way for his extradition.

Assange’s Swedish lawyer has previously claimed that Assange was too ill to even appear in court via video link, but secret video seemingly recorded by another inmate recently showed Assange looking relatively normal and healthy.

Assange has been charged with 18 counts by the U.S. Justice Department, including one under the Espionage Act, which potentially carries the death penalty. But American prosecutors supposedly gave Ecuador a “verbal pledge” that they won’t pursue death in Assange’s case, according to American news channel ABC. Obviously, a “verbal pledge” is not something that would hold up in court.

Source: British Official Signs U.S. Extradition Order For Julian Assange Despite Hostility Between UK Home Secretary and Trump Regime

Popular Soccer App Spied on Fans Through Phone Microphone to Catch Bars Pirating Game Streams

Spain’s data protection agency has fined La Liga, the nation’s top professional soccer league, 250,000 euros ($283,000 USD) for using the league’s phone app to spy on its fans. With millions of downloads, the app was reportedly being used to surveil bars in an effort to catch establishments playing matches on television without a license.

The La Liga app provides users with schedules, player rankings, statistics, and league news. It also knows when they’re watching games and where.

According to Spanish newspaper El País, the league told authorities that when its apps detected users were in bars the apps would record audio through phone microphones. The apps would then use the recording to determine if the user was watching a soccer game, using technology that’s similar to the Shazam app. If a game was playing in the vicinity, officials would then be able to determine if that bar location had a license to play the game.

So not only was the app spying on fans, but it was also turning those fans into unwitting narcs. El Diario reports that the app has been downloaded 10 million times.

Source: Popular Soccer App Spied on Fans Through Phone Microphone to Catch Bars Pirating Game Streams

The fine is insanely low, especially considering it’s the Spanish billionaires club that has to pay it.

The Russian Government Now Requires Tinder to Hand Over People’s Sexts

Tinder users in Russia may now have to decide whether the perks of dating apps outweigh a disconcerting invasion of privacy. Russian authorities are now requiring that the dating app hand over a wealth of intimate user data, including private messages, if and when it asks for them.

Tinder is the fourth dating app in the nation to be forced to comply with the Russian government’s request for user data, Moscow Times reports, and it’s among 175 services that have already consented to share information with the nation’s Federal Security Service, according to a registry online.

Tinder was added to the list of services that have to comply with the Russian data requests last Friday, May 31. The data Tinder must collect and provide to Russia upon request includes user data and all communications including audio and video. According to Tinder’s privacy policy, it does collect all your basic profile details, such as your date of birth and gender as well as the content you publish and your chats with other users, among other information. Which means the Russian government could get its hands on your sexts, your selfies, and even details on where you’ve been or where you might be going if it wants to.

It’s unclear if the possible data requests will apply to just Tinder users within Russia or any users of the dating app, regardless of where they are. If it’s the latter, it points to an unsettling reality in which one nation is able to extend its reach into the intimate data of people all over the world by simply making the request to any complying service that happens to also operate in Russia.

We have reached out to Tinder about which users this applies to, whether it will comply with this request, and what type of data it will share with the Russian authorities. We will update when we hear back. According to the Associated Press, Russia’s communications regulator confirmed on Monday that the company had shared information with it.

The Russian government is not only targeting Tinder. As the lengthy registry online indicates, a large and diverse range of services are already on the list and have been for years. This includes Snap, WeChat, Vimeo, and Badoo, another popular dating app in Russia.

Telegram famously objected to the Russian authorities’ request for its encryption keys last year, which resulted in the government banning the encrypted messaging app. It was an embarrassing mess for Russian internet service providers, which in their attempt to block workarounds for the messaging app, disrupted a litany of services online.

Source: The Russian Government Now Requires Tinder to Hand Over People’s Sexts

EU countries, car manufacturers and navigation systems will share information with everyone

Advanced Driver Assistance Systems (ADAS) in cars, such as automatic braking and systems that detect the state of the road or anything in your blind spot, as well as navigation systems, will be sharing their data with European countries, car manufacturers and presumably insurers, under the cloak of making driving safer. I’m sure it will, but I still don’t feel comfortable having the government know where I am at all times and what my driving style is like.

The link below is in Dutch.

Source: EU-landen en autofabrikanten delen informatie voor meer verkeersveiligheid – Emerce

US now requires social media info for visa applications

If you want to stay in the US, you’ll likely have to share your internet presence. As proposed in March 2018 (and to some extent in 2015), the country now requires virtually all visa applicants to provide their social media account names for the past five years. The mandate only covers a list of selected services, although potential visitors and residents can volunteer info if they belong to social sites that aren’t mentioned in the form.

Applicants also have to provide previous email addresses and phone numbers on top of non-communications info like their travel statuses and any family involvement in terrorism. Some diplomats and officials are exempt from the requirements.

The US had previously only required these details for people who visited terrorist-controlled areas. The goal is the same, however. The US is hoping to both verify identities and spot extremists who’ve discussed their ideologies online, potentially preventing incidents like the San Bernardino mass shooting.

The measure will affect millions of visa seekers each year, although whether or not it will be effective isn’t clear. A State Department official told The Hill that applicants could face “serious immigration consequences” if they’re caught lying, but it’s not certain that they’ll be found out in a timely fashion — the policy is counting on applicants both telling the truth and having relatively easy-to-find accounts if they’re dishonest. And like it or not, this affects the privacy of social media users who might not want to divulge their online identities (particularly private accounts) to government staff.

Source: US now requires social media info for visa applications

In case you’re wondering, this is not a Good Thing

Apple’s privacy schtick is just an act, say folks suing the iGiant: iTunes ‘purchase histories sold’ to highest bidders

Apple has been hit with a class-action complaint in the US accusing the iGiant of playing fast and loose with the privacy of its customers.

The lawsuit [PDF], filed this month in a northern California federal district court, claims the Cupertino music giant gathers data from iTunes – including people’s music purchase history and personal information – then hands that info over to marketers in order to turn a quick buck.

“To supplement its revenues and enhance the formidability of its brand in the eyes of mobile application developers, Apple sells, rents, transmits, and/or otherwise discloses, to various third parties, information reflecting the music that its customers purchase from the iTunes Store application that comes pre-installed on their iPhones,” the filing alleged.

“The data Apple discloses includes the full names and home addresses of its customers, together with the genres and, in some cases, the specific titles of the digitally-recorded music that its customers have purchased via the iTunes Store and then stored in their devices’ Apple Music libraries.”

What’s more, the lawsuit goes on to claim that the data Apple sells is then combined by the marketers with information purchased from other sources to create detailed profiles on individuals that allow for even more targeted advertising.

Additionally, the lawsuit alleges the Music APIs Apple includes in its developer kit can allow third-party devs to harvest similarly detailed logs of user activity for their own use, further violating the privacy of iTunes customers.

The end result, the complaint states, is that Cook and Co are complicit in the illegal harvesting and reselling of personal data, all while pitching iOS and iTunes as bastions of personal privacy and data security.

“Apple’s disclosures of the personal listening information of plaintiffs and the other unnamed Class members were not only unlawful, they were also dangerous because such disclosures allow for the targeting of particularly vulnerable members of society,” the complaint reads.

“For example, any person or entity could rent a list with the names and addresses of all unmarried, college-educated women over the age of 70 with a household income of over $80,000 who purchased country music from Apple via its iTunes Store mobile application. Such a list is available for sale for approximately $136 per thousand customers listed.”

Source: Apple’s privacy schtick is just an act, say folks suing the iGiant: iTunes ‘purchase histories sold’ to highest bidders • The Register

Newly Released Amazon Patent Shows Just How Much Creepier Alexa Can Get

A newly revealed patent application filed by Amazon is raising privacy concerns over an envisaged upgrade to the company’s smart speaker systems. This change would mean that, by default, the devices end up listening to and recording everything you say in their presence.

Alexa, Amazon’s virtual assistant system that runs on the company’s Echo series of smart speakers, works by listening out for a ‘wakeword’ that tells the device to turn on its extended speech recognition systems in order to respond to spoken commands.

[…]

In theory, Alexa-enabled devices will only record what you say directly after the wakeword, which is then uploaded to Amazon, where remote servers use speech recognition to deduce your meaning, then relay commands back to your local speaker.

But one issue in this flow of events, as Amazon’s recently revealed patent application argues, is that anything you say before the wakeword isn’t actually heard.

“A user may not always structure a spoken command in the form of a wakeword followed by a command (eg. ‘Alexa, play some music’),” the Amazon authors explain in their patent application, which was filed back in January, but only became public last week.

“Instead, a user may include the command before the wakeword (eg. ‘Play some music, Alexa’) or even insert the wakeword in the middle of a command (eg. ‘Play some music, Alexa, the Beatles please’). While such phrasings may be natural for a user, current speech processing systems are not configured to handle commands that are not preceded by a wakeword.”

To overcome this barrier, Amazon is proposing an effective workaround: simply record everything the user says all the time, and figure it out later.

Rather than only record what is said after the wakeword is spoken, the system described in the patent application would effectively continuously record all speech, then look for instances of commands issued by a person.
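
To illustrate the difference, here is a toy sketch (in no way Amazon’s code) of the “keep listening and look back” approach the patent describes: a rolling buffer retains the last few seconds of audio so that speech preceding the wakeword can be recovered once the wakeword is spotted. For brevity, audio frames are stand-in text strings.

```python
# Toy illustration (not Amazon's code) of the patent's idea: instead of
# starting capture only after the wakeword, keep a rolling buffer of recent
# audio so speech spoken *before* the wakeword can be recovered as well.
from collections import deque

FRAME_SECONDS = 0.5
LOOKBACK_SECONDS = 10  # how much pre-wakeword audio to retain

class RollingRecorder:
    def __init__(self):
        maxlen = int(LOOKBACK_SECONDS / FRAME_SECONDS)
        self.buffer = deque(maxlen=maxlen)  # oldest frames silently fall off

    def detect_wakeword(self, frame):
        # Stand-in for an on-device keyword spotter.
        return "alexa" in frame.lower()

    def on_audio_frame(self, frame):
        """Called for every captured frame, wakeword or not."""
        self.buffer.append(frame)
        if self.detect_wakeword(frame):
            # Hand over everything already heard; a real system would keep
            # capturing what follows the wakeword too.
            return list(self.buffer)
        return None

rec = RollingRecorder()
for frame in ["play some music", "alexa", "the beatles please"]:
    captured = rec.on_audio_frame(frame)
    if captured:
        print("would upload:", captured)  # includes the pre-wakeword command
```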

Source: Newly Released Amazon Patent Shows Just How Much Creepier Alexa Can Get

wow – a continuous spy in your home

Germany thinks about resurrecting the Stasi, getting rid of end-to-end chat app encryption and requiring decrypted plain-text.

Government officials in Germany are reportedly mulling a law to force chat app providers to hand over end-to-end encrypted conversations in plain text on demand.

According to Der Spiegel this month, the Euro nation’s Ministry of the Interior wants a new set of rules that would require operators of services like WhatsApp, Signal, Apple iMessage, and Telegram to cough up plain-text records of people’s private enciphered chats to authorities that obtain a court order.

This would expand German law, which right now only allows communications to be gathered from a suspect’s device itself, to also include the companies providing encrypted chat services and software. True and strong end-to-end encrypted conversations can only be decrypted by those participating in the discussion, so the proposed rules would require app makers to deliberately knacker or backdoor their code in order to comply. Those changes would be needed to allow them to collect messages passing through their systems and decrypt them on demand.

Up until now, German police have opted not to bother with trying to decrypt the contents of messages in transit, opting instead to simply seize and break into the device itself, where the messages are typically stored in plain text.

The new rules are set to be discussed by the members of the interior ministry in an upcoming June conference, and are likely to face stiff opposition not only on privacy grounds, but also in regards to the technical feasibility of the requirements.

Spokespeople for Facebook-owned WhatsApp, and Threema, makers of encrypted messaging software, were not available to comment.

The rules are the latest in an ongoing global feud between the developers of secure messaging apps and governments. The apps are designed in part to let citizens, journalists, and activists communicate securely, away from the prying eyes of oppressive government regimes.

The governments, meanwhile, say that the apps also provide a safe haven for criminals and terror groups that want to plan attacks and illegal activities, making it harder for intelligence and police agencies to perform vital monitoring tasks.

The app developers note that even if governments do try to implement mandatory decryption (aka backdoor) capabilities, actually getting those tools to work properly, without opening up a massive new security hole in the platforms that miscreants and criminals could exploit, would be next to impossible.

Source: Germany mulls giving end-to-end chat app encryption das boot: Law requiring decrypted plain-text is in the works • The Register

Whatever happened to mail confidentiality then?

Google Now Forces Microsoft Edge Preview Users to Use Chrome for the Modern YouTube Experience – a bit like they fuck around with Firefox

Microsoft started testing a new Microsoft Edge browser based on Chromium a little while ago. The company has been releasing new canary and dev builds for the browser over the last few weeks, and the preview is actually really great. In fact, I have been using the new Microsoft Edge Canary on my main Windows machine and my MacBook Pro for more than a month, and it’s really good.

But if you watch YouTube quite a lot, you will face a new problem on the new Edge. It turns out, Google has randomly disabled the modern YouTube experience for users of the new Microsoft Edge. Users are now redirected to the old YouTube experience, which lacks the modern design as well as the dark theme for YouTube, as first spotted by Gustave Monce. And when you try to manually access the new YouTube from youtube.com/new, YouTube simply asks users to download Google Chrome, stating that the Edge browser isn’t supported. Ironically, the same page states “We support the latest versions of Chrome, Firefox, Opera, Safari, and Edge.”

The change affects the latest versions of Microsoft Edge Canary and Dev channels. It is worth noting that the classic Microsoft Edge based on EdgeHTML continues to work fine with the modern YouTube experience.

The weird thing here is that Microsoft has been working closely with Google engineers on the new Edge and Chromium. Both companies’ engineers are working closely to improve Chromium and introduce new features like ARM64 support. So it’s very odd that Google would prevent users of the new Microsoft Edge browser from using the modern YouTube experience. This is most likely an error on Google’s part, but it could be intentional, too — we really don’t know for now.

Source: Google Now Forces Microsoft Edge Preview Users to Use Chrome for the Modern YouTube Experience – Thurrott.com

See also:
Google isn’t the company that we should have handed the Web over to: why MS switching to Chromium is a bad idea

Bose headphones spy on listeners, sell that information on without consent or knowledge: lawsuit

Bose Corp spies on its wireless headphone customers by using an app that tracks the music, podcasts and other audio they listen to, and violates their privacy rights by selling the information without permission, a lawsuit charged.

The complaint filed on Tuesday by Kyle Zak in federal court in Chicago seeks an injunction to stop Bose’s “wholesale disregard” for the privacy of customers who download its free Bose Connect app from Apple Inc or Google Play stores to their smartphones.

[…]

After paying $350 for his QuietComfort 35 headphones, Zak said he took Bose’s suggestion to “get the most out of your headphones” by downloading its app, and providing his name, email address and headphone serial number in the process.

But the Illinois resident said he was surprised to learn that Bose sent “all available media information” from his smartphone to third parties such as Segment.io, whose website promises to collect customer data and “send it anywhere.”

Audio choices offer “an incredible amount of insight” into customers’ personalities, behavior, politics and religious views, citing as an example that a person who listens to Muslim prayers might “very likely” be a Muslim, the complaint said.

“Defendants’ conduct demonstrates a wholesale disregard for consumer privacy rights,” the complaint said.

Zak is seeking millions of dollars of damages for buyers of headphones and speakers, including QuietComfort 35, QuietControl 30, SoundLink Around-Ear Wireless Headphones II, SoundLink Color II, SoundSport Wireless and SoundSport Pulse Wireless.

He also wants a halt to the data collection, which he said violates the federal Wiretap Act and Illinois laws against eavesdropping and consumer fraud.

Dore, a partner at Edelson PC, said customers do not see the Bose app’s user service and privacy agreements when signing up, and the privacy agreement says nothing about data collection.

Edelson specializes in suing technology companies over alleged privacy violations.

The case is Zak v Bose Corp, U.S. District Court, Northern District of Illinois, No. 17-02928.

Source: Bose headphones spy on listeners: lawsuit | Article [AMP] | Reuters