Top European Court Rules UK Mass Surveillance Regime Violates Human Rights

The European Court of Human Rights (ECHR) ruled this week that the United Kingdom government’s surveillance regime violated human rights law.

The matter first came to light in 2013 when NSA whistleblower Edward Snowden revealed British surveillance practices—namely that the government intercepts social media, messages, and phone calls regardless of criminal record or suspicions of criminal activity.

The ECHR decided the surveillance program violates Article 8 of the European Convention on Human Rights—the right to a private life and a family life—due to what the court regarded as “insufficient oversight” of the selection of collected communications.

The court also found that journalistic sources were not adequately protected. ECHR judges wrote, “In view of the potential chilling effect that any perceived interference with the confidentiality of journalists’ communications and, in particular, their sources might have on the freedom of the press, the Court found that the bulk interception regime was also in violation of article 10.”

In 2016, the UK Investigatory Powers Tribunal also ruled that intelligence agencies violated human rights through bulk collection and unsatisfactory oversight.

A group of human rights organizations including Big Brother Watch and Amnesty International brought the case to the court. The advocacy groups focused on the powers granted by the Regulation of Investigatory Powers Act 2000 (RIPA), which was replaced in 2016 by the Investigatory Powers Act, a law that hasn’t yet fully come into effect.

“This landmark judgment confirming that the UK’s mass spying breached fundamental rights vindicates Mr. Snowden’s courageous whistleblowing,” Silkie Carlo, director of Big Brother Watch, said in a statement. “Under the guise of counter-terrorism, the UK has adopted the most authoritarian surveillance regime of any Western state, corroding democracy itself and the rights of the British public. This judgment is a vital step towards protecting millions of law-abiding citizens from unjustified intrusion.”

The ECHR did diverge from the watchdog groups on one point, ruling that the practice of sharing collected information with foreign nations—as opposed to the oversight of the collection itself—does not violate freedom of speech or the right to a private life.

Source: Top European Court Rules UK Mass Surveillance Regime Violates Human Rights

How Location Tracking Actually Works on Your Smartphone (and how to manipulate it – kind of)

As the recent revelation over Google’s background tracking of your location shows, it’s not as easy as it should be to work out when apps, giant tech companies and pocket devices are tracking your location and when they’re not. Here’s what you need to know about how location tracking works on a phone—and how to disable it.

Location information is one of the prime pieces of data any company can get on you, whether it wants to personalize your weather reports or serve up an ad for a local bakery. As a result, apps and mobile OSes are very keen to get hold of it. It’s a compromise, though: if you don’t want to give it away, you’ll have to do without some location-based services (like directions to the park). Do you want convenience or privacy? You can’t fully have both, but knowing how tracking works, and when you can or should enable it, will help you strike the balance.
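One footnote on how much precision matters: each decimal place of a latitude/longitude coordinate changes the accuracy roughly tenfold, which is why "coarse" location sharing is a meaningful middle ground. A minimal sketch of that trade-off (the function name and the example London coordinates are my own illustration, not from the article):

```python
# Rough rule of thumb: one degree of latitude is ~111 km, so each decimal
# place divides that by ten. Rounding coordinates before sharing them trades
# a street address for a neighbourhood-sized area.

def coarsen(lat: float, lon: float, places: int = 2) -> tuple[float, float]:
    """Round a coordinate pair to `places` decimals (~1.1 km at 2 places)."""
    return (round(lat, places), round(lon, places))

# A point in central London, coarsened to roughly 1 km resolution.
print(coarsen(51.50722, -0.12758))  # -> (51.51, -0.13)
```

Apps offering a "coarse location" permission do something conceptually similar on your behalf; the sketch just makes the precision knob visible.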

Source: How Location Tracking Actually Works on Your Smartphone

Of course, you can’t stop Google entirely, and if you use your browser then data will be sent to the sites you visit. It’s an unfortunate fact that this is inescapable on Android and iOS, and the alternatives aren’t quite there yet. But for a layman, this is a pretty good starter guide.

Google Reportedly Bought Your Mastercard Data in Secret, and That’s Not Even the Bad News

Bloomberg reports that, after four years of negotiations, Google purchased a trove of credit card transaction data from Mastercard, allegedly for “millions of dollars.” Google then reportedly used that data to provide select advertisers with a tool called “store sales measurement” that the company quietly announced in a blog post last year, though it failed to mention the inclusion of Mastercard data in the workflow. The tool can track how online ads lead to real-world purchases, and that extra data is designed to make Google’s ad products more appealing to advertisers. (Read: everybody makes more money this way.) The public was not informed of the reported Mastercard deal, though advertisers have had access to the transaction data for at least a year, according to Bloomberg.

This is a hell of a bombshell, when you think about it. Thanks in part to heavy government regulation, your credit card and banking data has long been private. If you wanted to spend $98 at Sephora on a Tuesday afternoon, that transaction was between you, your bank, and Sephora. It now appears that Google has found a way to weasel its way into the data pipeline that connects consumers and their purchases. If you clicked on a Sephora ad while logged in to Google in the past year and then bought stuff at Sephora with a Mastercard in the past year, there’s a chance Google knows about that, at least on some level, and uses that data to help its advertisers stuff their coffers.

[…]

This Orwellian ad engine does exist in Google’s new tool. Given the secrecy surrounding Google’s alleged Mastercard-assisted ad program, however, it’s hard to know what other tech giants are doing with our personal financial information. Amazon certainly knows a lot about the things we buy, and we learned earlier this year that the online retail giant was exploring the possibility of getting into the banking business itself. The Wall Street Journal has also reported that Amazon, like Facebook and Google, has had conversations with banks about gaining access to personal financial information.

Source: Google Reportedly Bought Your Banking Data in Secret, and That’s Not Even the Bad News

TSA says ‘Quiet Skies’ surveillance snared zero threats but put 5,000 travellers under surveillance and on no-fly lists

TSA officials were summoned to Capitol Hill Wednesday and Thursday afternoon following Globe reports on the secret program, which sparked sharp criticism because it includes extensive surveillance of domestic fliers who are not suspected of a crime or listed on any terrorist watch list.

“Quiet Skies is the very definition of Big Brother,” Senator Edward Markey of Massachusetts, a member of the Senate Commerce, Science, and Transportation committee, said of the program. “American travelers deserve to have their privacy and civil rights protected even 30,000 feet in the air.”

[…]

The teams document whether passengers fidget, use a computer, or have a “cold penetrating stare,” among other behaviors, according to agency documents.

All US citizens who enter the country from abroad are screened via Quiet Skies. Passengers may be selected through a broad, undisclosed set of criteria for enhanced surveillance by a team of air marshals on subsequent domestic flights, according to agency documents.

Dozens of air marshals told the Globe the “special mission coverage” seems to test the limits of the law, and is a waste of time and resources. Several said surveillance teams had been assigned to follow people who appeared to pose no threat — a working flight attendant, a businesswoman, a fellow law enforcement officer — and to document their actions in-flight and through airports.

[…]

The officials said about 5,000 US citizens had been closely monitored since March and none of them were deemed suspicious or merited further scrutiny, according to people with direct knowledge of the Thursday meeting.

Source: TSA says ‘Quiet Skies’ surveillance snared zero threats – The Boston Globe

Didn’t the TSA learn anything from the no-fly lists not working in the first place?!

Spyware Company Leaves ‘Terabytes’ of Selfies, Text Messages, and Location Data Exposed Online

A company that sells surveillance software to parents and employers left “terabytes of data”, including photos, audio recordings, text messages and web history, exposed in a poorly protected Amazon S3 bucket.


This story is part of When Spies Come Home, a Motherboard series about powerful surveillance software ordinary people use to spy on their loved ones.

A company that markets cell phone spyware to parents and employers left the data of thousands of its customers—and the information of the people they were monitoring—unprotected online.

The data exposed included selfies, text messages, audio recordings, contacts, location, hashed passwords and logins, and Facebook messages, among other records, according to a security researcher who asked to remain anonymous for fear of legal repercussions.

Last week, the researcher found the data on an Amazon S3 bucket owned by Spyfone, one of many companies that sell software that is designed to intercept text messages, calls, emails, and track locations of a monitored device.

Source: Spyware Company Leaves ‘Terabytes’ of Selfies, Text Messages, and Location Data Exposed Online – Motherboard

Android data slurping measured and monitored – scary amounts and loads of location tracking

Google’s passive collection of personal data from Android and iOS has been monitored and measured in a significant academic study.

The report confirms that Google is no respecter of the Chrome browser’s “incognito mode” aka “porn mode”, collecting Chrome data to add to your personal profile, as we pointed out earlier this year.

It also reveals how phone users are being tracked without realising it. How so? It’s here that the B2B parts of Google’s vast data collection network – its publisher and advertiser products – kick into life as soon as the user engages with a phone. These parts of Google receive personal data from an Android device even when the phone is stationary and not being used.

The activity has come to light thanks to research (PDF) by computer science professor Douglas Schmidt of Vanderbilt University, conducted for the nonprofit trade association Digital Content Next. It’s already been described by one privacy activist as “the most comprehensive report on Google’s data collection practices so far”.

[…]

Overall, the study discovered that Apple retrieves much less data than Google.

“The total number of calls to Apple servers from an iOS device was much lower, just 19 per cent of the number of calls to Google servers from an Android device.

Moreover, there are no ad-related calls to Apple servers, which may stem from the fact that Apple’s business model is not as dependent on advertising as Google’s. Although Apple does obtain some user location data from iOS devices, the volume of data collected is much (16x) lower than what Google collects from Android,” the study noted.

Source: Android data slurping measured and monitored • The Register

The amount of location data slurped is scary – and the phone continues to report location in many different ways, even if wifi is turned off. It’s Big Brother in your pocket, with no opt-out.

UK snooping ‘unlawful for more than decade’ – but seemingly (and amazingly) responsible

The system that allowed spy agency GCHQ access to vast amounts of personal data from telecoms companies was unlawful for more than a decade, a surveillance watchdog has ruled.

The Investigatory Powers Tribunal said that successive foreign secretaries had delegated powers without oversight.

But it added there was no evidence GCHQ had misused the system.

Privacy International criticised the “cavalier manner” in which personal data was shared.

The group brought the legal challenge and solicitor Millie Graham Wood said it was “proof positive” that the system set up to protect personal data was flawed.

“The foreign secretary was supposed to protect access to our data by personally authorising what is necessary and proportionate for telecommunications companies to provide to the agencies.

“The way that these directions were drafted risked nullifying that safeguard by delegating that power to GCHQ – a violation that went undetected by the system of commissioners for years and was seemingly consented to by all of the telecommunications companies affected.”

Under security rules introduced after the attacks on 11 September 2001, the UK’s foreign secretary had the power to direct GCHQ to obtain data from telecoms companies, with little oversight of what they were subsequently asking for.

Carte blanche

The Investigatory Powers Tribunal (IPT) – set up to investigate complaints about how personal data is handled by public bodies – ruled that most of the directions given between 2001 and 2012 had been unlawful.

The tribunal was critical of the way the government handed on requests to GCHQ, partly because phone and internet providers “would not be in any position to question the scope of the requirement” because they “would have no knowledge of the limited basis upon which the direction had been made”.

“In form, the general direction was a carte blanche. In practice, it was not treated as such and there is no evidence that GCHQ ever sought to obtain communications data which fell outside the scope of data which had been sought in the submission to the foreign secretary,” the IPT ruled.

It added that a series of improvements had been made and were in force “from at least 2014” that ensured “great care” was now taken to ensure the foreign secretary approved any changes to the information being demanded from telecoms companies.

Source: UK snooping ‘unlawful for more than decade’ – BBC News

Robocall Firm Exposes Hundreds of Thousands of US Voters’ Records

Personal details and political affiliations exposed

The server that drew Diachenko’s attention this time contained 2,584 files, which the researcher later connected to RoboCent.

The type of user data exposed via RoboCent’s bucket included:

⬖  Full Name, suffix, prefix
⬖  Phone numbers (cell and landlines)
⬖  Address with house, street, city, state, zip, precinct
⬖  Political affiliation provided by state, or inferred based on voting history
⬖  Age and birth year
⬖  Gender
⬖  Jurisdiction breakdown based on district, zip code, precinct, county, state
⬖  Demographics based on ethnicity, language, education

Other data found on the servers, but not necessarily personal data, included audio files with prerecorded political messages used for robocalls.

According to RoboCent’s website, the company was not only providing robo-calling services for political surveys and inquiries but was also selling this data in raw format.

“Clients can now purchase voter data directly from their RoboCall provider,” the company’s website reads. “We provide voter files for every need, whether it be for a new RoboCall or simply to update records for door knocking.”

The company sells voter records for a price of 3¢/record. Leaving the core of its business available online in an AWS bucket without authentication is… self-defeating.

Source: Robocall Firm Exposes Hundreds of Thousands of US Voters’ Records

Chinese mobile phone cameras are not-so-secretly recording users’ activities

It has been widely reported that software and web applications made in China are often built with a “backdoor” feature, allowing the manufacturer or the government to monitor and collect data from the user’s device.

But how exactly does the backdoor feature work? Recent discussion among mobile phone users in mainland China has shed some light on the question.

Last month, users of the Vivo NEX, a Chinese Android phone, found that when they opened certain applications on the phone, including Chinese internet giant Tencent’s QQ Browser and the travel booking app Ctrip, the device’s camera would self-activate.

Unlike most mobile phones, where the camera can be activated without giving the user any signal, the Vivo NEX has a tiny retractable camera that physically pops out of the top of the device when it is turned on.

Vivo NEX retractable camera. Photo via WeChat.

Though perhaps unintentionally, this design feature has given Chinese mobile users a tangible sense of exactly when and how they are being monitored.

One Weibo user observed that the retractable camera self-activates whenever he opens a new chat on Telegram, a messaging application designed for secure, encrypted communication.

While Telegram reacted quickly to reports of the issue and fixed the camera bug, Chinese internet giant Tencent instead defended the behaviour, arguing that its QQ browser needs the camera activated to prepare for scanning QR codes, and insisting that the camera would not take photos or record audio unless the user told it to.

This explanation was not reassuring for users, as it only revealed the degree to which the QQ browser could record users’ activities.

After the news of the self-activated camera bug spread, users started testing the issue on other applications and found that Baidu’s voice input application has access to both the camera and voice recording function, which can be launched without users’ authorization.

A Vivo NEX user found that once she had installed Baidu’s voice input system, it would activate the phone’s camera and sound recording function whenever she opened any application that allows the user to input text, including chat apps and browsers.

Baidu says that the self-activated recording is not a backdoor but a “frontdoor” application that allows the company to collect and adjust to background noise so as to prepare for and optimize its voice input function. This was not reassuring for users — any microphone collecting background noise would also unquestionably capture the voices and conversations of a user and whomever she speaks with face-to-face.

How does camera snooping affect people outside China?

These snooping features have not just affected people from mainland China, but all of those from outside the country who want to communicate with friends in China.

As the Chinese government has blocked most leading foreign social media technologies, anyone who wants to communicate with people in China has little choice but to install applications made in China, such as WeChat.

One strategy for increasing one’s mobile privacy when using Chinese-made applications is to keep all insecure applications on one device and assume that these communications will be recorded or spied upon, and to keep a second device for more secure or “clean” applications. When using an encrypted communication application like Telegram to communicate with friends in China, one also has to make sure that their friends’ mobile devices are clean.

Baidu has been notorious for snooping into users’ private data and activities. In January 2018, a government-affiliated consumer association in Jiangsu province filed a lawsuit against Baidu’s search application and mobile browser for snooping on users’ phone conversations and accessing their geo-location data without user consent. But the case was dropped in March after Baidu updated its applications to secure users’ consent for control over their mobile camera, voice recording and geo-location data, even though these controls are not essential to the applications’ functionality.

In response to public concern about these backdoor features, Baidu and other Chinese internet giants may defend themselves simply by arguing that users have consented to having their cameras activated. But given the monopolistic nature of Chinese Internet giants in the country, do ordinary users have the power — or the choice — to say no?

Source: Chinese mobile phone cameras are not-so-secretly recording users’ activities – Global Voices Advox

App Traps: How Cheap Smartphones Siphon User Data in Developing Countries

For millions of people buying inexpensive smartphones in developing countries where privacy protections are usually low, the convenience of on-the-go internet access could come with a hidden cost: preloaded apps that harvest users’ data without their knowledge.

One such app, included on thousands of Chinese-made Singtech P10 smartphones sold in Myanmar and Cambodia, sends the owner’s location and unique-device details to a mobile-advertising firm in Taiwan called General Mobile Corp., or GMobi. The app also has appeared on smartphones sold in Brazil and those made by manufacturers based in China and India, security researchers said.

Taipei-based GMobi, with a subsidiary in Shanghai, said it uses the data to show targeted ads on the devices. It also sometimes shares the data with device makers to help them learn more about their customers.

Smartphones have been billed as a transformative technology in developing markets, bringing low-cost internet access to hundreds of millions of people. But this growing population of novice consumers, most of them living in countries with lax or nonexistent privacy protections, is also a juicy target for data harvesters, according to security researchers.

Smartphone makers that allow GMobi to install its app on the phones they sell are able to use the app to send software updates, known as “firmware” updates, to their devices at no cost to them, said GMobi Chief Executive Paul Wu. That benefit is an important consideration for device makers pushing low-cost phones across emerging markets.

“If end users want a free internet service, he or she needs to suffer a little for better targeting ads,” said a GMobi spokeswoman.

[…]

Upstream Systems, a London-based mobile commerce and security firm that identified the GMobi app’s activity and shared it with the Journal, said it bought four new devices that, once activated, began sending data to GMobi via its firmware-updating app. This included 15-digit International Mobile Equipment Identity, or IMEI, numbers, along with unique codes called MAC addresses that are assigned to each piece of hardware that connects to the web. The app also sends some location data to GMobi’s servers located in Singapore, Upstream said.
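A side note on those IMEI numbers: the 15th digit is a Luhn check digit, so a harvested IMEI can be validated offline, while a randomly typed one almost always fails the check. A quick sketch of the validation (the example number is a commonly cited test IMEI, not one from Upstream’s report):

```python
def luhn_valid(number: str) -> bool:
    """Check a numeric string (e.g. a 15-digit IMEI) against the Luhn formula."""
    digits = [int(d) for d in number]
    total = 0
    # Double every second digit from the right; doubled digits > 9 lose 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("490154203237518"))  # commonly cited example IMEI -> True
print(luhn_valid("490154203237519"))  # wrong check digit -> False
```

This is part of what makes IMEIs such potent identifiers in a data set: they are structured, verifiable, and tied to one physical handset for its lifetime.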

Source: App Traps: How Cheap Smartphones Siphon User Data in Developing Countries – WSJ


I like the way even GMobi thinks users getting targeted advertising are suffering!

Mitsubishi Wants Your Driving Data, and It’s Willing to Throw in a Free Cup of Coffee to Get It

Automakers want in on the highly lucrative big data game and Mitsubishi is willing to pay for the privilege. In exchange for running the risk of jacking up its customers’ insurance premiums, the car manufacturer is offering drivers $10 off of an oil change and other rewards. Consumers will have to decide if a gift card is worth giving up their privacy.

According to the Wall Street Journal, Mitsubishi’s new smartphone app is the first of its kind. A driver can sign up and allow their driving habits to be tracked by their phone’s sensors, which monitor data points like acceleration, location, and rotation. Along the way, they’ll earn badges (reward points) based on good driving practices like staying under the speed limit. For now, the badges can be exchanged for discounted oil changes or car accessories, but the company plans to expand its incentives to other small perks like free cups of coffee by the end of the year.

It may seem like a win-win situation: You pay a little more attention to being a good driver and you get a little bonus for your efforts. But the first customer for all that data is State Auto Insurance Companies, which will be using it to create better risk models and adjust users’ premiums accordingly. It doesn’t appear that the data will be anonymized because the Journal reports that, after a trial period, insurers will be able to build a customer risk profile on users of the app that will then be used to determine rates. We reached out to Mitsubishi to ask about its anonymization of data but didn’t receive an immediate reply.
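For what “anonymized” might mean in practice: a common half-measure is pseudonymization, replacing the driver’s identity with a keyed hash. The sketch below is my own illustration (the key and driver IDs are invented, and the article does not describe Mitsubishi’s actual data handling); it also shows the catch that makes pseudonymization weaker than true anonymization: the token is stable, so every trip by one driver stays linkable.

```python
import hashlib
import hmac

# Pseudonymization via a keyed hash (HMAC-SHA256). Key and driver IDs are
# invented for this sketch. Limitation: the same driver always maps to the
# same token, so an insurer can still link all of one driver's trips.

SECRET_KEY = b"example-key-not-for-production"

def pseudonymize(driver_id: str) -> str:
    """Replace a driver ID with a stable, non-reversible 16-hex-char token."""
    return hmac.new(SECRET_KEY, driver_id.encode(), hashlib.sha256).hexdigest()[:16]

token_a = pseudonymize("driver-42")
assert token_a == pseudonymize("driver-42")  # stable per driver...
assert token_a != pseudonymize("driver-43")  # ...distinct across drivers
assert "driver-42" not in token_a            # raw ID never appears in the token
```

Building a per-customer risk profile, as the Journal describes, requires exactly this kind of stable linkage, which is why the data cannot be fully anonymous and still serve that purpose.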

Mike LaRocco, State Auto’s CEO, framed this as a benefit to consumers when speaking with the Journal. “They’ll get a much more accurate quote from day one,” he claimed. That might be true, but it does nothing to assuage fears that insurance companies could penalize drivers who don’t voluntarily give up their data.

Ford also has an app that shares data with insurance companies, but it’s not offering any of those sweet, sweet gift cards. And at a moment when many people are debating whether tech giants should be paying us for our data, one could argue that Mitsubishi is doing the right thing. But as car companies build web connectivity into their new models, we could easily see this become standard practice without offering drivers a choice or a reward. A 2016 study by McKinsey & Co. estimated that monetizing car data could be worth $450–750 billion by 2030. Of course, autonomous vehicles could become more prevalent by then. And as long as they work as promised, insurance companies will be less necessary.

[Wall Street Journal]

Source: Mitsubishi Wants Your Driving Data, and It’s Willing to Throw in a Free Cup of Coffee to Get It

‘Plane Hacker’ Roberts: I put a network sniffer on my truck to see what it was sharing. Holy crap!

Cars are turning into computers on wheels and airplanes have become flying data centres, but this increase in power and connectivity has largely happened without designing in adequate security controls.

Improving transportation security was a major strand of the recent Cyber Week security conference in Israel. A one-day event, Speed of Light, focused on transportation cybersecurity, where Roberts served as master of ceremonies.

[…]

“Israel was here, not just a couple of companies. Israel is going, ‘We as a state, we as a country, need to understand [about transportation security]’,” Roberts said. “We need to learn.”

“In other places it’s the companies. GM is great. Ford is good. Some of the German companies are good. Fiat-Chrysler Group has got a lot of work to do.”

Some industries are more advanced than others at understanding cybersecurity risks, Roberts claimed. For example, awareness in the automobile industry is ahead of that found in aviation.

“Boeing is in denial. Airbus is kind of on the fence. Some of the other industries are better.”

[…]

There’s almost nothing you can do as a user to improve car security, Roberts argued. The only thing you can do is go back to the garage every month for the automotive equivalent of Microsoft’s Patch Tuesday – updates from Ford or GM.

“You better come in once a month for your patches because if you don’t, the damn thing is not going to work.”

What about over-the-air updates? These may not always be reliable, Roberts warned.

“What happens if you’re in the middle of a dead spot? Or you’re in the middle of a developing country that doesn’t have that? What about the Toyotas that get sold to the Middle East or Far East, to countries that don’t have 4G or 5G coverage? And what happens when you move around countries?”

[…]

“I put a network sniffer on the big truck to see what it was sharing. Holy crap! The GPS, the telemetry, the tracking. There’s a lot of data this thing is sharing.

“If you turn it off you might be voiding warranties or [bypassing] security controls,” Roberts said, adding that there was also an issue about who owns the data a car generates. “Is it there to protect me or monitor me?” he mused.
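To make Roberts’ point concrete: even a sniffer that never decrypts a single payload sees the source and destination address in every IPv4 header, which alone reveals who the vehicle is talking to. A minimal sketch of that kind of header inspection, using a hand-built packet rather than a real capture from his truck:

```python
import socket
import struct

# What even a payload-blind sniffer learns: every IPv4 header carries the
# source and destination address in the clear. The 20 bytes below are a
# hand-built example header, not a real capture.

def parse_ipv4_header(raw: bytes) -> tuple[str, str, int]:
    """Return (src_ip, dst_ip, protocol) from a raw IPv4 header."""
    version_ihl, _, _, _, _, _, proto, _, src, dst = struct.unpack(
        "!BBHHHBBH4s4s", raw[:20]
    )
    assert version_ihl >> 4 == 4, "not an IPv4 packet"
    return socket.inet_ntoa(src), socket.inet_ntoa(dst), proto

# Fabricated header: 10.0.0.5 -> 203.0.113.7, protocol 6 (TCP).
header = struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0, 40, 0, 0, 64, 6, 0,
    socket.inet_aton("10.0.0.5"), socket.inet_aton("203.0.113.7"),
)
print(parse_ipv4_header(header))  # -> ('10.0.0.5', '203.0.113.7', 6)
```

Mapping those destination addresses back to telematics vendors is how you turn “holy crap, it’s sharing a lot” into a concrete list of who receives the GPS and telemetry.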

Some insurance firms offer cheaper insurance to careful drivers, based on readings from telemetry devices and sensors. Roberts is dead set against this for privacy reasons. “Insurance can go to hell. For me, getting a 5 per cent discount on my insurance is not worth accepting a tracking device from an insurance company.”

Source: ‘Plane Hacker’ Roberts: I put a network sniffer on my truck to see what it was sharing. Holy crap! • The Register

Is My Phone Recording Everything I Say? It turns out it sends screenshots and videos of what you do

Some computer science academics at Northeastern University had heard enough people talking about this technological myth that they decided to do a rigorous study to tackle it. For the last year, Elleen Pan, Jingjing Ren, Martina Lindorfer, Christo Wilson, and David Choffnes ran an experiment involving more than 17,000 of the most popular apps on Android to find out whether any of them were secretly using the phone’s mic to capture audio. The apps included those belonging to Facebook, as well as over 8,000 apps that send information to Facebook.

Sorry, conspiracy theorists: They found no evidence of an app unexpectedly activating the microphone or sending audio out when not prompted to do so. Like good scientists, they refuse to say that their study definitively proves that your phone isn’t secretly listening to you, but they didn’t find a single instance of it happening. Instead, they discovered a different disturbing practice: apps recording a phone’s screen and sending that information out to third parties.

Of the 17,260 apps the researchers looked at, over 9,000 had permission to access the camera and microphone and thus the potential to overhear the phone’s owner talking about their need for cat litter or about how much they love a certain brand of gelato. Using 10 Android phones, the researchers ran an automated program that interacted with each of those apps and then analyzed the traffic generated. (A limitation of the study is that the automated phone users couldn’t do things humans could, like creating usernames and passwords to sign into an account on an app.) They were looking specifically for any media files that were sent, particularly when they were sent to an unexpected party.

The test phones ran thousands of apps to see whether any would secretly activate the microphone.
Photo: David Choffnes (Northeastern University)

The strange practice they started to see was that screenshots and video recordings of what people were doing in apps were being sent to third-party domains. For example, when one of the phones used an app from GoPuff, a delivery start-up for people who have sudden cravings for junk food, the interaction with the app was recorded and sent to a domain affiliated with Appsee, a mobile analytics company. The video included a screen where personal information could be entered—in this case, a zip code.

[…]

In other words, until smartphone makers notify you when your screen is being recorded or give you the power to turn that ability off, you have a new thing to be paranoid about. The researchers will be presenting their work at the Privacy Enhancing Technology Symposium Conference in Barcelona next month. (While in Spain, they might want to check out the country’s most popular soccer app, which has given itself permission to access users’ smartphone mics to listen for illegal broadcasts of games in bars.)

The researchers weren’t comfortable saying for sure that your phone isn’t secretly listening to you in part because there are some scenarios not covered by their study. Their phones were being operated by an automated program, not by actual humans, so they might not have triggered apps the same way a flesh-and-blood user would. And the phones were in a controlled environment, not wandering the world in a way that might trigger them: For the first few months of the study the phones were near students in a lab at Northeastern University and thus surrounded by ambient conversation, but the phones made so much noise, as apps were constantly being played with on them, that they were eventually moved into a closet. (If the researchers did the experiment again, they would play a podcast on a loop in the closet next to the phones.) It’s also possible that the researchers could have missed audio recordings of conversations if the app transcribed the conversation to text on the phone before sending it out. So the myth can’t be entirely killed yet.

Source: Is My Phone Recording Everything I Say?

Europe is reading smartphones and using the data as a weapon to deport refugees

Across the continent, migrants are being confronted by a booming mobile forensics industry that specialises in extracting a smartphone’s messages, location history, and even WhatsApp data. That information can potentially be turned against the phone owners themselves.

In 2017 both Germany and Denmark expanded laws that enabled immigration officials to extract data from asylum seekers’ phones. Similar legislation has been proposed in Belgium and Austria, while the UK and Norway have been searching asylum seekers’ devices for years.

Following right-wing gains across the EU, beleaguered governments are scrambling to bring immigration numbers down. Tackling fraudulent asylum applications seems like an easy way to do that. As European leaders met in Brussels last week to thrash out a new, tougher framework to manage migration—which nevertheless seems insufficient to placate Angela Merkel’s critics in Germany—immigration agencies across Europe are showing new enthusiasm for laws and software that enable phone data to be used in deportation cases.

Admittedly, some refugees do lie on their asylum applications. Omar – not his real name – certainly did. He travelled to Germany via Greece. Even for Syrians like him there were few legal alternatives into the EU. But his route meant he could face deportation under the EU’s Dublin regulation, which dictates that asylum seekers must claim refugee status in the first EU country they arrive in. For Omar, that would mean settling in Greece – hardly an attractive destination considering its high unemployment and stretched social services.

Last year, more than 7,000 people were deported from Germany according to the Dublin regulation. If Omar’s phone were searched, he could have become one of them, as his location history would have revealed his route through Europe, including his arrival in Greece.

But before his asylum interview, he met Lena – also not her real name. A refugee advocate and businesswoman, Lena had read about Germany’s new surveillance laws. She encouraged Omar to throw his phone away and tell immigration officials it had been stolen in the refugee camp where he was staying. “This camp was well-known for crime,” says Lena, “so the story seemed believable.” His application is still pending.

Omar is not the only asylum seeker to hide phone data from state officials. When sociology professor Marie Gillespie researched phone use among migrants travelling to Europe in 2016, she encountered widespread fear of mobile phone surveillance. “Mobile phones were facilitators and enablers of their journeys, but they also posed a threat,” she says. In response, she saw migrants who kept up to 13 different SIM cards, hiding them in different parts of their bodies as they travelled.

[…]

Denmark is taking this a step further, by asking migrants for their Facebook passwords. Refugee groups note how the platform is being used more and more to verify an asylum seeker’s identity.

[…]

The Danish immigration agency confirmed that it does ask asylum applicants for access to their Facebook profiles. While this is not standard procedure, it can be used if a caseworker feels they need more information. If an applicant refuses consent, caseworkers tell them they are obliged to comply under Danish law. Right now, the agency only uses Facebook – not Instagram or other social platforms.

[…]

“In my view, it’s a violation of ethics on privacy to ask for a password to Facebook or open somebody’s mobile phone,” says Michala Clante Bendixen of Denmark’s Refugees Welcome movement. “For an asylum seeker, this is often the only piece of personal and private space he or she has left.”

Information sourced from phones and social media offers an alternative reality that can compete with an asylum seeker’s own testimony. “They’re holding the phone to be a stronger testament to their history than what the person is ready to disclose,” says Gus Hosein, executive director of Privacy International. “That’s unprecedented.”

Privacy campaigners note how digital information might not reflect a person’s character accurately. “Because there is so much data on a person’s phone, you can make quite sweeping judgements that might not necessarily be true,” says Christopher Weatherhead, technologist at Privacy International.

[…]

Privacy International has investigated the UK police’s ability to search phones, indicating that immigration officials could possess similar powers. “What surprised us was the level of detail of these phone searches. Police could access information even you don’t have access to, such as deleted messages,” Weatherhead says.

His team found that British police are aided by Israeli mobile forensic company Cellebrite. Using their software, officials can access search history, including deleted browsing history. It can also extract WhatsApp messages from some Android phones.

Source: Europe is using smartphone data as a weapon to deport refugees | WIRED UK

Google allows outside app developers to read people’s Gmails

  • Google promised a year ago to provide more privacy to Gmail users, but The Wall Street Journal reports that hundreds of app makers have access to millions of inboxes belonging to Gmail users.
  • The outside app companies receive access to messages from Gmail users who signed up for things like price-comparison services or automated travel-itinerary planners, according to The Journal.
  • Some of these companies train software to scan the email, while others enable their workers to pore over private messages, the report says.
  • What isn’t clear from The Journal’s story is whether Google is doing anything differently than Microsoft or other rival email services.

Employees working for hundreds of software developers are reading the private messages of Gmail users, The Wall Street Journal reported on Monday.

A year ago, Google promised to stop scanning the inboxes of Gmail users, but the company has not done much to protect Gmail inboxes obtained by outside software developers, according to the newspaper. Gmail users who signed up for “email-based services” like “shopping price comparisons” and “automated travel-itinerary planners” are most at risk of having their private messages read, The Journal reported.

Hundreds of app developers electronically “scan” inboxes of the people who signed up for some of these programs, and in some cases, employees do the reading, the paper reported. Google declined to comment.

The revelation comes at a bad time for Google and Gmail, the world’s largest email service, with 1.4 billion users. Top tech companies are under pressure in the United States and Europe to do more to protect user privacy and be more transparent about any parties with access to people’s data. The increased scrutiny follows the Cambridge Analytica scandal, in which a data firm was accused of misusing the personal information of more than 80 million Facebook users in an attempt to sway elections.

It’s not news that Google and many top email providers enable outside developers to access users’ inboxes. In most cases, the people who signed up for the price-comparison deals or other programs agreed to provide access to their inboxes as part of the opt-in process.

Gmail’s opt-in alert spells out generally what a user is agreeing to. Image: Google

In Google’s case, outside developers must pass a vetting process, and as part of that, Google ensures they have an acceptable privacy agreement, The Journal reported, citing a Google representative.

What is unclear is how closely these outside developers adhere to their agreements and whether Google does anything to ensure they do, as well as whether Gmail users are fully aware that individual employees may be reading their emails, as opposed to an automated system, the report says.

Mikael Berner, the CEO of Edison Software, a Gmail developer that offers a mobile app for organizing email, told The Journal that its employees had read emails from hundreds of Gmail users as part of an effort to build a new feature. An executive at another company said employees’ reading of emails had become “common practice.”

Companies that spoke to The Journal confirmed that the practice was specified in their user agreements and said they had implemented strict rules for employees regarding the handling of email.

It’s interesting to note that, judging from The Journal’s story, very little indicates that Google is doing anything different from Microsoft or other top email providers. According to the newspaper, nothing in Microsoft or Yahoo’s policy agreements explicitly allows people to read others’ emails.

Source: Google reportedly allows outside app developers to read people’s Gmails – INSIDER

Which also shows: no one ever reads end-user agreements. I’m pretty sure nobody caught the bit that said “you are also allowing us to read all your emails” when they signed up.

Dear Samsung mobe owners: It may leak your private pics to randoms

Samsung’s Messages app bundled with the South Korean giant’s latest smartphones and tablets may silently send people’s private photos to random contacts, it is claimed.

An unlucky bunch of Sammy phone fans – including owners of Galaxy S9, S9+ and Note 8 gadgets – have complained on Reddit and the official support forums that the application texted their snaps without permission.

One person said the app sent their photo albums to their girlfriend at 2.30am without them knowing – there was no trace of the transfer on the phone, although it showed up in their T-Mobile US account. The pictures, like the recipients, are seemingly picked at random from the handset’s contacts, and the messages do not appear in the application’s sent box. The misbehaving app is the default messaging tool on Samsung’s Android devices.

“Last night around 2:30am, my phone sent [my girlfriend] my entire photo gallery over text but there was no record of it on my messages app,” complained one confused Galaxy S9+ owner. “However, there was record of it [in my] T-Mobile logs.”

Another S9+ punter chipped in: “Oddly enough, my wife’s phone did that last night, and mine did it the night before. I think it has something to do with the Samsung SMS app being updated from the Galaxy Store. When her phone texted me her gallery, it didn’t show up on her end – and vice versa.”

Source: Dear Samsung mobe owners: It may leak your private pics to randoms • The Register

This popular Facebook app publicly exposed your data for years

Nametests.com, the website behind the quizzes, recently fixed a flaw that publicly exposed information of their more than 120 million monthly users — even after they deleted the app. At my request, Facebook donated $8,000 to the Freedom of the Press Foundation as part of their Data Abuse Bounty Program.

[…]

While loading a test, the website would fetch my personal information and display it on the webpage. Here’s where it got my personal information from:

http://nametests.com/appconfig_user

In theory, every website could have requested this data. Note that the data also includes a ‘token’ which gives access to all data the user authorised the application to access, such as photos, posts and friends.

I was shocked to see that this data was publicly available to any third-party that requested it.

In a normal situation, other websites would not be able to access this information: web browsers enforce the same-origin policy to prevent exactly that. In this case, however, the data was wrapped in a JavaScript file, which is an exception to this rule.

One of the basic principles of the web is that JavaScript files can be included by any website. Since NameTests delivered its users’ personal data as a JavaScript file, virtually any website could load the file and read the data.

To verify it would actually be that easy to steal someone’s information, I set up a website that would connect to NameTests and get some information about my visitor. NameTests would also provide a secret key called an access token, which, depending on the permissions granted, could be used to gain access to a visitor’s posts, photos and friends. It would only take one visit to my website to gain access to someone’s personal information for up to two months.
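The mechanism is easy to simulate outside a browser. The sketch below uses invented names and data (the real endpoint returned far more); the point is that a JSONP-style response is executable code that hands its payload to whatever callback the including page defines, so the browser’s cross-origin protections on raw data never come into play:

```python
import json

# A JSONP-style endpoint wraps the user's record in a call to a
# caller-supplied callback, turning private data into executable code.
def jsonp_response(callback_name):
    user = {"name": "Alice", "token": "EAAB-example-token"}  # hypothetical record
    return f"{callback_name}({json.dumps(user)})"

# An attacking page simply defines its own callback and includes the
# script via <script src=...>, which browsers allow across origins.
stolen = {}
def attacker_callback(data):
    stolen.update(data)

# Simulate the browser executing the fetched script on the attacker's page.
exec(jsonp_response("attacker_callback"))
```

After a single "visit", `stolen` holds the name and the access token, which is what made the NameTests flaw exploitable by any site a victim happened to open.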

Video proof:

An unauthorised website getting access to my Facebook information

As you can see in the video, NameTests would still reveal your identity even after deleting the app. In order to prevent this from happening, the user would have had to manually delete the cookies on their device, since NameTests.com does not offer a log out functionality.

Source: This popular Facebook app publicly exposed your data for years

Facebook Patent Imagines Triggering Your Phone’s Mic When a Hidden Signal Plays on TV

You may have seen the ads that Facebook has been running on TV in a full-court press to apologize for abusing users’ privacy. They’re embarrassing. And, it turns out, they may be a sign of things to come. Based on a recently published patent application, Facebook could one day use ads on television to further violate your privacy once you’ve forgotten about all those other times.

First spotted by Metro, the patent is titled “broadcast content view analysis based on ambient audio recording.” (PDF) It describes a system in which an “ambient audio fingerprint or signature” that’s inaudible to the human ear could be embedded in broadcast content like a TV ad. When a hypothetical user is watching this ad, the audio fingerprint could trigger their smartphone or another device to turn on its microphone, begin recording audio and transmit data about it to Facebook.
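Detecting such a marker on-device is technically straightforward. The patent names no frequencies or algorithms, so the 19 kHz marker and the Goertzel detector below are assumptions purely for illustration; Goertzel measures the power of a single frequency and is cheap enough to run continuously on a phone:

```python
import math

def goertzel_power(samples, sample_rate, freq):
    # Goertzel algorithm: power of one target frequency in a sample block.
    # Far cheaper than a full FFT when only one frequency matters.
    coeff = 2 * math.cos(2 * math.pi * freq / sample_rate)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Hypothetical 19 kHz marker: inaudible to most adults, yet within the range
# of ordinary TV speakers and phone microphones at a 44.1 kHz sampling rate.
RATE, MARKER = 44100, 19000
with_marker = [math.sin(2 * math.pi * MARKER * n / RATE) for n in range(1024)]
silence = [0.0] * 1024

marker_present = goertzel_power(with_marker, RATE, MARKER)
marker_absent = goertzel_power(silence, RATE, MARKER)
```

A device would compare the detected power against a threshold and, per the patent, switch its microphone into full recording mode once the marker is found.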

Diagram of soundwave containing signal, triggering device, and recording ambient audio.
Image: USPTO

Everything in the patent is written in legalese and is a bit vague about what happens to the audio data. One example scenario imagines that various ambient audio would be eliminated and the content playing on the broadcast would be identified. Data would be collected about the user’s proximity to the audio. Then, the identifying information, time, and identity of the Facebook user would be sent to the social media company for further processing.

In addition to all the data users voluntarily give up, and the incidental data it collects through techniques like browser fingerprinting, Facebook would use this audio information to figure out which ads are most effective. For example, if a user walked away from the TV or changed the channel as soon as the ad began to play, it might consider the ad ineffective or on a subject the user doesn’t find interesting. If the user stays where they are and the audio is loud and clear, Facebook could compare that seemingly effective ad with your other data to make better suggestions for its advertising clients.

An example of a broadcasting device communicating with the network and identifying various users in a household.
Image: USPTO

Yes, this is creepy as hell, and feels like someone patenting the peephole hidden behind a nondescript painting.

Source: Facebook Patent Imagines Triggering Your Phone’s Mic When a Hidden Signal Plays on TV

Facebook, Google, Microsoft scolded for tricking people into spilling their private info

Five consumer privacy groups have asked the European Data Protection Board to investigate how Facebook, Google, and Microsoft design their software to see whether it complies with the General Data Protection Regulation (GDPR).

Essentially, the tech giants are accused of crafting their user interfaces so that netizens are fooled into clicking away their privacy, and handing over their personal information.

In a letter sent today to chairwoman Andrea Jelinek, the BEUC (Bureau Européen des Unions de Consommateurs), the Norwegian Consumer Council (Forbrukerrådet), Consumers International, Privacy International and ANEC (just too damn long to spell out) contend that the three tech giants “employed numerous tricks and tactics to nudge or push consumers toward giving consent to sharing as much data for as many purposes as possible.”

The letter coincides with the publication of a Forbrukerrådet report, “Deceived By Design,” that claims “tech companies use dark patterns to discourage us from exercising our rights to privacy.”

Dark patterns here refers to app interface design choices that attempt to influence users to do things they may not want to do because they benefit the software maker.

The report faults Google, Facebook and, to a lesser degree, Microsoft for employing default settings that dispense with privacy. It also says they use misleading language, give users an illusion of control, conceal pro-privacy choices, offer take-it-or-leave-it choices and use design patterns that make it more laborious to choose privacy.

It argues that dark patterns deprive users of control, a central requirement under GDPR.

As an example of linguistic deception, the report cites Facebook text that seeks permission to use facial recognition on images:

If you keep face recognition turned off, we won’t be able to use this technology if a stranger uses your photo to impersonate you. If someone uses a screen reader, they won’t be told when you’re in a photo unless you’re tagged.

The way this is worded, the report says, pushes Facebook users to accept facial recognition by suggesting there’s a risk of impersonation if they refuse. And it implies there’s something unethical about depriving those forced to use screen readers of image descriptions, a practice known as “confirmshaming.”

Source: Facebook, Google, Microsoft scolded for tricking people into spilling their private info • The Register

Red Shell packaged games (Civ VI, Total War, ESO, KSP and more) contain a spyware which tracks your Internet activity outside of the game

Red Shell is spyware that tracks data on your PC and shares it with third parties. On their website they phrase it all in very harmless language, but the fact is that this is software from someone I don’t trust and whom I never invited, which is looking at my data and running on my PC against my will. This should have no place in a full-price PC game, and in no games at all if it were up to me.

I make this thread to raise awareness of these user-unfriendly marketing practices and data-mining tools, which are common on the mobile market and are now flooding over to our PC games market. As a person and a gamer I refuse to be data-mined. My data is my own and you have no business making money off it.

The announcement yesterday was only from “Holy Potatoes! We’re in Space?!”, but I would consider all their games at risk of containing that spyware if they choose to include it again, with or without announcement. The publisher of this one title is Daedalic Entertainment, while the others are self-published. It could be interesting to check whether other Daedalic Entertainment games contain that spyware as well; I had no time to do that.

Reddit [PSA] RED SHELL Spyware – “Holy Potatoes! We’re in Space?!” integrated and removed it after complaints

and
[PSA] Civ VI, Total War, ESO, KSP and more contain a spyware which tracks your Internet activity outside of the game (x-post r/Steam)

Addresses to block:
redshell.io
api.redshell.io
treasuredata.com
api.treasuredata.com
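One common way to apply such a block system-wide (my suggestion, not from the original thread) is a hosts-file sinkhole, redirecting the domains to a non-routable address; note that this blocks them for every program on the machine, not just games:

```
# Append to /etc/hosts (Linux/macOS) or
# C:\Windows\System32\drivers\etc\hosts (Windows)
0.0.0.0 redshell.io
0.0.0.0 api.redshell.io
0.0.0.0 treasuredata.com
0.0.0.0 api.treasuredata.com
```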

Facebook gave some companies special access to data on users’ friends

Facebook granted a select group of companies special access to its users’ records even after the point in 2015 at which the company claims to have stopped sharing such data with app developers.

According to the Wall Street Journal, which cited court documents, unnamed Facebook officials and other unnamed sources, Facebook made special agreements with certain companies called “whitelists,” which gave them access to extra information about a user’s friends. This includes data such as phone numbers and “friend links,” which measure the degree of closeness between users and their friends.

These deals were made separately from the company’s data-sharing agreements with device manufacturers such as Huawei, which Facebook disclosed earlier this week after a New York Times report on the arrangement.

Source: Facebook gave some companies special access to data on users’ friends

The hits keep coming for Facebook: Web giant made 14m people’s private posts public

About 14 million people were affected by a bug that, for a nine-day span between May 18 and 27, caused profile posts to be set as public by default, allowing any Tom, Dick or Harriet to view the material.

“We recently found a bug that automatically suggested posting publicly when some people were creating their Facebook posts. We have fixed this issue and starting today we are letting everyone affected know and asking them to review any posts they made during that time,” Facebook chief privacy officer Erin Egan said in a statement to The Register.

Source: The hits keep coming for Facebook: Web giant made 14m people’s private posts public • The Register

You know that silly fear about Alexa recording everything and leaking it online? It just happened

It’s time to break out your “Alexa, I Told You So” banners – because a Portland, Oregon, couple received a phone call from one of the husband’s employees earlier this month, telling them she had just received a recording of them talking privately in their home.

“Unplug your Alexa devices right now,” the staffer told the couple, who did not wish to be fully identified, “you’re being hacked.”

At first the couple thought it might be a hoax call. However, the employee – over a hundred miles away in Seattle – confirmed the leak by revealing the pair had just been talking about their hardwood floors.

The recording had been sent from the couple’s Alexa-powered Amazon Echo to the phone of the employee, who is in the husband’s contacts list, and she forwarded the audio to the wife, Danielle, who was amazed to hear herself talking about their floors. Suffice to say, this episode was unexpected. The couple had not instructed Alexa to send a copy of their conversation to someone else.

[…]

According to Danielle, Amazon confirmed that it was the voice-activated digital assistant that had recorded and sent the file to a virtual stranger, and apologized profusely, but gave no explanation for how it may have happened.

“They said ‘our engineers went through your logs, and they saw exactly what you told us, they saw exactly what you said happened, and we’re sorry.’ He apologized like 15 times in a matter of 30 minutes and he said we really appreciate you bringing this to our attention, this is something we need to fix!”

She said she’d asked for a refund for all their Alexa devices – something the company has so far demurred from agreeing to.

Alexa, what happened? Sorry, I can’t respond to that right now

We asked Amazon for an explanation, and today the US giant responded confirming its software screwed up:

Amazon takes privacy very seriously. We investigated what happened and determined this was an extremely rare occurrence. We are taking steps to avoid this from happening in the future.

For this to happen, something has gone very seriously wrong with the Alexa device’s programming.

The machines are designed to constantly listen out for the “Alexa” wake word, filling a one-second audio buffer from its microphone at all times in anticipation of a command. When the wake word is detected in the buffer, it records what is said until there is a gap in the conversation, and sends the audio to Amazon’s cloud system to transcribe, figure out what needs to be done, and respond to it.
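That rolling-buffer design can be sketched as follows (a simplification with invented names and parameters; real wake-word spotting is an on-device model, stubbed out here as a flag supplied by the caller):

```python
from collections import deque

SAMPLE_RATE = 16000  # assumed mic sample rate; one second of rolling audio

class WakeWordListener:
    def __init__(self):
        # Always-on rolling buffer: old samples fall off the back, so at
        # most ~1 s of audio is retained until the wake word is detected.
        self.buffer = deque(maxlen=SAMPLE_RATE)
        self.recording = None  # None means "not currently recording"

    def feed(self, samples, wake_word_detected=False):
        if self.recording is not None:
            # Already capturing a command: keep appending until the caller
            # detects a pause and ships the audio to the cloud.
            self.recording.extend(samples)
        else:
            self.buffer.extend(samples)
            if wake_word_detected:
                # Promote the buffered second into a real recording so the
                # start of the command isn't clipped.
                self.recording = list(self.buffer)

listener = WakeWordListener()
listener.feed([0.0] * 32000)             # ambient audio: only last 16000 kept
listener.feed([0.1] * 100, wake_word_detected=True)
listener.feed([0.2] * 100)               # command audio keeps accumulating
```

The privacy-relevant property is that nothing leaves the device while `recording` is `None`; the Portland incident shows what happens when a misheard wake word flips that state without the owners noticing.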

[…]

A spokesperson for Amazon has been in touch with more details on what happened during the Alexa Echo blunder, at least from their point of view. We’re told the device misheard its wake-up word while overhearing the couple’s private chat, started processing talk of wood floorings as commands, and it all went downhill from there. Here is Amazon’s explanation:

The Echo woke up due to a word in background conversation sounding like “Alexa.” Then, the subsequent conversation was heard as a “send message” request. At which point, Alexa said out loud “To whom?” At which point, the background conversation was interpreted as a name in the customer’s contact list. Alexa then asked out loud, “[contact name], right?” Alexa then interpreted background conversation as “right.” As unlikely as this string of events is, we are evaluating options to make this case even less likely.

Source: You know that silly fear about Alexa recording everything and leaking it online? It just happened • The Register

Google sued for ‘clandestine tracking’ of 4.4m UK iPhone users’ browsing data

Google is being sued in the high court for as much as £3.2bn for the alleged “clandestine tracking and collation” of personal information from 4.4 million iPhone users in the UK.

The collective action is being led by former Which? director Richard Lloyd over claims Google bypassed the privacy settings of Apple’s Safari browser on iPhones between August 2011 and February 2012 in order to divide people into categories for advertisers.

At the opening of an expected two-day hearing in London on Monday, lawyers for Lloyd’s campaign group Google You Owe Us told the court the information collected by Google included race, physical and mental health, political leanings, sexuality, social class, finances, shopping habits and location data.

Hugh Tomlinson QC, representing Lloyd, said information was then “aggregated” and users were put into groups such as “football lovers” or “current affairs enthusiasts” for the targeting of advertising.

Tomlinson said the data was gathered through “clandestine tracking and collation” of browsing on the iPhone, known as the “Safari Workaround” – an activity he said was exposed by a PhD researcher in 2012. Tomlinson said Google has already paid $39.5m to settle claims in the US relating to the practice. Google was fined $22.5m for the practice by the US Federal Trade Commission in 2012 and forced to pay $17m to 37 US states.

Speaking ahead of the hearing, Lloyd said: “I believe that what Google did was quite simply against the law.

“Their actions have affected millions in England and Wales and we’ll be asking the judge to ensure they are held to account in our courts.”

The campaign group hopes to win at least £1bn in compensation for an estimated 4.4 million iPhone users. Court filings show Google You Owe Us could be seeking as much as £3.2bn, meaning claimants could receive £750 per individual if successful.

Google contends the type of “representative action” being brought against it by Lloyd is unsuitable and should not go ahead. The company’s lawyers said there is no suggestion the Safari Workaround resulted in any information being disclosed to third parties.

Source: Google sued for ‘clandestine tracking’ of 4.4m UK iPhone users’ browsing data | Technology | The Guardian

Note: Google does not contest the Safari Workaround itself, though.