Facebook gave some companies special access to data on users’ friends

Facebook granted a select group of companies special access to its users’ records even after 2015, the point at which the company claims to have stopped sharing such data with app developers.

According to the Wall Street Journal, which cited court documents, unnamed Facebook officials and other unnamed sources, Facebook struck special agreements, known as “whitelists,” with certain companies, giving them access to extra information about users’ friends. This included data such as phone numbers and “friend links,” which measure the degree of closeness between users and their friends.

These deals were made separately from the company’s data-sharing agreements with device manufacturers such as Huawei, which Facebook disclosed earlier this week after a New York Times report on the arrangement.

Source: Facebook gave some companies special access to data on users’ friends

The hits keep coming for Facebook: Web giant made 14m people’s private posts public

About 14 million people were affected by a bug that, for a nine-day span between May 18 and 27, caused profile posts to be set as public by default, allowing any Tom, Dick or Harriet to view the material.

“We recently found a bug that automatically suggested posting publicly when some people were creating their Facebook posts. We have fixed this issue and starting today we are letting everyone affected know and asking them to review any posts they made during that time,” Facebook chief privacy officer Erin Egan said in a statement to The Register.

Source: The hits keep coming for Facebook: Web giant made 14m people’s private posts public • The Register

You know that silly fear about Alexa recording everything and leaking it online? It just happened

It’s time to break out your “Alexa, I Told You So” banners – because a Portland, Oregon, couple received a phone call from one of the husband’s employees earlier this month, telling them she had just received a recording of them talking privately in their home.

“Unplug your Alexa devices right now,” the staffer told the couple, who did not wish to be fully identified, “you’re being hacked.”

At first the couple thought it might be a hoax call. However, the employee – over a hundred miles away in Seattle – confirmed the leak by revealing the pair had just been talking about their hardwood floors.

The recording had been sent from the couple’s Alexa-powered Amazon Echo to the phone of the employee, who is in the husband’s contact list, and she forwarded the audio to the wife, Danielle, who was amazed to hear herself talking about their floors. Suffice to say, this episode was unexpected: the couple had not instructed Alexa to send a copy of their conversation to anyone else.

[…]

According to Danielle, Amazon confirmed that it was the voice-activated digital assistant that had recorded and sent the file to a virtual stranger, and apologized profusely, but gave no explanation for how it may have happened.

“They said ‘our engineers went through your logs, and they saw exactly what you told us, they saw exactly what you said happened, and we’re sorry.’ He apologized like 15 times in a matter of 30 minutes and he said we really appreciate you bringing this to our attention, this is something we need to fix!”

She said she’d asked for a refund for all their Alexa devices – something the company has so far demurred from agreeing to.

Alexa, what happened? Sorry, I can’t respond to that right now

We asked Amazon for an explanation, and today the US giant responded confirming its software screwed up:

Amazon takes privacy very seriously. We investigated what happened and determined this was an extremely rare occurrence. We are taking steps to avoid this from happening in the future.

For this to happen, something has gone very seriously wrong with the Alexa device’s programming.

The machines are designed to constantly listen out for the “Alexa” wake word, keeping a one-second audio buffer from the microphone filled at all times in anticipation of a command. When the wake word is detected in the buffer, the device records what is said until there is a gap in the conversation, then sends the audio to Amazon’s cloud system to transcribe it, figure out what needs to be done, and respond.
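That always-on loop can be sketched in a few lines. This is a toy simulation, not Amazon’s code: text tokens stand in for audio frames, and an empty string stands in for a silence gap.

```python
from collections import deque

WAKE_WORD = "alexa"
SILENCE = ""  # an empty frame stands in for a gap in conversation

def listen(frames, buffer_len=4):
    """Toy model of the always-on loop: keep a small rolling buffer,
    start recording on the wake word, stop at a silence gap, then
    hand the captured audio off (here: return it) for cloud processing."""
    buffer = deque(maxlen=buffer_len)   # stands in for the ~1s audio buffer
    recording = None
    captured = []
    for frame in frames:
        if recording is None:
            buffer.append(frame)
            if WAKE_WORD in buffer:     # wake word spotted in the buffer
                recording = []
        elif frame == SILENCE:          # gap in conversation: stop, "upload"
            captured.append(" ".join(recording))
            recording = None
            buffer.clear()
        else:
            recording.append(frame)
    return captured

# A near-miss word triggers the same path as the real wake word:
frames = ["nice", "hardwood", "alexa", "send", "a", "message", "", "more", "chat"]
print(listen(frames))  # → ['send a message']
```

The weak link is the first branch: anything the detector mistakes for the wake word flips the device into recording mode, and everything up to the next pause goes to the cloud.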

[…]

A spokesperson for Amazon has been in touch with more details on what happened during the Alexa Echo blunder, at least from their point of view. We’re told the device misheard its wake-up word while overhearing the couple’s private chat, started processing talk of wood floorings as commands, and it all went downhill from there. Here is Amazon’s explanation:

The Echo woke up due to a word in background conversation sounding like “Alexa.” Then, the subsequent conversation was heard as a “send message” request. At which point, Alexa said out loud “To whom?” At which point, the background conversation was interpreted as a name in the customer’s contact list. Alexa then asked out loud, “[contact name], right?” Alexa then interpreted background conversation as “right.” As unlikely as this string of events is, we are evaluating options to make this case even less likely.
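Amazon’s chain of events reads like a small state machine in which ambient speech keeps being accepted as the answer to each prompt. Here is a hypothetical reconstruction; the matching logic and names are invented for illustration, not taken from Amazon.

```python
def alexa_send_flow(utterances, contacts):
    """Toy reconstruction of the confirmation chain in Amazon's explanation.
    Each ambient utterance is (mis)heard as the answer to the next prompt."""
    state = "idle"
    recipient = None
    log = []  # what the device would say/do out loud
    for heard in utterances:
        if state == "idle" and "alexa" in heard:
            state = "awaiting_command"          # misheard wake word
        elif state == "awaiting_command" and "send message" in heard:
            log.append("To whom?")
            state = "awaiting_recipient"
        elif state == "awaiting_recipient":
            # background speech is matched against the contact list
            recipient = next((c for c in contacts if c in heard), None)
            if recipient:
                log.append(f"{recipient}, right?")
                state = "awaiting_confirmation"
        elif state == "awaiting_confirmation" and "right" in heard:
            log.append(f"[message sent to {recipient}]")
            state = "idle"
    return log

chat = ["something like alexa", "could you send message there",
        "we were talking to sam about floors", "yeah that's right"]
print(alexa_send_flow(chat, contacts=["sam", "kim"]))
```

Every prompt in the chain is answered by coincidence, which is why Amazon calls the sequence unlikely; the sketch shows there is no step at which silence, rather than a plausible-sounding phrase, would have stopped it.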

Source: You know that silly fear about Alexa recording everything and leaking it online? It just happened • The Register

Google sued for ‘clandestine tracking’ of 4.4m UK iPhone users’ browsing data

Google is being sued in the high court for as much as £3.2bn for the alleged “clandestine tracking and collation” of personal information from 4.4 million iPhone users in the UK.

The collective action is being led by former Which? director Richard Lloyd over claims Google bypassed the privacy settings of Apple’s Safari browser on iPhones between August 2011 and February 2012 in order to divide people into categories for advertisers.

At the opening of an expected two-day hearing in London on Monday, lawyers for Lloyd’s campaign group Google You Owe Us told the court the information collected by Google included race, physical and mental health, political leanings, sexuality, social class, finances, shopping habits and location data.

Hugh Tomlinson QC, representing Lloyd, said information was then “aggregated” and users were put into groups such as “football lovers” or “current affairs enthusiasts” for the targeting of advertising.

Tomlinson said the data was gathered through “clandestine tracking and collation” of browsing on the iPhone, known as the “Safari Workaround” – an activity he said was exposed by a PhD researcher in 2012. Tomlinson said Google has already paid $39.5m to settle claims in the US relating to the practice. Google was fined $22.5m for the practice by the US Federal Trade Commission in 2012 and forced to pay $17m to 37 US states.

Speaking ahead of the hearing, Lloyd said: “I believe that what Google did was quite simply against the law.

“Their actions have affected millions in England and Wales and we’ll be asking the judge to ensure they are held to account in our courts.”

The campaign group hopes to win at least £1bn in compensation for an estimated 4.4 million iPhone users. Court filings show Google You Owe Us could be seeking as much as £3.2bn, meaning claimants could receive £750 per individual if successful.

Google contends the type of “representative action” being brought against it by Lloyd is unsuitable and should not go ahead. The company’s lawyers said there is no suggestion the Safari Workaround resulted in any information being disclosed to third parties.

Source: Google sued for ‘clandestine tracking’ of 4.4m UK iPhone users’ browsing data | Technology | The Guardian

Note: Google does not contest that the Safari Workaround took place, though.

Teensafe spying app leaked thousands of user passwords

At least one server used by an app for parents to monitor their teenagers’ phone activity has leaked tens of thousands of accounts of both parents and children.

The mobile app, TeenSafe, bills itself as a “secure” monitoring app for iOS and Android, which lets parents view their child’s text messages and location, monitor who they’re calling and when, access their web browsing history, and find out which apps they have installed.

Although teen monitoring apps are controversial and privacy-invasive, the company says it doesn’t require parents to obtain the consent of their children.

But the Los Angeles, Calif.-based company left its servers, hosted on Amazon’s cloud, unprotected and accessible by anyone without a password.

Source: Teen phone monitoring app leaked thousands of user passwords | ZDNet

Which basically means that, beyond nosy parents spying on their children, anyone else could do so too.

Tracking Firm LocationSmart Leaked Location Data for Customers of All Major U.S. Mobile Carriers Without Consent in Real Time Via Its Web Site

LocationSmart, a U.S. based company that acts as an aggregator of real-time data about the precise location of mobile phone devices, has been leaking this information to anyone via a buggy component of its Web site — without the need for any password or other form of authentication or authorization — KrebsOnSecurity has learned. The company took the vulnerable service offline early this afternoon after being contacted by KrebsOnSecurity, which verified that it could be used to reveal the location of any AT&T, Sprint, T-Mobile or Verizon phone in the United States to an accuracy of within a few hundred yards.

Source: Tracking Firm LocationSmart Leaked Location Data for Customers of All Major U.S. Mobile Carriers Without Consent in Real Time Via Its Web Site — Krebs on Security

Scarily, this means it can still be used to track anyone if you’re willing to pay for the service.

UK Watchdog Calls for Face Recognition Ban Over 90 Percent False-Positive Rate

As face recognition in public places becomes more commonplace, Big Brother Watch is especially concerned with false identification. In May, South Wales Police revealed that its face-recognition software had erroneously flagged thousands of attendees of a soccer game as a match for criminals; 92 percent of the matches were wrong. In a statement to the BBC, Matt Jukes, the chief constable in South Wales, said “we need to use technology when we’ve got tens of thousands of people in those crowds to protect everybody, and we are getting some great results from that.”

If someone is misidentified as a criminal or flagged, police may engage and ask for further identification. Big Brother Watch argues that this amounts to “hidden identity checks” that require people to “prove their identity and thus their innocence.” Some 110 people were stopped at the event after being flagged, leading to 15 arrests.

Simply walking through a crowd could lead to an identity check, but it doesn’t end there. South Wales reported more than 2,400 “matches” between May 2017 and March 2018, but ultimately made only 15 connecting arrests. The thousands of photos taken, however, are still stored in the system, with the overwhelming majority of people having no idea they even had their photo taken.

Source: UK Watchdog Calls for Face Recognition Ban Over 90 Percent False-Positive Rate

Facebook admits it does track non-users, for their own good

Facebook’s apology-and-explanation machine grinds on, with The Social Network™ posting detail on one of its most controversial activities – how it tracks people who don’t use Facebook.

The company explained that the post is a partial response to questions CEO Mark Zuckerberg was unable to answer during his Senate and congressional hearings.

It’s no real surprise that someone using their Facebook Login to sign in to other sites is tracked, but the post by product management director David Baser goes into (a little) detail on other tracking activities – some of which have been known to the outside world for some time, occasionally denied by Facebook, and apparently mysteries only to Zuck.

When non-Facebook sites add a “Like” button (a social plugin, in Baser’s terminology), visitors to those sites are tracked: Facebook gets their IP address, browser and OS fingerprint, and visited site.

If that sounds a bit like the datr cookie dating from 2011, you wouldn’t be far wrong.
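To make the mechanism concrete, here is a rough sketch of the request a browser fires off when it renders an embedded Like button. The `plugins/like.php` URL shape is Facebook’s public plugin endpoint; the point is what travels with the request, whether or not the visitor has an account.

```python
from urllib.parse import urlencode

def like_button_request(page_url, user_agent):
    """Illustrative sketch: when a third-party page embeds a Like button,
    the browser fetches it from Facebook, and Facebook's server sees the
    visited page (in the URL and the Referer header), the browser/OS
    string, and - at the network layer - the visitor's IP address."""
    url = "https://www.facebook.com/plugins/like.php?" + urlencode(
        {"href": page_url})
    headers = {
        "Referer": page_url,       # the site being visited
        "User-Agent": user_agent,  # browser and OS fingerprint
        # the source IP address travels with the TCP connection itself
    }
    return url, headers

url, headers = like_button_request(
    "https://example.com/article", "Mozilla/5.0 (X11; Linux x86_64)")
print(url)
```

No click is needed: merely loading a page with the plugin generates this request, which is what makes the non-user tracking possible.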

Facebook denied non-user tracking until 2015, at which time it emphasised that it was only gathering non-users’ interactions with Facebook users. That explanation didn’t satisfy everyone, which is why, earlier this year, The Social Network™ was told to quit tracking Belgians who haven’t signed up.

Source: Facebook admits it does track non-users, for their own good • The Register

The Golden State Killer Suspect’s DNA Was in a Publicly Available Database, and Yours Might Be Too

Plenty of people have voluntarily uploaded their DNA to GEDmatch and other databases, often with real names and contact information. It’s what you do if you’re an adopted kid looking for a long-lost parent, or a genealogy buff curious about whether you have any cousins still living in the old country. GEDmatch requires that you make your DNA data public if you want to use their comparison tools, although you don’t have to attach your real name. And they’re not the only database that has helped law enforcement track people down without their knowledge.

How DNA Databases Help Track People Down

We don’t know exactly what samples or databases were used in the Golden State Killer’s case; the Sacramento County District Attorney’s office gave very little information and hasn’t confirmed any further details. But here are some things that are possible.

Y chromosome data can lead to a good guess at an unknown person’s last name.

Cis men typically have an X and a Y chromosome, and cis women two X’s. That means the Y chromosome is passed down from genetic males to their offspring—for example, from father to son. Since last names are also often handed down the same way, in many families you’ll share a surname with anybody who shares your Y chromosome.

A 2013 Science paper described how a small amount of Y chromosome data should be enough to identify surnames for an estimated 12 percent of white males in the US. (That method would find the wrong surname for 5 percent, and the rest would come back as unknown.) As more people upload their information to public databases, the authors warned, the success rate will only increase.

This is exactly the technique that genealogical consultant Colleen Fitzpatrick used to narrow down a pool of suspects in an Arizona cold case. She seems to have used short tandem repeat (STR) data from the suspect’s Y chromosome to search the Family Tree DNA database, and she saw the name Miller in the results.

The police already had a long list of suspects in the Arizona case, but based on that tip they zeroed in on one with the last name Miller. As with the Golden State Killer case, police confirmed the DNA match by obtaining a fresh DNA sample directly from their subject—the Sacramento office said they got it from something he discarded. (Yes, this is legal, and it can be an item as ordinary as a used drinking straw.)

The authors of the Science paper point out that surname, location, and year of birth are often enough to find an individual in census data.

SNP files can find family trees.

When you download your “raw data” after mailing in a 23andme or Ancestry test, what you get is a list of locations on your genome (called SNPs, for single nucleotide polymorphisms) and two letters indicating your status for each. For example, at a certain SNP you may have inherited an A from one parent and a G from the other.
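As a concrete picture of that raw-data format, here is a minimal parser over a couple of made-up lines in the 23andMe-style layout (comment header, then tab-separated rsid, chromosome, position and genotype); the positions and genotypes shown are illustrative, not real data.

```python
import io

# Made-up lines in the 23andMe-style raw-data layout: comment header,
# then tab-separated rsid, chromosome, position, genotype
# (one letter inherited from each parent).
raw = io.StringIO(
    "# rsid\tchromosome\tposition\tgenotype\n"
    "rs4477212\t1\t82154\tAG\n"
    "rs3094315\t1\t752566\tAA\n"
)

def parse_snps(fh):
    """Map each SNP identifier to the two inherited letters."""
    snps = {}
    for line in fh:
        if line.startswith("#"):
            continue  # skip comment/header lines
        rsid, chrom, pos, genotype = line.split()
        snps[rsid] = genotype
    return snps

snps = parse_snps(raw)
print(snps["rs4477212"])  # prints "AG": an A from one parent, a G from the other
```

A file like this, a few hundred thousand rows long, is exactly what gets uploaded to comparison sites such as GEDmatch.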

Genetic testing sites will have tools to compare your DNA with others in their database, but you can also download your raw data and submit it to other sites, including GEDmatch or Family Tree DNA. (23andme and Ancestry allow you to download your data, but they don’t accept uploads.)

But you don’t have to send a spit sample to one of those companies to get a raw data file. The DNA Doe project describes how they sequenced the whole genome of an unidentified girl from a cold case and used that data to construct a SNP file to upload to GEDmatch. They found someone with enough of the same SNPs that they were probably a close cousin. That cousin also had an account at Ancestry, where they had filled out a family tree with details of their family members. The tree included an entry for a cousin of the same age as the unidentified girl, and whose death date was listed as “missing—presumed dead.” It was her.

Your DNA Is Not Just Yours

When you send in a spit sample, or upload a raw data file, you may only be thinking about your own privacy. I have nothing to hide, you might tell yourself. Who cares if somebody finds out that I have blue eyes or a predisposition to heart disease?

But half of your DNA belongs to your biological mother, and half to your biological father. Another half—cut a different way—belongs to each of your children. On average, you share half your DNA with a sibling, and a quarter with a half-sibling, grandparent, aunt, uncle, niece or nephew. You share about an eighth with a first cousin, and so on. The more of your extended family who are into genealogy, the more likely you are to have your DNA in a public database, already contributed by a relative.
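Those fractions compound. A back-of-the-envelope sketch of how quickly relatives’ uploads cover your genome, using the averages quoted above; treating each relative’s overlap as independent is a deliberate simplification for illustration.

```python
# Average fraction of DNA shared with various relatives, as quoted in the
# text; each additional step of relatedness roughly halves the expectation.
relatedness = {
    "parent/child": 1 / 2,
    "sibling": 1 / 2,
    "half-sibling": 1 / 4,
    "grandparent": 1 / 4,
    "aunt/uncle": 1 / 4,
    "first cousin": 1 / 8,
}

def chance_in_database(relatives_in_db):
    """Rough intuition: the chance that at least one listed relative's
    upload covers a given stretch of your genome, treating the overlaps
    as independent (a simplification)."""
    p_miss = 1.0
    for rel in relatives_in_db:
        p_miss *= 1 - relatedness[rel]
    return 1 - p_miss

print(chance_in_database(["first cousin", "aunt/uncle"]))  # → 0.34375
```

Even two moderately distant relatives in a database put roughly a third of your genome within reach, without you ever sending in a sample.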

In the cases we mention here, the breakthrough came when DNA was matched, through a public database, to a person’s real name. But your DNA is, in a sense, your most identifying information.

For some cases, it may not matter whether your name is attached. Facebook reportedly spoke with a hospital about exchanging anonymized data. They didn’t need names because they had enough information, and good enough algorithms, that they thought they could identify individuals based on everything else. (Facebook doesn’t currently collect DNA information, thank god. There is a public DNA project that signs people up using a Facebook app, but they say they don’t pass the data to Facebook itself.)

And remember that 2013 study about tracking down people’s surnames? They grabbed whole-genome data from a few high-profile people who had made theirs public, and showed that the DNA files were sometimes enough information to track down an individual’s full name. It may be impossible for DNA to be totally anonymous.

Can You Protect Your Privacy While Using DNA Databases?

If you’re very concerned about privacy, you’re best off not using any of these databases. But you can’t control whether your relatives use them, and you may be looking for a long-lost family member and thus want to be in a database while minimizing the risks.

Source: The Golden State Killer Suspect’s DNA Was in a Publicly Available Database, and Yours Might Be Too

‘Forget the Facebook leak’: China is mining data directly from workers’ brains on an industrial scale

The workers wear caps that monitor their brainwaves, data that management then uses to adjust the pace of production and redesign workflows, according to the company.

The company said it could increase the overall efficiency of the workers by manipulating the frequency and length of break times to reduce mental stress.

Hangzhou Zhongheng Electric is just one example of the large-scale application of brain surveillance devices to monitor people’s emotions and other mental activities in the workplace, according to scientists and companies involved in the government-backed projects.

Concealed in regular safety helmets or uniform hats, these lightweight, wireless sensors constantly monitor the wearer’s brainwaves and stream the data to computers that use artificial intelligence algorithms to detect emotional spikes such as depression, anxiety or rage.

The technology is in widespread use around the world but China has applied it on an unprecedented scale in factories, public transport, state-owned companies and the military to increase the competitiveness of its manufacturing industry and to maintain social stability.

It has also raised concerns about the need for regulation to prevent abuses in the workplace.

The technology is also in use in Hangzhou at State Grid Zhejiang Electric Power, where it has boosted company profits by about 2 billion yuan (US$315 million) since it was rolled out in 2014, according to Cheng Jingzhou, an official overseeing the company’s emotional surveillance programme.

“There is no doubt about its effect,” Cheng said.

Source: ‘Forget the Facebook leak’: China is mining data directly from workers’ brains on an industrial scale | South China Morning Post

Chinese government admits collection of deleted WeChat messages

Chinese authorities revealed over the weekend that they have the capability of retrieving deleted messages from the almost universally used WeChat app. The admission doesn’t come as a surprise to many, but it’s rare for this type of questionable data collection tactic to be acknowledged publicly.

As noted by the South China Morning Post, an anti-corruption commission in Hefei province posted Saturday to social media that it has “retrieved a series of deleted WeChat conversations from a subject” as part of an investigation.

The post was deleted Sunday, but not before many had seen it and understood the ramifications. Tencent, which operates the WeChat service used by nearly a billion people (including myself), explained in a statement that “WeChat does not store any chat histories — they are only stored on users’ phones and computers.”

The technical details of this storage were not disclosed, but it seems clear from the commission’s post that the messages are accessible in some way to interested authorities, as many have suspected for years. The app does, of course, comply with other government requirements, such as censoring certain topics.

There are still plenty of questions, the answers to which would help explain user vulnerability: Are messages effectively encrypted at rest? Does retrieval require the user’s password and login, or can it be forced with a “master key” or backdoor? Can users permanently and totally delete messages on the WeChat platform at all?

Source: Chinese government admits collection of deleted WeChat messages | TechCrunch

Revealed: how bookies use AI to keep gamblers hooked | Technology | The Guardian

The gambling industry is increasingly using artificial intelligence to predict consumer habits and personalise promotions to keep gamblers hooked, industry insiders have revealed.

Current and former gambling industry employees have described how people’s betting habits are scrutinised and modelled to manipulate their future behaviour.

“The industry is using AI to profile customers and predict their behaviour in frightening new ways,” said Asif, a digital marketer who previously worked for a gambling company. “Every click is scrutinised in order to optimise profit, not to enhance a user’s experience.”

“I’ve often heard people wonder about how they are targeted so accurately, and it’s no wonder, because it’s all hidden in the small print.”

Publicly, gambling executives boast of increasingly sophisticated advertising keeping people betting, while privately conceding that some people are more susceptible to gambling addiction when bombarded with these types of bespoke ads and incentives.

Gamblers’ every click, page view and transaction is scientifically examined so that ads statistically more likely to work can be pushed through Google, Facebook and other platforms.

[…]

Last August, the Guardian revealed the gambling industry uses third-party companies to harvest people’s data, helping bookmakers and online casinos target people on low incomes and those who have stopped gambling.

Despite condemnation from MPs, experts and campaigners, such practices remain an industry norm.

“You can buy email lists with more than 100,000 people’s emails and phone numbers from data warehouses who regularly sell data to help market gambling promotions,” said Brian. “They say it’s all opted in but people haven’t opted in at all.”

In this way, among others, gambling companies and advertisers create detailed customer profiles including masses of information about their interests, earnings, personal details and credit history.

[…]

Elsewhere, there are plans to geolocate customers in order to identify when they arrive at stadiums, so they can be prompted via texts to bet on the game they are about to watch.

The gambling industry earned £14bn in 2016, £4.5bn of which came from online betting, and it is pumping some of that money into making its products more sophisticated and, in effect, more addictive.

Source: Revealed: how bookies use AI to keep gamblers hooked | Technology | The Guardian

Whois is dead as Europe hands DNS overlord ICANN its arse :(

The Whois public database of domain name registration details is dead.

In a letter [PDF] sent this week to DNS overseer ICANN, Europe’s data protection authorities have effectively killed off the current service, noting that it breaks the law and so will be illegal come 25 May, when GDPR comes into force.

The letter also has harsh words for ICANN’s proposed interim solution, criticizing its vagueness and noting it needs to include explicit wording about what can be done with registrant data, as well as introduce auditing and compliance functions to make sure the data isn’t being abused.

ICANN now has a little over a month to come up with a replacement to the decades-old service that covers millions of domain names and lists the personal contact details of domain registrants, including their name, email and telephone number.

ICANN has already acknowledged it has no chance of doing so: a blog post by the organization in response to the letter warns that, without a special temporary exemption from the law, the system will fracture.

“Unless there is a moratorium, we may no longer be able to give instructions to the contracted parties through our agreements to maintain Whois,” it warns. “Without resolution of these issues, the Whois system will become fragmented.”

We spoke with the president of ICANN’s Global Domains Division, Akram Atallah, and he told us that while there was “general agreement that having everything public is not the right way to go”, he was hopeful that the letter would not result in the Whois service being turned off completely while a replacement was developed.

Source: Whois is dead as Europe hands DNS overlord ICANN its arse • The Register

It’s an important and useful tool – hopefully they will resolve this one way or another.

Orkut Hello: The Man Behind Orkut Says His ‘Hello’ Platform Doesn’t Sell User Data

In 2004, one of the world’s most popular social networks, Orkut, was founded by a former Google employee named Orkut Büyükkökten. Later that year, a Harvard University student named Mark Zuckerberg launched ‘the Facebook’, which over the course of a year became ubiquitous in Ivy League universities and was eventually called Facebook.com.

Orkut was shut down by Google in 2014, but in its heyday, the network had hit 300 million users around the world. Facebook took five years to achieve that feat. At a time when the #DeleteFacebook movement is gaining traction worldwide in light of the Cambridge Analytica scandal, Orkut has made a comeback.

“Hello.com is a spiritual successor of Orkut.com,” Büyükkökten told BloombergQuint. “The most important thing about Orkut was communities, because they brought people together around topics and things that interested them and provided a safe place for people to exchange ideas and share genuine passions and feelings. We have built the entire ‘Hello’ experience around communities and passions and see it as Orkut 2.0.”

Orkut’s comeback comes just as Mark Zuckerberg, founder and CEO of Facebook, has been questioned by U.S. congressmen and senators about the company’s policies and its data collection and usage practices. That followed the Cambridge Analytica data leak, which impacted nearly 87 million users, including Zuckerberg himself.

“People have lost trust in social networks and the main reason is social media services today don’t put the users first. They put advertisers, brands, third parties, shareholders before the users,” Büyükkökten said. “They are also not transparent about practices. The privacy policy and terms of services are more like black boxes. How many users actually read them?”

Büyükkökten said users need to be educated about these things and user consent is imperative in such situations when data is shared by such platforms. “On Hello, we do not share data with third parties. We have our own registration and login and so the data doesn’t follow you anywhere,” he said. “You don’t need to sell user data in order to be profitable or make money.”

Source: Orkut Hello: The Man Behind Orkut Says His ‘Hello’ Platform Doesn’t Sell User Data – Bloomberg Quint

I am very curious what his business model is, then.

Facebook admits: Apps were given users’ permission to go into their inboxes

Facebook has admitted that some apps had access to users’ private messages, thanks to a policy that allowed devs to request mailbox permissions.

The revelation came as current Facebook users found out whether they or their friends had used the “This Is Your Digital Life” app that allowed academic Aleksandr Kogan to collect data on users and their friends.

Users whose friends had been suckered in by the quiz were told that as a result, their public profile, Page likes, birthday and current city were “likely shared” with the app.

So far, so expected. But, the notification went on:

A small number of people who logged into “This Is Your Digital Life” also shared their own News Feed, timeline, posts and messages which may have included posts and messages from you. They may also have shared your hometown.

That’s because, back in 2014 when the app was in use, developers using Facebook’s Graph API to get data off the platform could ask for read_mailbox permission, allowing them access to a person’s inbox.

That was just one of a series of extended permissions granted to devs under v1.0 of the Graph API, which was first introduced in 2010.

Following pressure from privacy activists – but much to the disappointment of developers – Facebook shut that tap off for most permissions in April 2015, although the changelog shows that read_mailbox wasn’t deprecated until 6 October 2015.
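For the record, such extended permissions were requested through the OAuth login dialog’s `scope` parameter. Here is a historical sketch of how a v1.0-era app would have asked for `read_mailbox`; the app ID and redirect URI are made up, and these endpoints were deprecated in 2015.

```python
from urllib.parse import urlencode

APP_ID = "123456"  # hypothetical app ID, for illustration only

# Under Graph API v1.0, a developer asked for extended permissions by
# listing them in the OAuth dialog's scope parameter; read_mailbox
# was one of them. (Deprecated in 2015 - this is a historical sketch.)
login_url = "https://www.facebook.com/dialog/oauth?" + urlencode({
    "client_id": APP_ID,
    "redirect_uri": "https://example.com/callback",
    "scope": "read_mailbox",
})

# Once a user granted it, the app could read that user's inbox
# with the access token it received:
inbox_url = "https://graph.facebook.com/v1.0/me/inbox"

print(login_url)
```

The consent screen did list the permission, which is what Facebook’s statement below leans on: the people who granted it did, technically, opt in.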

Facebook confirmed to The Register that this access had been requested by the app and that a small number of people had granted it permission.

“In 2014, Facebook’s platform policy allowed developers to request mailbox permissions but only if the person explicitly gave consent for this to happen,” a spokesborg told us.

“According to our records only a very small number of people explicitly opted into sharing this information. The feature was turned off in 2015.”

Source: Facebook admits: Apps were given users’ permission to go into their inboxes • The Register

How to Check if Cambridge Analytica Had Your Facebook Data

Facebook launched a tool yesterday that you can use to find out whether you or your friends shared information with Cambridge Analytica, the Trump-affiliated company that harvested data from a Facebook app to support the then-candidate’s efforts in the 2016 presidential election.

If you were affected directly—and you have plenty of company, if so—you should have already received a little notification from Facebook. If you missed that in your News Feed (or you’ve already sworn off Facebook, but want to check and see if your information was compromised), Facebook also has a handy little Cambridge Analytica tool you can use.

The problem? While the tool can tell you if you or your friends shared your information via the spammy “This is Your Digital Life” app, it won’t tell you who among your friends was foolish enough to give up your information to a third party. You have lost your ability to publicly shame them, yell at them, or go over to where they live (or fire up a remote desktop session) to teach them how to … not do that ever again.

So, what can you do now?

Even though your past Facebook data might already be out there in the digital ether somewhere, you can now start locking down your information a bit more. Once you’re done checking the Cambridge Analytica tool, head to Facebook’s Settings page and click on Apps and Websites. Up until recently, Facebook had a setting (under “Apps Others Use”) that you could use to restrict the information that your friends could share about you with apps they were using. Now, you’ll see this message instead:


“These outdated settings have been removed because they applied to an older version of our platform that no longer exists.

To see or change the info you currently share with apps and websites, review the ones listed above, under ‘Logged in with Facebook.’”

Sounds ominous, right? Well, according to Facebook, these settings haven’t really done much of anything for years, anyway. As a Facebook spokesperson recently told Wired:

“These controls were built before we made significant changes to how developers build apps on Facebook. At the time, the Apps Others Use functionality allowed people to control what information could be shared to developers. We changed our systems years ago so that people could not share friends’ information with developers unless each friend also had explicitly granted permission to the developer.”

Instead, take a little time to review (again) the apps you’ve allowed to access your Facebook information. If you’re not using the app anymore, or if it sounds a little fishy, remove it—heck, remove as many apps as you can in one go.

Source: How to Check if Cambridge Analytica Had Your Facebook Data

CubeYou: Cambridge-like app collected data on millions from Facebook

Facebook is suspending a data analytics firm called CubeYou from the platform after CNBC notified the company that CubeYou was collecting information about users through quizzes.

CubeYou misleadingly labeled its quizzes “for non-profit academic research,” then shared user information with marketers. The scenario is eerily similar to how Cambridge Analytica received unauthorized access to data from as many as 87 million Facebook user accounts to target political marketing.

CubeYou, whose CEO denies any deception, sold data that had been collected by researchers working with the Psychometrics Lab at Cambridge University, similar to how Cambridge Analytica used information it obtained from other professors at the school for political marketing.

The CubeYou discovery suggests that collecting data from quizzes and using it for marketing purposes was far from an isolated incident. Moreover, the fact that CubeYou was able to mislabel the purpose of the quizzes — and that Facebook did nothing to stop it until CNBC pointed out the problem — suggests the platform has little control over this activity.

[…]

CubeYou boasts on its website that it uses census data and various web and social apps on Facebook and Twitter to collect personal information. CubeYou then contracts with advertising agencies that want to target certain types of Facebook users for ad campaigns.

CubeYou’s site says it has access to personally identifiable information (PII) such as first names, last names, emails, phone numbers, IP addresses, mobile IDs and browser fingerprints.

On a cached version of its website from March 19, it also said it keeps age, gender, location, work and education, and family and relationship information. It also has likes, follows, shares, posts, likes to posts, comments to posts, check-ins and mentions of brands/celebrities in a post. Interactions with companies are tracked back to 2012 and are updated weekly, the site said.

Source: CubeYou Cambridge-like app collected data on millions from Facebook

Grindr’s API Surrendered Location Data to a Third-Party Website—Even After Users Opted Out

A website that allowed users of Grindr’s gay-dating app to see who blocked them on the service says that, by using the company’s API, it was able to view unread messages, email addresses, deleted photos, and—perhaps most troubling—location data, according to a report published Wednesday.

The website, C*ckblocked, boasts of being the “first and only way to see who blocked you on Grindr.” The website’s owner, Trever Faden, told NBC that, by using Grindr’s API, he was able to access a wealth of personal information, including the location data of users—even for those who had opted to hide their locations.

“One could, without too much difficulty or even a huge amount of technological skill, easily pinpoint a user’s exact location,” Faden told NBC. But before he could access this information, Grindr users first had to supply C*ckblocked with their usernames and passwords, meaning that they voluntarily surrendered access to their accounts.

Grindr said that, once notified by Faden, it moved quickly to resolve the issue. The API that allowed C*ckblocked to function was patched on March 23rd, according to the website.

Source: Grindr’s API Surrendered Location Data to a Third-Party Website—Even After Users Opted Out

Mozilla launches Facebook container extension

This extension helps you control more of your web activity from Facebook by isolating your identity into a separate container. This makes it harder for Facebook to track your activity on other websites via third-party cookies.

Rather than stop using a service you find valuable and miss out on those adorable photos of your nephew, we think you should have tools to limit what data others can collect about you. That includes us: Mozilla does not collect data from your use of the Facebook Container extension. We only know the number of times the extension is installed or removed.

When you install this extension it will delete your Facebook cookies and log you out of Facebook. The next time you visit Facebook it will open in a new blue-colored browser tab (aka “container tab”). In that tab you can login to Facebook and use it like you normally would. If you click on a non-Facebook link or navigate to a non-Facebook website in the URL bar, these pages will load outside of the container.
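The isolation Mozilla describes is conceptually a per-container cookie store: cookies set inside the Facebook container never leak into the default one. A toy sketch of that idea in Python (an illustration of the concept only, not Firefox's actual implementation):

```python
from http.cookiejar import Cookie, CookieJar
import urllib.request

class Container:
    """A 'container tab' modelled as an isolated cookie store."""
    def __init__(self, name):
        self.name = name
        self.jar = CookieJar()  # cookies here are invisible to other containers
        self.opener = urllib.request.build_opener(
            urllib.request.HTTPCookieProcessor(self.jar)
        )

def make_cookie(name, value, domain):
    # Minimal http.cookiejar.Cookie; most fields are defaults.
    return Cookie(0, name, value, None, False, domain, True, False,
                  "/", True, False, None, False, None, None, {})

facebook = Container("facebook")
default = Container("default")

# A tracking cookie dropped while "browsing" in the Facebook container...
facebook.jar.set_cookie(make_cookie("fr", "tracker123", "facebook.com"))

# ...is simply absent from the default container's store.
print(len(facebook.jar), len(default.jar))  # 1 0
```

Requests made through `facebook.opener` would carry the tracking cookie; requests through `default.opener` would not, which is the tracking barrier the extension puts up.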

Source: Facebook Container Extension: Take control of how you’re being tracked | The Firefox Frontier

Wylie: It’s possible that the Facebook app is listening to you

During an appearance before a committee of U.K. lawmakers today, Cambridge Analytica whistleblower Christopher Wylie breathed new life into longstanding rumors that the Facebook app listens to its users in order to target advertisements. Damian Collins, a member of parliament who chaired the committee, asked whether the Facebook app might listen to what users are discussing and use it to prioritize certain ads.

But, Wylie said in a meandering reply, it’s possible that Facebook and other smartphone apps are listening in for reasons other than speech recognition. Specifically, he said, they might be trying to ascertain what type of environment a user is in in order to “improve the contextual value of the advertising itself.”

“There’s audio that could be useful just in terms of, are you in an office environment, are you outside, are you watching TV, what are you doing right now?” Wylie said, without elaborating on how that information could help target ads.

Facebook has long denied that its app analyzes audio in order to customize ads. But users have often reported mentioning a product that they’ve never expressed an interest in online — and then being inundated with online ads for it. Reddit users, in particular, spend time collecting what they purport to be evidence that Facebook is listening to users in a particular way, such as “micro-samples” of a few seconds rather than full-on continuous natural language processing.

Source: Wylie: It’s possible that the Facebook app is listening to you | The Outline

Dutch government pretends to think about referendum result against big brother unlimited surveillance, ignores it completely.

Basically, not only will the law allow a wide range of agencies to tap your internet and phone traffic and store it without any judicial procedures, checks, or balances, it will also allow those agencies to share the data with whomever they want, including foreign agencies. The Dutch people voted against these far-reaching breaches of privacy in a referendum, so the government said it had thought about the result and would amend the law in six tiny places which completely miss the point and the problems people have with their privacy being destroyed.

Source: Kabinet scherpt Wet op de inlichtingen- en veiligheidsdiensten 2017 aan | Nieuwsbericht | Defensie.nl

Facebook Blames a ‘Bug’ for Not Deleting Your Seemingly Deleted Videos

Did you ever record a video on Facebook to post directly to your friend’s wall, only to discard the take and film a new version? You may have thought those embarrassing draft versions were deleted, but Facebook kept a copy. The company is blaming it on a “bug” and swears that it’s going to delete those discarded videos now. They pinkie promise this time.

Last week, New York’s Select All broke the story that the social network was keeping the seemingly deleted old videos. The continued existence of the draft videos was discovered when several users downloaded their personal Facebook archives—and found numerous videos they never published. Today, Select All got a statement from Facebook blaming the whole thing on a “bug.” From Facebook via New York:

We investigated a report that some people were seeing their old draft videos when they accessed their information from our Download Your Information tool. We discovered a bug that prevented draft videos from being deleted. We are deleting them and apologize for the inconvenience. We appreciate New York Magazine for bringing the issue to our attention.

It was revealed last month that the data-harvesting firm (and apparent bribery consultants) Cambridge Analytica had acquired the information of about 50 million Facebook users and abused that data to help President Trump get elected. Specifically, the company was exploiting the anger of voters through highly-targeted advertising. And in the wake of the ensuing scandal, people have been learning all kinds of crazy things about Facebook.

Facebook users have been downloading some of the data that the social media behemoth keeps on them and it’s not pretty. For example, Facebook has kept detailed call logs from users with Android phones. The company says that Android users had to opt-in for the feature, but that’s a bullshit cop-out when you take a look at what the screen for “opting in” actually looks like.

Source: Facebook Blames a ‘Bug’ for Not Deleting Your Seemingly Deleted Videos

‘Big Brother’ in India Requires Fingerprint Scans for Food, Phones and Finances

NEW DELHI — Seeking to build an identification system of unprecedented scope, India is scanning the fingerprints, eyes and faces of its 1.3 billion residents and connecting the data to everything from welfare benefits to mobile phones.

Civil libertarians are horrified, viewing the program, called Aadhaar, as Orwell’s Big Brother brought to life. To the government, it’s more like “big brother,” a term of endearment used by many Indians to address a stranger when asking for help.

For other countries, the technology could provide a model for how to track their residents. And for India’s top court, the ID system presents unique legal issues that will define what the constitutional right to privacy means in the digital age.

To Adita Jha, Aadhaar was simply a hassle. The 30-year-old environmental consultant in Delhi waited in line three times to sit in front of a computer that photographed her face, captured her fingerprints and snapped images of her irises. Three times, the data failed to upload. The fourth attempt finally worked, and she has now been added to the 1.1 billion Indians already included in the program.

[…]

The poor must scan their fingerprints at the ration shop to get their government allocations of rice. Retirees must do the same to get their pensions. Middle-school students cannot enter the water department’s annual painting contest until they submit their identification.

In some cities, newborns cannot leave the hospital until their parents sign them up. Even leprosy patients, whose illness damages their fingers and eyes, have been told they must pass fingerprint or iris scans to get their benefits.

The Modi government has also ordered Indians to link their IDs to their cellphone and bank accounts. States have added their own twists, like using the data to map where people live. Some employers use the ID for background checks on job applicants.

[…]

Although the system’s core fingerprint, iris and face database appears to have remained secure, at least 210 government websites have leaked other personal data — such as name, birth date, address, parents’ names, bank account number and Aadhaar number — for millions of Indians. Some of that data is still available with a simple Google search.

As Aadhaar has become mandatory for government benefits, parts of rural India have struggled with the internet connections necessary to make Aadhaar work. After a lifetime of manual labor, many Indians also have no readable prints, making authentication difficult. One recent study found that 20 percent of the households in Jharkhand state had failed to get their food rations under Aadhaar-based verification — five times the failure rate of ration cards.

Source: ‘Big Brother’ in India Requires Fingerprint Scans for Food, Phones and Finances – The New York Times

Grindr: Yeah, we shared your HIV status info with other companies – but we didn’t charge them! (oh and your GPS coords)

Hookup fixer Grindr is on the defensive after it shared sensitive information, including HIV status and physical location, of its app’s users with outside organizations.

The quickie booking facilitator on Monday admitted it passed, via HTTPS, people’s public profiles to third-party analytics companies to process on its behalf. That means, yes, the information was handed over in bulk, but, hey, at least it didn’t sell it!

“Grindr has never, nor will we ever sell personally identifiable user information – especially information regarding HIV status or last test date – to third parties or advertisers,” CTO Scott Chen said in a statement.

Rather than apologize, Grindr said its punters should have known better than to give it any details they didn’t want passed around to other companies. On the one hand, the data was scraped from the application’s public profiles, so, well, maybe people ought to calm down. It was all public anyway. On the other hand, perhaps people didn’t expect it to be handed over for analysis en masse.

“It’s important to remember that Grindr is a public forum,” Chen said. “We give users the option to post information about themselves including HIV status and last test date, and we make it clear in our privacy policy that if you choose to include this information in your profile, the information will also become public.”

This statement is in response to last week’s disclosure by security researchers on the ways the Grindr app shares user information with third-party advertisers and partners. Among the information found to be passed around by Grindr was the user’s HIV status, something Grindr allows members to list in their profiles.

The HIV status, along with last test date, sexual position preference, and GPS location were among the pieces of info Grindr shared via encrypted network connections with analytics companies Localytics and Apptimize.

The revelation drew sharp criticism of Grindr, with many slamming the upstart for sharing what they consider to be highly sensitive personal information with third parties along with GPS coordinates.

Source: Grindr: Yeah, we shared your HIV status info with other companies – but we didn’t charge them! • The Register