Google Removes ‘Don’t Be Evil’ Clause From Its Code of Conduct

Google’s unofficial motto has long been the simple phrase “don’t be evil.” But that’s over, according to the code of conduct that Google distributes to its employees. The phrase was removed sometime in late April or early May, archives hosted by the Wayback Machine show.

“Don’t be evil” has been part of the company’s corporate code of conduct since 2000. When Google was reorganized under a new parent company, Alphabet, in 2015, Alphabet assumed a slightly adjusted version of the motto, “do the right thing.” However, Google retained its original “don’t be evil” language until the past several weeks. The phrase has been deeply incorporated into Google’s company culture—so much so that a version of the phrase has served as the Wi-Fi password on the shuttles that Google uses to ferry its employees to its Mountain View headquarters, sources told Gizmodo.

[…]

Despite this significant change, Google’s code of conduct says it has not been updated since April 5, 2018.

The updated version of Google’s code of conduct still retains one reference to the company’s unofficial motto—the final line of the document is still: “And remember… don’t be evil, and if you see something that you think isn’t right – speak up!”

Source: Google Removes ‘Don’t Be Evil’ Clause From Its Code of Conduct

Tracking Firm LocationSmart Leaked Location Data for Customers of All Major U.S. Mobile Carriers Without Consent in Real Time Via Its Web Site

LocationSmart, a U.S.-based company that acts as an aggregator of real-time data about the precise location of mobile phone devices, has been leaking this information to anyone via a buggy component of its Web site — without the need for any password or other form of authentication or authorization — KrebsOnSecurity has learned. The company took the vulnerable service offline early this afternoon after being contacted by KrebsOnSecurity, which verified that it could be used to reveal the location of any AT&T, Sprint, T-Mobile or Verizon phone in the United States to an accuracy of within a few hundred yards.

Source: Tracking Firm LocationSmart Leaked Location Data for Customers of All Major U.S. Mobile Carriers Without Consent in Real Time Via Its Web Site — Krebs on Security

Scarily, this means it can still be used to track anyone if you’re willing to pay for the service.
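Reports at the time attributed the leak to a free try-before-you-buy demo page whose backend reportedly enforced its consent step only for the default response format. A minimal Python sketch of that class of bug; the function, field names and consent logic here are illustrative, not LocationSmart’s actual API:

```python
# Hypothetical reconstruction of the reported bug class: the demo required a
# consent step for its default (XML) responses, but the same backend answered
# other output formats without re-checking consent. All names and data below
# are invented for illustration.
def handle_location_request(phone_number, fmt, consent_store, locations):
    """Return a subscriber's location, enforcing consent only on the XML path."""
    if fmt == "xml":
        if not consent_store.get(phone_number):
            return {"error": "consent required"}
        return {"format": "xml", "location": locations[phone_number]}
    # BUG: alternate formats skip the consent check entirely
    return {"format": fmt, "location": locations[phone_number]}

locations = {"+15551234567": (38.89, -77.03)}
consent = {}  # nobody has consented

print(handle_location_request("+15551234567", "xml", consent, locations))
# the XML path refuses, but asking for JSON leaks the location anyway:
print(handle_location_request("+15551234567", "json", consent, locations))
```

The lesson generalizes: authorization checks have to live in the backend handler shared by every output path, not in one front-end flow.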

UK Watchdog Calls for Face Recognition Ban Over 90 Percent False-Positive Rate

As face recognition in public places becomes more commonplace, Big Brother Watch is especially concerned with false identification. In May, South Wales Police revealed that its face-recognition software had erroneously flagged thousands of attendees of a soccer game as a match for criminals; 92 percent of the matches were wrong. In a statement to the BBC, Matt Jukes, the chief constable in South Wales, said “we need to use technology when we’ve got tens of thousands of people in those crowds to protect everybody, and we are getting some great results from that.”

If someone is misidentified as a criminal or flagged, police may engage and ask for further identification. Big Brother Watch argues that this amounts to “hidden identity checks” that require people to “prove their identity and thus their innocence.” A total of 110 people were stopped at the event after being flagged, leading to 15 arrests.

Simply walking through a crowd could lead to an identity check, but it doesn’t end there. South Wales reported more than 2,400 “matches” between May 2017 and March 2018, but ultimately made only 15 connecting arrests. The thousands of photos taken, however, are still stored in the system, with the overwhelming majority of people having no idea they even had their photo taken.

Source: UK Watchdog Calls for Face Recognition Ban Over 90 Percent False-Positive Rate

Facebook admits it does track non-users, for their own good

Facebook’s apology-and-explanation machine grinds on, with The Social Network™ posting detail on one of its most controversial activities – how it tracks people who don’t use Facebook.

The company explained that the post is a partial response to questions CEO Mark Zuckerberg was unable to answer during his Senate and House hearings.

It’s no real surprise that someone using their Facebook Login to sign in to other sites is tracked, but the post by product management director David Baser goes into (a little) detail on other tracking activities – some of which have been known to the outside world for some time, occasionally denied by Facebook, and apparently mysteries only to Zuck.

When non-Facebook sites add a “Like” button (a social plugin, in Baser’s terminology), visitors to those sites are tracked: Facebook gets their IP address, browser and OS fingerprint, and visited site.

If that sounds a bit like the datr cookie dating from 2011, you wouldn’t be far wrong.

Facebook denied non-user tracking until 2015, at which time it emphasised that it was only gathering non-users’ interactions with Facebook users. That explanation didn’t satisfy everyone, which is why, earlier this year, The Social Network™ was told to quit tracking Belgians who haven’t signed up.

Source: Facebook admits it does track non-users, for their own good • The Register
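To make the mechanics concrete: any embedded third-party widget works this way, because the visitor’s browser fetches the button directly from the widget host’s servers, and that request inherently carries the three items the article lists. A minimal sketch of what a single button fetch hands over; the field names are generic, not Facebook’s actual logging schema:

```python
# What the host of an embedded "Like" button can record from one HTTP request:
# the network-level client IP, the User-Agent header (browser + OS, a coarse
# fingerprint), and the Referer header naming the page being read.
def log_plugin_request(client_ip, headers):
    """Build a tracking record from a single plugin fetch."""
    return {
        "ip": client_ip,                              # unavoidable at the network level
        "browser_os": headers.get("User-Agent", ""),  # browser and OS fingerprint
        "visited_page": headers.get("Referer", ""),   # the embedding site
    }

record = log_plugin_request(
    "203.0.113.7",
    {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/60.0",
     "Referer": "https://example-news-site.test/article/123"},
)
print(record["visited_page"])  # the tracker learns which article you read
```

No login, cookie or Facebook account is needed for any of this; it falls out of how the web delivers embedded content.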

The Golden State Killer Suspect’s DNA Was in a Publicly Available Database, and Yours Might Be Too

Plenty of people have voluntarily uploaded their DNA to GEDmatch and other databases, often with real names and contact information. It’s what you do if you’re an adopted kid looking for a long-lost parent, or a genealogy buff curious about whether you have any cousins still living in the old country. GEDmatch requires that you make your DNA data public if you want to use their comparison tools, although you don’t have to attach your real name. And they’re not the only database that has helped law enforcement track people down without their knowledge.

How DNA Databases Help Track People Down

We don’t know exactly what samples or databases were used in the Golden State Killer’s case; the Sacramento County District Attorney’s office gave very little information and hasn’t confirmed any further details. But here are some things that are possible.

Y chromosome data can lead to a good guess at an unknown person’s last name.

Cis men typically have an X and a Y chromosome, and cis women two X’s. That means the Y chromosome is passed down from genetic males to their offspring—for example, from father to son. Since last names are also often handed down the same way, in many families you’ll share a surname with anybody who shares your Y chromosome.

A 2013 Science paper described how a small amount of Y chromosome data should be enough to identify surnames for an estimated 12 percent of white males in the US. (That method would find the wrong surname for 5 percent, and the rest would come back as unknown.) As more people upload their information to public databases, the authors warned, the success rate will only increase.

This is exactly the technique that genealogical consultant Colleen Fitzpatrick used to narrow down a pool of suspects in an Arizona cold case. She seems to have used short tandem repeat (STR) data from the suspect’s Y chromosome to search the Family Tree DNA database, and she saw the name Miller in the results.

The police already had a long list of suspects in the Arizona case, but based on that tip they zeroed in on one with the last name Miller. As with the Golden State Killer case, police confirmed the DNA match by obtaining a fresh DNA sample directly from their subject—the Sacramento office said they got it from something he discarded. (Yes, this is legal, and it can be an item as ordinary as a used drinking straw.)

The authors of the Science paper point out that surname, location, and year of birth are often enough to find an individual in census data.
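The matching idea behind both the Science paper and Fitzpatrick’s search can be sketched simply: a Y-STR profile is a set of repeat counts at named markers, and a surname database returns entries whose counts are close to the query’s. The marker names below are real Y-STR loci, but the profiles, database and threshold are invented for illustration:

```python
# Toy Y-STR surname lookup. A real database holds thousands of profiles and
# uses mutation-aware distance models; this just counts mismatched markers.
def str_distance(a, b):
    """Number of markers where two Y-STR profiles disagree."""
    return sum(1 for m in a if a[m] != b.get(m))

database = {
    "Miller": {"DYS19": 14, "DYS390": 24, "DYS391": 11, "DYS393": 13},
    "Garcia": {"DYS19": 13, "DYS390": 21, "DYS391": 10, "DYS393": 14},
}
query = {"DYS19": 14, "DYS390": 24, "DYS391": 11, "DYS393": 13}

# allow one mismatch to account for a recent mutation
matches = [name for name, prof in database.items() if str_distance(query, prof) <= 1]
print(matches)  # → ['Miller']
```

The output isn’t an identification, just a surname hypothesis, which is exactly how it was used in the Arizona case: to prioritize one name on an existing suspect list.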

SNP files can find family trees.

When you download your “raw data” after mailing in a 23andMe or Ancestry test, what you get is a list of locations on your genome (called SNPs, for single nucleotide polymorphisms) and two letters indicating your status for each. For example, at a certain SNP you may have inherited an A from one parent and a G from the other.
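A hedged sketch of what such a raw-data file looks like and how it might be read. The tab-separated rsid/chromosome/position/genotype layout matches the general shape of these consumer exports, but the rsids and values below are made up:

```python
# A fabricated three-SNP excerpt in the general shape of a consumer DNA export.
raw = """\
# rsid\tchromosome\tposition\tgenotype
rs0000001\t1\t752566\tAG
rs0000002\t1\t798959\tGG
rs0000003\t2\t101562\tAA
"""

def parse_raw(text):
    """Map each SNP id to its two-letter genotype, skipping comment lines."""
    snps = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        rsid, chrom, pos, genotype = line.split("\t")
        snps[rsid] = genotype  # two letters: one allele from each parent
    return snps

snps = parse_raw(raw)
print(snps["rs0000001"])  # → AG: an A from one parent, a G from the other
```

Because the format is so simple, any site that accepts uploads can compare your file against its whole database, which is what makes the cross-site matching described below possible.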

Genetic testing sites will have tools to compare your DNA with others in their database, but you can also download your raw data and submit it to other sites, including GEDmatch or Family Tree DNA. (23andMe and Ancestry allow you to download your data, but they don’t accept uploads.)

But you don’t have to send a spit sample to one of those companies to get a raw data file. The DNA Doe Project describes how they sequenced the whole genome of an unidentified girl from a cold case and used that data to construct a SNP file to upload to GEDmatch. They found someone with enough of the same SNPs that they were probably a close cousin. That cousin also had an account at Ancestry, where they had filled out a family tree with details of their family members. The tree included an entry for a cousin of the same age as the unidentified girl, and whose death date was listed as “missing—presumed dead.” It was her.

Your DNA Is Not Just Yours

When you send in a spit sample, or upload a raw data file, you may only be thinking about your own privacy. I have nothing to hide, you might tell yourself. Who cares if somebody finds out that I have blue eyes or a predisposition to heart disease?

But half of your DNA belongs to your biological mother, and half to your biological father. Another half—cut a different way—belongs to each of your children. On average, you share half your DNA with a sibling, and a quarter with a half-sibling, grandparent, aunt, uncle, niece or nephew. You share about an eighth with a first cousin, and so on. The more of your extended family who are into genealogy, the more likely you are to have your DNA in a public database, already contributed by a relative.
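The arithmetic above is just repeated halving: expected autosomal sharing drops by half with each degree of relatedness. A quick sketch, using the standard genetic-genealogy degree convention:

```python
# Expected shared autosomal DNA halves with each degree of relatedness.
def expected_shared_fraction(degree):
    """Expected fraction of autosomal DNA shared at a given degree."""
    return 0.5 ** degree

relationships = {
    "parent, child, sibling": 1,                 # ~1/2
    "grandparent, aunt/uncle, half-sibling": 2,  # ~1/4
    "great-grandparent, first cousin": 3,        # ~1/8
}
for rel, degree in relationships.items():
    print(f"{rel}: {expected_shared_fraction(degree):.3f}")
```

These are averages (recombination makes actual sharing vary), but they are what matching sites use to label a hit as a probable sibling, cousin, and so on.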

In the cases we mention here, the breakthrough came when DNA was matched, through a public database, to a person’s real name. But your DNA is, in a sense, your most identifying information.

For some cases, it may not matter whether your name is attached. Facebook reportedly spoke with a hospital about exchanging anonymized data. They didn’t need names because they had enough information, and good enough algorithms, that they thought they could identify individuals based on everything else. (Facebook doesn’t currently collect DNA information, thank god. There is a public DNA project that signs people up using a Facebook app, but they say they don’t pass the data to Facebook itself.)

And remember that 2013 study about tracking down people’s surnames? They grabbed whole-genome data from a few high-profile people who had made theirs public, and showed that the DNA files were sometimes enough information to track down an individual’s full name. It may be impossible for DNA to be totally anonymous.

Can You Protect Your Privacy While Using DNA Databases?

If you’re very concerned about privacy, you’re best off not using any of these databases. But you can’t control whether your relatives use them, and you may be looking for a long-lost family member and thus want to be in a database while minimizing the risks.

Source: The Golden State Killer Suspect’s DNA Was in a Publicly Available Database, and Yours Might Be Too

‘Forget the Facebook leak’: China is mining data directly from workers’ brains on an industrial scale

the workers wear caps to monitor their brainwaves, data that management then uses to adjust the pace of production and redesign workflows, according to the company.

The company said it could increase the overall efficiency of the workers by manipulating the frequency and length of break times to reduce mental stress.

Hangzhou Zhongheng Electric is just one example of the large-scale application of brain surveillance devices to monitor people’s emotions and other mental activities in the workplace, according to scientists and companies involved in the government-backed projects.

Concealed in regular safety helmets or uniform hats, these lightweight, wireless sensors constantly monitor the wearer’s brainwaves and stream the data to computers that use artificial intelligence algorithms to detect emotional spikes such as depression, anxiety or rage.

The technology is in widespread use around the world but China has applied it on an unprecedented scale in factories, public transport, state-owned companies and the military to increase the competitiveness of its manufacturing industry and to maintain social stability.

It has also raised concerns about the need for regulation to prevent abuses in the workplace.

The technology is also in use in Hangzhou at State Grid Zhejiang Electric Power, where it has boosted company profits by about 2 billion yuan (US$315 million) since it was rolled out in 2014, according to Cheng Jingzhou, an official overseeing the company’s emotional surveillance programme.

“There is no doubt about its effect,” Cheng said.

Source: ‘Forget the Facebook leak’: China is mining data directly from workers’ brains on an industrial scale | South China Morning Post

Chinese government admits collection of deleted WeChat messages

Chinese authorities revealed over the weekend that they have the capability of retrieving deleted messages from the almost universally used WeChat app. The admission doesn’t come as a surprise to many, but it’s rare for this type of questionable data collection tactic to be acknowledged publicly.

As noted by the South China Morning Post, an anti-corruption commission in the city of Hefei, Anhui province, posted Saturday to social media that it has “retrieved a series of deleted WeChat conversations from a subject” as part of an investigation.

The post was deleted Sunday, but not before many had seen it and understood the ramifications. Tencent, which operates the WeChat service used by nearly a billion people (including myself), explained in a statement that “WeChat does not store any chat histories — they are only stored on users’ phones and computers.”

The technical details of this storage were not disclosed, but it seems clear from the commission’s post that the messages are accessible in some way to interested authorities, as many have suspected for years. The app does, of course, comply with other government requirements, such as censoring certain topics.

There are still plenty of questions, the answers to which would help explain user vulnerability: Are messages effectively encrypted at rest? Does retrieval require the user’s password and login, or can it be forced with a “master key” or backdoor? Can users permanently and totally delete messages on the WeChat platform at all?

Source: Chinese government admits collection of deleted WeChat messages | TechCrunch

Revealed: how bookies use AI to keep gamblers hooked | Technology | The Guardian

The gambling industry is increasingly using artificial intelligence to predict consumer habits and personalise promotions to keep gamblers hooked, industry insiders have revealed.

Current and former gambling industry employees have described how people’s betting habits are scrutinised and modelled to manipulate their future behaviour.

“The industry is using AI to profile customers and predict their behaviour in frightening new ways,” said Asif, a digital marketer who previously worked for a gambling company. “Every click is scrutinised in order to optimise profit, not to enhance a user’s experience.”

“I’ve often heard people wonder about how they are targeted so accurately and it’s no wonder because it’s all hidden in the small print.”

Publicly, gambling executives boast of increasingly sophisticated advertising keeping people betting, while privately conceding that some are more susceptible to gambling addiction when bombarded with these types of bespoke ads and incentives.

Gamblers’ every click, page view and transaction is scientifically examined so that ads statistically more likely to work can be pushed through Google, Facebook and other platforms.

[…]

Last August, the Guardian revealed the gambling industry uses third-party companies to harvest people’s data, helping bookmakers and online casinos target people on low incomes and those who have stopped gambling.

Despite condemnation from MPs, experts and campaigners, such practices remain an industry norm.

“You can buy email lists with more than 100,000 people’s emails and phone numbers from data warehouses who regularly sell data to help market gambling promotions,” said Brian. “They say it’s all opted in but people haven’t opted in at all.”

In this way, among others, gambling companies and advertisers create detailed customer profiles including masses of information about their interests, earnings, personal details and credit history.

[…]

Elsewhere, there are plans to geolocate customers in order to identify when they arrive at stadiums so they can be prompted via texts to bet on the game they are about to watch.

The gambling industry earned £14bn in 2016, £4.5bn of which came from online betting, and it is pumping some of that money into making its products more sophisticated and, in effect, addictive.

Source: Revealed: how bookies use AI to keep gamblers hooked | Technology | The Guardian

Whois is dead as Europe hands DNS overlord ICANN its arse :(

The Whois public database of domain name registration details is dead.

In a letter [PDF] sent this week to DNS overseer ICANN, Europe’s data protection authorities have effectively killed off the current service, noting that it breaks the law and so will be illegal come 25 May, when GDPR comes into force.

The letter also has harsh words for ICANN’s proposed interim solution, criticizing its vagueness and noting it needs to include explicit wording about what can be done with registrant data, as well as introduce auditing and compliance functions to make sure the data isn’t being abused.

ICANN now has a little over a month to come up with a replacement to the decades-old service that covers millions of domain names and lists the personal contact details of domain registrants, including their name, email and telephone number.

ICANN has already acknowledged it has no chance of doing so: a blog post by the company in response to the letter warns that without being granted a special temporary exemption from the law, the system will fracture.

“Unless there is a moratorium, we may no longer be able to give instructions to the contracted parties through our agreements to maintain Whois,” it warns. “Without resolution of these issues, the Whois system will become fragmented.”

We spoke with the president of ICANN’s Global Domains Division, Akram Atallah, and he told us that while there was “general agreement that having everything public is not the right way to go”, he was hopeful that the letter would not result in the Whois service being turned off completely while a replacement was developed.

Source: Whois is dead as Europe hands DNS overlord ICANN its arse • The Register

It’s an important and useful tool – hopefully they will resolve this one way or another.
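For context, the data at stake is easy to picture: a Whois record is plain “Key: Value” text that, for many registrars, has historically included the registrant’s name, email and phone number, exactly the personal data GDPR now protects. A small sketch with an invented record:

```python
# An invented Whois record in the traditional "Key: Value" plain-text format.
sample_record = """\
Domain Name: EXAMPLE-DOMAIN.COM
Registrant Name: Jane Doe
Registrant Email: jane@example-domain.com
Registrant Phone: +1.5551234567
"""

def parse_whois(text):
    """Split a plain-text Whois reply into a field dictionary."""
    fields = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

record = parse_whois(sample_record)
print(record["Registrant Email"])  # the kind of PII Whois exposed to anyone
```

Since the protocol itself is an unauthenticated plain-text query (TCP port 43, per RFC 3912), anyone on the internet could harvest these fields in bulk, which is the nub of the GDPR conflict.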

Orkut Hello: The Man Behind Orkut Says His ‘Hello’ Platform Doesn’t Sell User Data

In 2004, one of the world’s most popular social networks, Orkut, was founded by a former Google employee named Orkut Büyükkökten. Later that year, a Harvard University student named Mark Zuckerberg launched ‘the Facebook’, which over the course of a year became ubiquitous in Ivy League universities and was eventually called Facebook.com.

Orkut was shut down by Google in 2014, but in its heyday the network had hit 300 million users around the world, a feat that took Facebook five years to achieve. At a time when the #DeleteFacebook movement is gaining traction worldwide in light of the Cambridge Analytica scandal, Orkut has made a comeback.

“Hello.com is a spiritual successor of Orkut.com,” Büyükkökten told BloombergQuint. “The most important thing about Orkut was communities, because they brought people together around topics and things that interested them and provided a safe place for people to exchange ideas and share genuine passions and feelings. We have built the entire ‘Hello’ experience around communities and passions and see it as Orkut 2.0.”

Orkut’s comeback comes as Mark Zuckerberg, founder and CEO of Facebook, has been questioned by U.S. congressmen and senators about the company’s policies and its data collection and usage practices. That came after the Cambridge Analytica data leak, which impacted nearly 87 million users, including Zuckerberg himself.

“People have lost trust in social networks and the main reason is social media services today don’t put the users first. They put advertisers, brands, third parties, shareholders before the users,” Büyükkökten said. “They are also not transparent about practices. The privacy policy and terms of services are more like black boxes. How many users actually read them?”

Büyükkökten said users need to be educated about these things and user consent is imperative in such situations when data is shared by such platforms. “On Hello, we do not share data with third parties. We have our own registration and login and so the data doesn’t follow you anywhere,” he said. “You don’t need to sell user data in order to be profitable or make money.”

Source: Orkut Hello: The Man Behind Orkut Says His ‘Hello’ Platform Doesn’t Sell User Data – Bloomberg Quint

I am very curious what his business model is then

Facebook admits: Apps were given users’ permission to go into their inboxes

Facebook has admitted that some apps had access to users’ private messages, thanks to a policy that allowed devs to request mailbox permissions.

The revelation came as current Facebook users found out whether they or their friends had used the “This Is Your Digital Life” app that allowed academic Aleksandr Kogan to collect data on users and their friends.

Users whose friends had been suckered in by the quiz were told that as a result, their public profile, Page likes, birthday and current city were “likely shared” with the app.

So far, so expected. But, the notification went on:

A small number of people who logged into “This Is Your Digital Life” also shared their own News Feed, timeline, posts and messages which may have included posts and messages from you. They may also have shared your hometown.

That’s because, back in 2014 when the app was in use, developers using Facebook’s Graph API to get data off the platform could ask for read_mailbox permission, allowing them access to a person’s inbox.

That was just one of a series of extended permissions granted to devs under v1.0 of the Graph API, which was first introduced in 2010.

Following pressure from privacy activists – but much to the disappointment of developers – Facebook shut that tap off for most permissions in April 2015, although the changelog shows that read_mailbox wasn’t deprecated until 6 October 2015.

Facebook confirmed to The Register that this access had been requested by the app and that a small number of people had granted it permission.

“In 2014, Facebook’s platform policy allowed developers to request mailbox permissions but only if the person explicitly gave consent for this to happen,” a spokesborg told us.

“According to our records only a very small number of people explicitly opted into sharing this information. The feature was turned off in 2015.”

Source: Facebook admits: Apps were given users’ permission to go into their inboxes • The Register
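Mechanically, those extended permissions were just extra names in the OAuth login dialog’s `scope` parameter, so a v1.0-era app could have asked for inbox access along with the basics. A sketch of what building such a login URL might have looked like; the app id and redirect URL are placeholders, while `read_mailbox` is the real, since-removed permission name cited above:

```python
from urllib.parse import urlencode

# Sketch of a Graph API v1.0-era OAuth dialog URL. Extended permissions such
# as read_mailbox were requested simply by listing them in "scope".
def facebook_login_url(app_id, redirect_uri, permissions):
    params = {
        "client_id": app_id,
        "redirect_uri": redirect_uri,
        "scope": ",".join(permissions),  # comma-separated permission names
    }
    return "https://www.facebook.com/dialog/oauth?" + urlencode(params)

url = facebook_login_url(
    "123456789",                       # placeholder app id
    "https://quiz-app.example/login",  # placeholder redirect
    ["public_profile", "read_mailbox"],
)
print(url)
```

The consent dialog would then list the requested permissions to the user, which is the “explicit consent” Facebook’s statement leans on; the criticism is how few users grasped what granting inbox access actually meant.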

How to Check if Cambridge Analytica Had Your Facebook Data

Facebook launched a tool yesterday that you can use to find out whether you or your friends shared information with Cambridge Analytica, the Trump-affiliated company that harvested data from a Facebook app to support the then-candidate’s efforts in the 2016 presidential election.

If you were affected directly—and you have plenty of company, if so—you should have already received a little notification from Facebook. If you missed that in your News Feed (or you’ve already sworn off Facebook, but want to check and see if your information was compromised), Facebook also has a handy little Cambridge Analytica tool you can use.

The problem? While the tool can tell you if you or your friends shared your information via the spammy “This is Your Digital Life” app, it won’t tell you who among your friends was foolish enough to give up your information to a third party. You have lost your ability to publicly shame them, yell at them, or go over to where they live (or fire up a remote desktop session) to teach them how to … not do that ever again.

So, what can you do now?

Even though your past Facebook data might already be out there in the digital ether somewhere, you can now start locking down your information a bit more. Once you’re done checking the Cambridge Analytica tool, go to Facebook’s Settings page and click on Apps and Websites. Up until recently, Facebook had a setting (under “Apps Others Use”) that you could use to restrict the information that your friends could share about you to apps they were using. Now, you’ll see this message instead:

“These outdated settings have been removed because they applied to an older version of our platform that no longer exists.

To see or change the info you currently share with apps and websites, review the ones listed above, under ‘Logged in with Facebook.’”

Sounds ominous, right? Well, according to Facebook, these settings haven’t really done much of anything for years, anyway. As a Facebook spokesperson recently told Wired:

“These controls were built before we made significant changes to how developers build apps on Facebook. At the time, the Apps Others Use functionality allowed people to control what information could be shared to developers. We changed our systems years ago so that people could not share friends’ information with developers unless each friend also had explicitly granted permission to the developer.”

Instead, take a little time to review (again) the apps you’ve allowed to access your Facebook information. If you’re not using the app anymore, or if it sounds a little fishy, remove it—heck, remove as many apps as you can in one go.

Source: How to Check if Cambridge Analytica Had Your Facebook Data

CubeYou: Cambridge-like app collected data on millions from Facebook

Facebook is suspending a data analytics firm called CubeYou from the platform after CNBC notified the company that CubeYou was collecting information about users through quizzes.

CubeYou misleadingly labeled its quizzes “for non-profit academic research,” then shared user information with marketers. The scenario is eerily similar to how Cambridge Analytica received unauthorized access to data from as many as 87 million Facebook user accounts to target political marketing.

CubeYou, whose CEO denies any deception, sold data that had been collected by researchers working with the Psychometrics Lab at Cambridge University, similar to how Cambridge Analytica used information it obtained from other professors at the school for political marketing.

The CubeYou discovery suggests that collecting data from quizzes and using it for marketing purposes was far from an isolated incident. Moreover, the fact that CubeYou was able to mislabel the purpose of the quizzes — and that Facebook did nothing to stop it until CNBC pointed out the problem — suggests the platform has little control over this activity.

[…]

CubeYou boasts on its website that it uses census data and various web and social apps on Facebook and Twitter to collect personal information. CubeYou then contracts with advertising agencies that want to target certain types of Facebook users for ad campaigns.

CubeYou’s site says it has access to personally identifiable information (PII) such as first names, last names, emails, phone numbers, IP addresses, mobile IDs and browser fingerprints.

On a cached version of its website from March 19, it also said it keeps age, gender, location, work and education, and family and relationship information. It also has likes, follows, shares, posts, likes to posts, comments to posts, check-ins and mentions of brands/celebrities in a post. Interactions with companies are tracked back to 2012 and are updated weekly, the site said.

Source: CubeYou Cambridge-like app collected data on millions from Facebook

$0.75 – about how much Cambridge Analytica paid per voter in bid to micro-target their minds, internal docs reveal

Cambridge Analytica bought psychological profiles on individual US voters, costing roughly 75 cents to $5 apiece, each crafted using personal information plundered from millions of Facebook accounts, according to revealed internal documents.

Over the course of the past two weeks, whistleblower Chris Wylie has made a series of claims against his former employer, Cambridge Analytica, and its parent organizations SCL Elections and SCL Group.

He has alleged CA drafted in university academic Dr Aleksander Kogan to help micro-target voters using their personal information harvested from Facebook, and that the Vote Leave campaign in the UK’s Brexit referendum “cheated” election spending limits by funneling money to Canadian political ad campaign biz AggregateIQ through a number of smaller groups.

Cambridge Analytica has denied using Facebook-sourced information in its work for Donald Trump’s US election campaign, and dubbed the allegations against it as “completely unfounded conspiracy theories.”

A set of internal CA files released Thursday by Britain’s House of Commons’ Digital, Culture, Media and Sport Select Committee includes contracts and email exchanges, plus micro-targeting strategies and case studies boasting of the organization’s influence in previous international campaigns.

Among them is a contract, dated June 4, 2014, revealing a deal struck between SCL Elections and Kogan’s biz Global Science Research, referred to as GS in the documents. It showed that Kogan was commissioned by SCL to build up psychological profiles of people, using data slurped from their Facebook accounts by a quiz app, and match them to voter records obtained by SCL.

The app was built by GS, installed by some 270,000 people, and was granted access to their social network accounts and those of their friends, up to 50 million of them. The information was sold to Cambridge Analytica by GS.

[…]

GS’s fee was a nominal £3.14, and up to $5 per person during the trial stage. The maximum payment would have been $150,000 for 30,000 records.

The price tag for the full sample was to be established after the trial, the document stated, but the total fee was not to exceed $0.75 per matched record. The total cost of the full sample stage would have been up to $1.5m for all two million matches. Wylie claimed roughly $1m was spent in the end.

[…]

Elsewhere in the cache are documents relating to the relationship between AggregateIQ and SCL.

One file laid out an AIQ contract to develop a platform called Ripon – which SCL, and later CA, are said to have used for micro-targeting political campaigns – in the run-up to the 2014 US mid-term elections. Although this document wasn’t signed, it indicated the first payment to AIQ was made on April 7, 2014: a handsome sum of $25,000 (CA$27,000, £18,000).

[…]

A separate contract showed the two companies had worked together before this. It is dated November 25, 2013, and set out a deal in which AIQ would “assist” SCL by creating a constituent relationship management (CRM) system and help with the “acquisition of online data” for a political campaign in Trinidad and Tobago.

The payment for this work was $50,000, followed by three further installments of $50,000. The document is signed by AIQ cofounders: president Zackary Massingham, and chief operating officer Jeff Silvester. Project deliverables include data mapping, and use of behavioral datasets of qualified sources of data “that illustrate browsing activity, online behaviour and social contributions.”

A large section in the document, under the main heading for CRM deliverables, between sections labelled “reports” and “markup and CMS integration design / HTML markup,” is heavily redacted.

The document dump also revealed discussions between Rebekah Mercer, daughter of billionaire CA backer Robert Mercer, and Trump strategist Steve Bannon, about how to manage the involvement of UK-based Cambridge Analytica – a foreign company – with American elections and US election law, as well as praise for SCL from the UK’s Ministry of Defence.

Source: $0.75 – about how much Cambridge Analytica paid per voter in bid to micro-target their minds, internal docs reveal • The Register

Cambridge Analytica’s daddy biz SCL had ‘routine access’ to UK secrets

Cambridge Analytica’s parent biz had “routine access to UK secret information” as part of training it offered to the UK’s psyops group, according to documents released today.

A letter, published as part of a cache handed over to MPs by whistleblower Chris Wylie, details work that Strategic Communications Laboratories (SCL) carried out for the 15 (UK) Psychological Operations Group.

Dated 11 January 2012, it said that the group – which has since been subsumed into 77 Brigade – received training from SCL, first as part of a commission and then on a continued basis without additional cost to the Ministry of Defence.

The author’s name is redacted, but it stated that SCL were a “UK List ‘X’ accredited company cleared to routine access to UK secret information”.

It said that five training staff from SCL provided the group with measurement of effect training over the course of two weeks, with students including Defence Science and Technology Ltd scientists, deploying military officers and senior soldiers.

It said that, because of SCL’s clearance, the final part of the package “was a classified case study from current operations in Helmand, Afghanistan”.

The author commented: “Such contemporary realism added enormous value to the course.”

The letter went on to say that, since delivery, SCL has continued to support the group “without additional charge to the MoD”, which involved “further testing of the trained product on operations in Libya and Afghanistan”.

Finally, the document’s author offered their recommendation for the service provided by SCL.

It said that, although the MoD is “officially disbarred from offering commercial endorsement”, the author would have “no hesitation in inviting SCL to tender for further contracts of this nature”.

They added: “Indeed it is my personal view that there are very few, if any, other commercial organisations that can deliver proven training and education of this very specialist nature.”

Source: Cambridge Analytica’s daddy biz had ‘routine access’ to UK secrets • The Register

Grindr’s API Surrendered Location Data to a Third-Party Website—Even After Users Opted Out

A website that allowed users of Grindr’s gay-dating app to see who blocked them on the service says that by using the company’s API it was able to view unread messages, email addresses, deleted photos, and—perhaps most troubling—location data, according to a report published Wednesday.

The website, C*ckblocked, boasts of being the “first and only way to see who blocked you on Grindr.” The website’s owner, Trever Faden, told NBC that, by using Grindr’s API, he was able to access a wealth of personal information, including the location data of users—even for those who had opted to hide their locations.

“One could, without too much difficulty or even a huge amount of technological skill, easily pinpoint a user’s exact location,” Faden told NBC. But before he could access this information, Grindr users first had to supply C*ckblocked with their usernames and passwords, meaning that they voluntarily surrendered access to their accounts.

Grindr said that, once notified by Faden, it moved quickly to resolve the issue. The API that allowed C*ckblocked to function was patched on March 23rd, according to the website.

Source: Grindr’s API Surrendered Location Data to a Third-Party Website—Even After Users Opted Out

Mozilla launches Facebook container extension

This extension helps you control more of your web activity from Facebook by isolating your identity into a separate container. This makes it harder for Facebook to track your activity on other websites via third-party cookies.

Rather than stop using a service you find valuable and miss out on those adorable photos of your nephew, we think you should have tools to limit what data others can collect about you. That includes us: Mozilla does not collect data from your use of the Facebook Container extension. We only know the number of times the extension is installed or removed.

When you install this extension it will delete your Facebook cookies and log you out of Facebook. The next time you visit Facebook it will open in a new blue-colored browser tab (aka “container tab”). In that tab you can log in to Facebook and use it like you normally would. If you click on a non-Facebook link or navigate to a non-Facebook website in the URL bar, these pages will load outside of the container.
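The isolation mechanism can be illustrated with a toy model (this is illustrative only, not how Firefox's contextual identities are actually implemented): cookies are keyed by container as well as by domain, so a session cookie set inside the Facebook container is simply not present when a facebook.com tracking request fires from an ordinary tab.

```python
# Toy model of container-tab cookie isolation.

class Browser:
    def __init__(self):
        # Cookie jar keyed by (container, domain): a cookie set in one
        # container is never sent for requests made from another.
        self.cookies = {}

    def set_cookie(self, container, domain, name, value):
        self.cookies[(container, domain)] = {name: value}

    def get_cookies(self, container, domain):
        # A third-party request (e.g. a Like button on a news site
        # calling facebook.com) only sees cookies from *its* container.
        return self.cookies.get((container, domain), {})

b = Browser()
# Logging in inside the Facebook container sets a session cookie there.
b.set_cookie("facebook", "facebook.com", "session", "abc123")

# The same facebook.com request from a normal tab (default container)
# finds no cookie, so it can't link the visit to the logged-in identity.
assert b.get_cookies("default", "facebook.com") == {}
assert b.get_cookies("facebook", "facebook.com") == {"session": "abc123"}
```

This is why the extension can blunt third-party-cookie tracking without requiring you to log out of Facebook itself.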

Source: Facebook Container Extension: Take control of how you’re being tracked | The Firefox Frontier

Wylie: It’s possible that the Facebook app is listening to you

During an appearance before a committee of U.K. lawmakers today, Cambridge Analytica whistleblower Christopher Wylie breathed new life into longstanding rumors that the Facebook app listens to its users in order to target advertisements.

Damian Collins, a member of parliament who chaired the committee, asked whether the Facebook app might listen to what users are discussing and use it to prioritize certain ads.

But, Wylie said in a meandering reply, it’s possible that Facebook and other smartphone apps are listening in for reasons other than speech recognition. Specifically, he said, they might be trying to ascertain what type of environment a user is in in order to “improve the contextual value of the advertising itself.”

“There’s audio that could be useful just in terms of, are you in an office environment, are you outside, are you watching TV, what are you doing right now?” Wylie said, without elaborating on how that information could help target ads.

Facebook has long denied that its app analyzes audio in order to customize ads. But users have often reported mentioning a product that they’ve never expressed an interest in online — and then being inundated with online ads for it. Reddit users, in particular, spend time collecting what they purport to be evidence that Facebook is listening to users in a particular way, such as “micro-samples” of a few seconds rather than full-on continuous natural language processing.

Source: Wylie: It’s possible that the Facebook app is listening to you | The Outline

Dutch government pretends to think about referendum result against big brother unlimited surveillance, ignores it completely.

Basically, not only will they allow a huge number of different agencies to tap your internet and phone and store the data without any judicial procedures, checks or balances, they will also allow these agencies to share the data with whoever they want, including foreign agencies. Surprisingly, the Dutch people voted against these far-reaching breaches of privacy, so the government said it had thought about it and would amend the law in six tiny places which completely miss the point and the problems people have with their privacy being destroyed.

Source: Kabinet scherpt Wet op de inlichtingen- en veiligheidsdiensten 2017 aan | Nieuwsbericht | Defensie.nl

Facebook Blames a ‘Bug’ for Not Deleting Your Seemingly Deleted Videos

Did you ever record a video on Facebook to post directly to your friend’s wall, only to discard the take and film a new version? You may have thought those embarrassing draft versions were deleted, but Facebook kept a copy. The company is blaming it on a “bug” and swears that it’s going to delete those discarded videos now. They pinkie promise this time.

Last week, New York’s Select All broke the story that the social network was keeping the seemingly deleted old videos. The continued existence of the draft videos was discovered when several users downloaded their personal Facebook archives—and found numerous videos they never published. Today, Select All got a statement from Facebook blaming the whole thing on a “bug.” From Facebook via New York:

We investigated a report that some people were seeing their old draft videos when they accessed their information from our Download Your Information tool. We discovered a bug that prevented draft videos from being deleted. We are deleting them and apologize for the inconvenience. We appreciate New York Magazine for bringing the issue to our attention.

It was revealed last month that the data-harvesting firm (and apparent bribery consultants) Cambridge Analytica had acquired the information of about 50 million Facebook users and abused that data to help President Trump get elected. Specifically, the company was exploiting the anger of voters through highly-targeted advertising. And in the wake of the ensuing scandal, people have been learning all kinds of crazy things about Facebook.

Facebook users have been downloading some of the data that the social media behemoth keeps on them and it’s not pretty. For example, Facebook has kept detailed call logs from users with Android phones. The company says that Android users had to opt-in for the feature, but that’s a bullshit cop-out when you take a look at what the screen for “opting in” actually looks like.

Source: Facebook Blames a ‘Bug’ for Not Deleting Your Seemingly Deleted Videos

‘Big Brother’ in India Requires Fingerprint Scans for Food, Phones and Finances

NEW DELHI — Seeking to build an identification system of unprecedented scope, India is scanning the fingerprints, eyes and faces of its 1.3 billion residents and connecting the data to everything from welfare benefits to mobile phones.

Civil libertarians are horrified, viewing the program, called Aadhaar, as Orwell’s Big Brother brought to life. To the government, it’s more like “big brother,” a term of endearment used by many Indians to address a stranger when asking for help.

For other countries, the technology could provide a model for how to track their residents. And for India’s top court, the ID system presents unique legal issues that will define what the constitutional right to privacy means in the digital age.

To Adita Jha, Aadhaar was simply a hassle. The 30-year-old environmental consultant in Delhi waited in line three times to sit in front of a computer that photographed her face, captured her fingerprints and snapped images of her irises. Three times, the data failed to upload. The fourth attempt finally worked, and she has now been added to the 1.1 billion Indians already included in the program.

[…]

The poor must scan their fingerprints at the ration shop to get their government allocations of rice. Retirees must do the same to get their pensions. Middle-school students cannot enter the water department’s annual painting contest until they submit their identification.

In some cities, newborns cannot leave the hospital until their parents sign them up. Even leprosy patients, whose illness damages their fingers and eyes, have been told they must pass fingerprint or iris scans to get their benefits.

The Modi government has also ordered Indians to link their IDs to their cellphone and bank accounts. States have added their own twists, like using the data to map where people live. Some employers use the ID for background checks on job applicants.

[…]

Although the system’s core fingerprint, iris and face database appears to have remained secure, at least 210 government websites have leaked other personal data — such as name, birth date, address, parents’ names, bank account number and Aadhaar number — for millions of Indians. Some of that data is still available with a simple Google search.

As Aadhaar has become mandatory for government benefits, parts of rural India have struggled with the internet connections necessary to make Aadhaar work. After a lifetime of manual labor, many Indians also have no readable prints, making authentication difficult. One recent study found that 20 percent of the households in Jharkhand state had failed to get their food rations under Aadhaar-based verification — five times the failure rate of ration cards.

Source: ‘Big Brother’ in India Requires Fingerprint Scans for Food, Phones and Finances – The New York Times

Jaywalkers under surveillance in Shenzhen soon to be punished via text messages

Intellifusion, a Shenzhen-based AI firm that provides technology to the city’s police to display the faces of jaywalkers on large LED screens at intersections, is now talking with local mobile phone carriers and social media platforms such as WeChat and Sina Weibo to develop a system where offenders will receive personal text messages as soon as they violate the rules, according to Wang Jun, the company’s director of marketing solutions.

“Jaywalking has always been an issue in China and can hardly be resolved just by imposing fines or taking photos of the offenders. But a combination of technology and psychology … can greatly reduce instances of jaywalking and will prevent repeat offences,” Wang said.

[…]

For the current system installed in Shenzhen, Intellifusion installed cameras with 7 million pixels of resolution to capture photos of pedestrians crossing the road against traffic lights. Facial recognition technology identifies the individual from a database and displays a photo of the jaywalking offence, the family name of the offender and part of their government identification number on large LED screens above the pavement.

In the 10 months to February this year, as many as 13,930 jaywalking offenders were recorded and displayed on the LED screen at one busy intersection in Futian district, the Shenzhen traffic police announced last month.

Taking it a step further, in March the traffic police launched a webpage which displays photos, names and partial ID numbers of jaywalkers.

These measures have effectively reduced the number of repeat offenders, according to Wang.

Source: Jaywalkers under surveillance in Shenzhen soon to be punished via text messages | South China Morning Post

Wow, that’s a scary way to scan your entire population

Any social media accounts to declare? US wants travelers to tell

The US Department of State wants to ask visa applicants to provide details on the social media accounts they’ve used in the past five years, as well as telephone numbers, email addresses, and international travel during this period.

The plan, if approved by the Office of Management and Budget, will expand the vetting regime applied to those flagged for extra immigration scrutiny – rolled out last year – to every immigrant visa applicant and to non-immigrant visa applicants such as business travelers and tourists.

The Department of State published its notice of request for public comment in the Federal Register on Friday. The comment process concludes on May 29, 2018.

The notice explains that the Department of State wants to expand the information it collects by adding questions to its Electronic Application for Immigrant Visa and Alien Registration (DS-260).

The online form will provide a list of social media platforms – presumably the major ones – and “requires the applicant to provide any identifiers used by applicants for those platforms during the five years preceding the date of application.”

For social media platforms not on the list, visa applicants “will be given the option to provide information.”

The Department of State says that the form “will be submitted electronically over an encrypted connection to the Department via the internet,” as if to offer reassurance that it will be able to store the data securely.

It’s perhaps worth noting that Russian hackers penetrated the Department of State’s email system in 2014, and in 2016, the State Department’s Office of Inspector General (OIG) gave the agency dismal marks for both its physical and cybersecurity competency.

The Department of State estimates that its revised visa process will affect 710,000 immigrant visa applicants attempting to enter the US; its more limited review of travelers flagged for additional screening only affected an estimated 65,000 people.

But around 10 million non-immigrant visa applicants who seek to come to the US can also look forward to social media screening.

In a statement emailed to The Register, a State Department spokesperson said the proposed changes follow from President Trump’s March 2017 Memorandum and Executive Order 13780 and reflect the need for screening standards to address emerging threats.

“Under this proposal, nearly all US visa applicants will be asked to provide additional information, including their social media identifiers, prior passport numbers, information about family members, and a longer history of past travel, employment, and contact information than is collected in current visa application forms,” the spokesperson said.

The Department of State already collects limited contact information, travel history, family member information, and previous addresses from all visa applicants, the spokesperson said.

Source: Any social media accounts to declare? US wants travelers to tell • The Register

You can now use your Netflix subscription anywhere in the EU

‘This content is not available in your country’ – a damn annoying message, especially when you’re paying for it. But a new EU regulation means you can now access Netflix, Amazon Prime and other services from any country in Europe, marking an end to boring evenings in hotels watching BBC World News.

The European Commission’s ‘digital single market strategy’, which last year claimed victory over mobile roaming charges, has now led to it passing the ‘portability regulation’, which will allow users around the EU to use region-locked services more freely while travelling abroad.

Under currently active rules, what content is available in a certain territory is based on the specific local rights that a provider has secured. The new rules create what Phil Sherrell, head of international media, entertainment and sport for international law firm Bird and Bird, calls a “copyright fiction”, allowing the normal rules to be bent temporarily while a user is travelling.

The regulation was originally passed in June 2017, but the nine-month period given to rights holders and service providers to prepare is about to expire, thereby making the rules enforceable.

From today, content providers, whether their products are videos, music, games, live sport or e-books, will use their subscribers’ details to validate their home country, and let them access all the usual content and services available in that location all around the Union. This is mandatory for all paid services, which are also not permitted to charge extra for the new portability.

Sadly, this doesn’t mean you get extra content from other countries when you use the services back at home, just parity of experience around the EU. Another caveat to the regulation is that services which are offered for free, such as the online offerings of public service broadcasters like the BBC, are not obliged to follow the regulation. These providers instead may opt-in to the rules should they want to compete with their fee charging rivals.

[…]

Brexit of course may mean UK users only benefit from the legislation for a year or so, but that’s as yet unconfirmed. For now though, we can enjoy the simple pleasure of going abroad and, instead of sampling some of the local sights, enjoy the crucial freedom of watching, listening, playing or reading the same things that we could get at home.

Source: You can now use your Netflix subscription anywhere in the EU | WIRED UK