‘Forget the Facebook leak’: China is mining data directly from workers’ brains on an industrial scale

The workers wear caps that monitor their brainwaves, data that management then uses to adjust the pace of production and redesign workflows, according to the company.

The company said it could increase the overall efficiency of the workers by manipulating the frequency and length of break times to reduce mental stress.

Hangzhou Zhongheng Electric is just one example of the large-scale application of brain surveillance devices to monitor people’s emotions and other mental activities in the workplace, according to scientists and companies involved in the government-backed projects.

Concealed in regular safety helmets or uniform hats, these lightweight, wireless sensors constantly monitor the wearer’s brainwaves and stream the data to computers that use artificial intelligence algorithms to detect emotional spikes such as depression, anxiety or rage.
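Neither the article nor the vendors describe the actual algorithms, but the standard first step in this kind of pipeline is extracting frequency-band power from the EEG signal and feeding it to a classifier. A minimal sketch in Python (the sampling rate, the bands and the beta/alpha ratio as a "stress" proxy are all assumptions for illustration, not the deployed systems' method):

    import numpy as np
    from scipy.signal import welch
    from scipy.integrate import trapezoid

    FS = 250  # Hz; a plausible sampling rate for lightweight EEG sensors (assumed)

    def band_power(signal, fs, lo, hi):
        # Estimate signal power in a frequency band via Welch's periodogram.
        freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
        mask = (freqs >= lo) & (freqs < hi)
        return trapezoid(psd[mask], freqs[mask])

    def stress_index(eeg_window):
        # Naive proxy: beta power (13-30 Hz) relative to alpha (8-13 Hz)
        # is often associated with arousal in the EEG literature.
        alpha = band_power(eeg_window, FS, 8, 13)
        beta = band_power(eeg_window, FS, 13, 30)
        return beta / (alpha + 1e-9)

    # Demo on two seconds of synthetic single-channel "EEG".
    rng = np.random.default_rng(0)
    print(f"stress index: {stress_index(rng.normal(size=FS * 2)):.2f}")

A real deployment would classify multi-channel features against labelled data rather than threshold a single ratio, but the shape of the computation is the same: windows of raw signal in, an "emotional state" score out.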

The technology is in widespread use around the world but China has applied it on an unprecedented scale in factories, public transport, state-owned companies and the military to increase the competitiveness of its manufacturing industry and to maintain social stability.

It has also raised concerns about the need for regulation to prevent abuses in the workplace.

The technology is also in use in Hangzhou at State Grid Zhejiang Electric Power, where it has boosted company profits by about 2 billion yuan (US$315 million) since it was rolled out in 2014, according to Cheng Jingzhou, an official overseeing the company’s emotional surveillance programme.

“There is no doubt about its effect,” Cheng said.

Source: ‘Forget the Facebook leak’: China is mining data directly from workers’ brains on an industrial scale | South China Morning Post

Chinese government admits collection of deleted WeChat messages

Chinese authorities revealed over the weekend that they have the capability of retrieving deleted messages from the almost universally used WeChat app. The admission doesn’t come as a surprise to many, but it’s rare for this type of questionable data collection tactic to be acknowledged publicly.

As noted by the South China Morning Post, an anti-corruption commission in Hefei posted Saturday to social media that it has “retrieved a series of deleted WeChat conversations from a subject” as part of an investigation.

The post was deleted Sunday, but not before many had seen it and understood the ramifications. Tencent, which operates the WeChat service used by nearly a billion people (including myself), explained in a statement that “WeChat does not store any chat histories — they are only stored on users’ phones and computers.”

The technical details of this storage were not disclosed, but it seems clear from the commission’s post that they are accessible in some way to interested authorities, as many have suspected for years. The app does, of course, comply with other government requirements, such as censoring certain topics.

There are still plenty of questions, the answers to which would help explain user vulnerability: Are messages effectively encrypted at rest? Does retrieval require the user’s password and login, or can it be forced with a “master key” or backdoor? Can users permanently and totally delete messages on the WeChat platform at all?
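Those last questions are the crux: who holds the keys determines who can read “deleted” chats. A toy contrast using Python’s cryptography package (nothing below reflects WeChat’s actual design):

    from cryptography.fernet import Fernet

    secret = b"deleted chat message"

    # Case 1: only the user's device holds the key. The operator can store
    # ciphertext but cannot read it; retrieval genuinely requires the device.
    user_key = Fernet.generate_key()
    ciphertext = Fernet(user_key).encrypt(secret)

    # Case 2: the operator escrows a master key alongside the data. Anyone
    # with access to that key - or legally compelled to use it - can decrypt,
    # no matter what the user deletes locally.
    master_key = Fernet.generate_key()
    escrow_copy = Fernet(master_key).encrypt(secret)
    print(Fernet(master_key).decrypt(escrow_copy))  # b'deleted chat message'

The Hefei commission’s claim is only surprising under the first model; under the second, it is routine.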

Source: Chinese government admits collection of deleted WeChat messages | TechCrunch

Revealed: how bookies use AI to keep gamblers hooked | Technology | The Guardian

The gambling industry is increasingly using artificial intelligence to predict consumer habits and personalise promotions to keep gamblers hooked, industry insiders have revealed.

Current and former gambling industry employees have described how people’s betting habits are scrutinised and modelled to manipulate their future behaviour.

“The industry is using AI to profile customers and predict their behaviour in frightening new ways,” said Asif, a digital marketer who previously worked for a gambling company. “Every click is scrutinised in order to optimise profit, not to enhance a user’s experience.”

“I’ve often heard people wonder about how they are targeted so accurately, and it’s no wonder because it’s all hidden in the small print.”

Publicly, gambling executives boast of increasingly sophisticated advertising keeping people betting, while privately conceding that some are more susceptible to gambling addiction when bombarded with these types of bespoke ads and incentives.

Gamblers’ every click, page view and transaction is scientifically examined so that ads statistically more likely to work can be pushed through Google, Facebook and other platforms.
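The insiders don’t reveal the bookmakers’ actual models, but “every click is scrutinised in order to optimise profit” describes a plain propensity model. A toy version with scikit-learn; every feature name and threshold below is invented for illustration:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 1000

    # Synthetic per-user clickstream features (all invented):
    # page views, late-night sessions, deposits last month, days since last bet.
    X = np.column_stack([
        rng.poisson(20, n),
        rng.poisson(3, n),
        rng.poisson(2, n),
        rng.integers(0, 30, n),
    ])
    # Synthetic label: did the user take up a past promotion?
    y = (X[:, 1] + X[:, 2] + rng.normal(0, 1, n) > 5).astype(int)

    model = LogisticRegression().fit(X, y)

    # Score everyone and push the next bespoke offer at the top decile.
    scores = model.predict_proba(X)[:, 1]
    targets = np.argsort(scores)[-n // 10:]
    print(f"targeting {len(targets)} users; top score {scores.max():.2f}")

The uncomfortable part isn’t the maths, which is undergraduate-level; it’s that the label being optimised is “kept betting”, not “bet safely”.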

[…]

Last August, the Guardian revealed the gambling industry uses third-party companies to harvest people’s data, helping bookmakers and online casinos target people on low incomes and those who have stopped gambling.

Despite condemnation from MPs, experts and campaigners, such practices remain an industry norm.

“You can buy email lists with more than 100,000 people’s emails and phone numbers from data warehouses who regularly sell data to help market gambling promotions,” said Brian. “They say it’s all opted in but people haven’t opted in at all.”

In this way, among others, gambling companies and advertisers create detailed customer profiles including masses of information about their interests, earnings, personal details and credit history.

[…]

Elsewhere, there are plans to geolocate customers in order to identify when they arrive at stadiums, so they can be prompted via text message to bet on the game they are about to watch.
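The article doesn’t describe the planned mechanics, but the trigger is a standard geofence check: compute the distance from each location update to the stadium and fire a message when a customer crosses the threshold. A sketch (the stadium coordinates, radius and SMS stub are all illustrative):

    from math import radians, sin, cos, asin, sqrt

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance between two points, in metres.
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371000 * asin(sqrt(a))

    STADIUM = (51.556, -0.2796)   # Wembley, purely as an example
    FENCE_RADIUS_M = 500

    def send_sms(user_id, text):
        print(f"SMS to {user_id}: {text}")  # stand-in for a real SMS gateway

    def on_location_update(user_id, lat, lon):
        # Fire a betting prompt the moment a known customer enters the fence.
        if haversine_m(lat, lon, *STADIUM) <= FENCE_RADIUS_M:
            send_sms(user_id, "Kick-off soon - odds on today's match inside!")

    on_location_update("u123", 51.5562, -0.2790)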

The gambling industry earned £14bn in 2016, £4.5bn of which came from online betting, and it is pumping some of that money into making its products more sophisticated and, in effect, more addictive.

Source: Revealed: how bookies use AI to keep gamblers hooked | Technology | The Guardian

Whois is dead as Europe hands DNS overlord ICANN its arse :(

The Whois public database of domain name registration details is dead.

In a letter [PDF] sent this week to DNS overseer ICANN, Europe’s data protection authorities have effectively killed off the current service, noting that it breaks the law and so will be illegal come 25 May, when GDPR comes into force.

The letter also has harsh words for ICANN’s proposed interim solution, criticizing its vagueness and noting it needs to include explicit wording about what can be done with registrant data, as well as introduce auditing and compliance functions to make sure the data isn’t being abused.

ICANN now has a little over a month to come up with a replacement to the decades-old service that covers millions of domain names and lists the personal contact details of domain registrants, including their name, email and telephone number.

ICANN has already acknowledged it has no chance of doing so: a blog post by the company in response to the letter warns that without being granted a special temporary exemption from the law, the system will fracture.

“Unless there is a moratorium, we may no longer be able to give instructions to the contracted parties through our agreements to maintain Whois,” it warns. “Without resolution of these issues, the Whois system will become fragmented.”

We spoke with the president of ICANN’s Global Domains Division, Akram Atallah, and he told us that while there was “general agreement that having everything public is not the right way to go”, he was hopeful that the letter would not result in the Whois service being turned off completely while a replacement was developed.

Source: Whois is dead as Europe hands DNS overlord ICANN its arse • The Register

It’s an important and useful tool – hopefully they will resolve this one way or another.

Orkut Hello: The Man Behind Orkut Says His ‘Hello’ Platform Doesn’t Sell User Data

In 2004, one of the world’s most popular social networks, Orkut, was founded by a former Google employee named Orkut Büyükkökten. Later that year, a Harvard University student named Mark Zuckerberg launched ‘the Facebook’, which over the course of a year became ubiquitous in Ivy League universities and was eventually called Facebook.com.

Orkut was shut down by Google in 2014, but in its heyday, the network had hit 300 million users around the world. Facebook took five years to achieve that feat. At a time when the #DeleteFacebook movement is gaining traction worldwide in light of the Cambridge Analytica scandal, Orkut has made a comeback.

“Hello.com is a spiritual successor of Orkut.com,” Büyükkökten told BloombergQuint. “The most important thing about Orkut was communities, because they brought people together around topics and things that interested them and provided a safe place for people to exchange ideas and share genuine passions and feelings. We have built the entire ‘Hello’ experience around communities and passions and see it as Orkut 2.0.”

Orkut’s comeback arrives just as Mark Zuckerberg, founder and CEO of Facebook, is being questioned by U.S. congressmen and senators about his company’s policies and data collection and usage practices. That came after the Cambridge Analytica data leak, which impacted nearly 87 million users, including Zuckerberg himself.

“People have lost trust in social networks and the main reason is social media services today don’t put the users first. They put advertisers, brands, third parties, shareholders before the users,” Büyükkökten said. “They are also not transparent about practices. The privacy policy and terms of services are more like black boxes. How many users actually read them?”

Büyükkökten said users need to be educated about these practices, and that user consent is imperative whenever platforms share data. “On Hello, we do not share data with third parties. We have our own registration and login and so the data doesn’t follow you anywhere,” he said. “You don’t need to sell user data in order to be profitable or make money.”

Source: Orkut Hello: The Man Behind Orkut Says His ‘Hello’ Platform Doesn’t Sell User Data – Bloomberg Quint

I am very curious what his business model is, then.

Facebook admits: Apps were given users’ permission to go into their inboxes

Facebook has admitted that some apps had access to users’ private messages, thanks to a policy that allowed devs to request mailbox permissions.

The revelation came as current Facebook users found out whether they or their friends had used the “This Is Your Digital Life” app that allowed academic Aleksandr Kogan to collect data on users and their friends.

Users whose friends had been suckered in by the quiz were told that as a result, their public profile, Page likes, birthday and current city were “likely shared” with the app.

So far, so expected. But, the notification went on:

A small number of people who logged into “This Is Your Digital Life” also shared their own News Feed, timeline, posts and messages which may have included posts and messages from you. They may also have shared your hometown.

That’s because, back in 2014 when the app was in use, developers using Facebook’s Graph API to get data off the platform could ask for read_mailbox permission, allowing them access to a person’s inbox.

That was just one of a series of extended permissions granted to devs under v1.0 of the Graph API, which was first introduced in 2010.

Following pressure from privacy activists – but much to the disappointment of developers – Facebook shut that tap off for most permissions in April 2015, although the changelog shows that read_mailbox wasn’t deprecated until 6 October 2015.
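For context, a v1.0-era app only had to name the extended permission in its OAuth scope and, once the user clicked through the consent dialog, the inbox was a single Graph API call away. A rough reconstruction (the app ID and token are placeholders, and both the permission and the endpoint were removed years ago, so nothing below works today):

    import urllib.parse
    import requests

    # Step 1: send the user to Facebook's OAuth dialog, naming read_mailbox
    # in the scope (possible until the permission's deprecation in 2015).
    params = {
        "client_id": "YOUR_APP_ID",                      # placeholder
        "redirect_uri": "https://example.com/callback",  # placeholder
        "scope": "public_profile,read_mailbox",
    }
    print("https://www.facebook.com/dialog/oauth?" + urllib.parse.urlencode(params))

    # Step 2: after exchanging the returned code for an access token,
    # read the user's message threads. (v1.0-era endpoint; long since gone.)
    token = "USER_ACCESS_TOKEN"  # placeholder
    resp = requests.get(
        "https://graph.facebook.com/v1.0/me/inbox",
        params={"access_token": token},
    )

The consent screen did list the permission, which is presumably how Facebook can say users “explicitly opted in”, but few users parse a permissions list the way a developer does.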

Facebook confirmed to The Register that this access had been requested by the app and that a small number of people had granted it permission.

“In 2014, Facebook’s platform policy allowed developers to request mailbox permissions but only if the person explicitly gave consent for this to happen,” a spokesborg told us.

“According to our records only a very small number of people explicitly opted into sharing this information. The feature was turned off in 2015.”

Source: Facebook admits: Apps were given users’ permission to go into their inboxes • The Register

How to Check if Cambridge Analytica Had Your Facebook Data

Facebook launched a tool yesterday that you can use to find out whether you or your friends shared information with Cambridge Analytica, the Trump-affiliated company that harvested data from a Facebook app to support the then-candidate’s efforts in the 2016 presidential election.

If you were affected directly—and you have plenty of company, if so—you should have already received a little notification from Facebook. If you missed that in your News Feed (or you’ve already sworn off Facebook, but want to check and see if your information was compromised), Facebook also has a handy little Cambridge Analytica tool you can use.

The problem? While the tool can tell you if you or your friends shared your information via the spammy “This is Your Digital Life” app, it won’t tell you who among your friends was foolish enough to give up your information to a third party. You have lost your ability to publicly shame them, yell at them, or go over to where they live (or fire up a remote desktop session) to teach them how to … not do that ever again.

So, what can you do now?

Even though your past Facebook data might already be out there in the digital ether somewhere, you can now start locking down your information a bit more. Once you’re done checking the Cambridge Analytica tool, go here (Facebook’s Settings page). Click on Apps and Websites. Up until recently, Facebook had a setting (under “Apps Others Use”) that you could use to restrict the information that your friends could share about you to apps they were using. Now, you’ll see this message instead:

“These outdated settings have been removed because they applied to an older version of our platform that no longer exists.

To see or change the info you currently share with apps and websites, review the ones listed above, under ‘Logged in with Facebook.’”

Sounds ominous, right? Well, according to Facebook, these settings haven’t really done much of anything for years, anyway. As a Facebook spokesperson recently told Wired:

“These controls were built before we made significant changes to how developers build apps on Facebook. At the time, the Apps Others Use functionality allowed people to control what information could be shared to developers. We changed our systems years ago so that people could not share friends’ information with developers unless each friend also had explicitly granted permission to the developer.”

Instead, take a little time to review (again) the apps you’ve allowed to access your Facebook information. If you’re not using the app anymore, or if it sounds a little fishy, remove it—heck, remove as many apps as you can in one go.

Source: How to Check if Cambridge Analytica Had Your Facebook Data

CubeYou: Cambridge-like app collected data on millions from Facebook

Facebook is suspending a data analytics firm called CubeYou from the platform after CNBC notified the company that CubeYou was collecting information about users through quizzes.

CubeYou misleadingly labeled its quizzes “for non-profit academic research,” then shared user information with marketers. The scenario is eerily similar to how Cambridge Analytica received unauthorized access to data from as many as 87 million Facebook user accounts to target political marketing.

CubeYou, whose CEO denies any deception, sold data that had been collected by researchers working with the Psychometrics Lab at Cambridge University, similar to how Cambridge Analytica used information it obtained from other professors at the school for political marketing.

The CubeYou discovery suggests that collecting data from quizzes and using it for marketing purposes was far from an isolated incident. Moreover, the fact that CubeYou was able to mislabel the purpose of the quizzes — and that Facebook did nothing to stop it until CNBC pointed out the problem — suggests the platform has little control over this activity.

[…]

CubeYou boasts on its website that it uses census data and various web and social apps on Facebook and Twitter to collect personal information. CubeYou then contracts with advertising agencies that want to target certain types of Facebook users for ad campaigns.

CubeYou’s site says it has access to personally identifiable information (PII) such as first names, last names, emails, phone numbers, IP addresses, mobile IDs and browser fingerprints.

On a cached version of its website from March 19, it also said it keeps age, gender, location, work and education, and family and relationship information. It also has likes, follows, shares, posts, likes to posts, comments to posts, check-ins and mentions of brands/celebrities in a post. Interactions with companies are tracked back to 2012 and are updated weekly, the site said.

Source: CubeYou Cambridge-like app collected data on millions from Facebook

$0.75 – about how much Cambridge Analytica paid per voter in bid to micro-target their minds, internal docs reveal

Cambridge Analytica bought psychological profiles on individual US voters, costing roughly 75 cents to $5 apiece, each crafted using personal information plundered from millions of Facebook accounts, according to internal documents that have now been made public.

Over the course of the past two weeks, whistleblower Chris Wylie has made a series of claims against his former employer, Cambridge Analytica, and its parent organizations SCL Elections and SCL Group.

He has alleged CA drafted in university academic Dr Aleksander Kogan to help micro-target voters using their personal information harvested from Facebook, and that the Vote Leave campaign in the UK’s Brexit referendum “cheated” election spending limits by funneling money to Canadian political ad campaign biz AggregateIQ through a number of smaller groups.

Cambridge Analytica has denied using Facebook-sourced information in its work for Donald Trump’s US election campaign, and dubbed the allegations against it as “completely unfounded conspiracy theories.”

A set of internal CA files released Thursday by Britain’s House of Commons’ Digital, Culture, Media and Sport Select Committee includes contracts and email exchanges, plus micro-targeting strategies and case studies boasting of the organization’s influence in previous international campaigns.

Among them is a contract, dated June 4, 2014, revealing a deal struck between SCL Elections and Kogan’s biz Global Science Research, referred to as GS in the documents. It showed that Kogan was commissioned by SCL to build up psychological profiles of people, using data slurped from their Facebook accounts by a quiz app, and match them to voter records obtained by SCL.

The app was built by GS, installed by some 270,000 people, and was granted access to their social network accounts and those of their friends, up to 50 million of them. The information was sold to Cambridge Analytica by GS.

[…]

GS’s fee was a nominal £3.14, and up to $5 per person during the trial stage. The maximum payment would have been $150,000 for 30,000 records.

The price tag for the full sample was to be established after the trial, the document stated, but the total fee was not to exceed $0.75 per matched record. The total cost of the full sample stage would have been up to $1.5m for all two million matches. Wylie claimed roughly $1m was spent in the end.
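The contract’s numbers are internally consistent:

    trial stage:     30,000 records x $5.00  =   $150,000  (the stated maximum)
    full sample:  2,000,000 records x $0.75  = $1,500,000  (the stated $1.5m cap)

which is also where the headline figure of roughly 75 cents per voter comes from.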

[…]

Elsewhere in the cache are documents relating to the relationship between AggregateIQ and SCL.

One file laid out an AIQ contract to develop a platform called Ripon – which SCL, and later CA, are said to have used for micro-targeting political campaigns – in the run-up to the 2014 US mid-term elections. Although this document wasn’t signed, it indicated the first payment to AIQ was made on April 7, 2014: a handsome sum of $25,000 (CA$27,000, £18,000).

[…]

A separate contract showed the two companies had worked together before this. It is dated November 25, 2013, and set out a deal in which AIQ would “assist” SCL by creating a constituent relationship management (CRM) system and help with the “acquisition of online data” for a political campaign in Trinidad and Tobago.

The payment for this work was $50,000, followed by three further installments of $50,000. The document is signed by AIQ cofounders: president Zackary Massingham, and chief operating officer Jeff Silvester. Project deliverables include data mapping, and use of behavioral datasets of qualified sources of data “that illustrate browsing activity, online behaviour and social contributions.”

A large section in the document, under the main heading for CRM deliverables, between sections labelled “reports” and “markup and CMS integration design / HTML markup,” is heavily redacted.

The document dump also revealed discussions between Rebekah Mercer, daughter of billionaire CA backer Robert Mercer, and Trump strategist Steve Bannon, about how to manage the involvement of UK-based Cambridge Analytica – a foreign company – with American elections and US election law, as well as praise for SCL from the UK’s Ministry of Defence.

Source: $0.75 – about how much Cambridge Analytica paid per voter in bid to micro-target their minds, internal docs reveal • The Register

Cambridge Analytica’s daddy biz SCL had ‘routine access’ to UK secrets

Cambridge Analytica’s parent biz had “routine access to UK secret information” as part of training it offered to the UK’s psyops group, according to documents released today.

A letter, published as part of a cache handed over to MPs by whistleblower Chris Wylie, details work that Strategic Communications Laboratories (SCL) carried out for the 15 (UK) Psychological Operations Group.

Dated 11 January 2012, it said that the group – which has since been subsumed into 77 Brigade – received training from SCL, first as part of a commission and then on a continued basis without additional cost to the Ministry of Defence.

The author’s name is redacted, but it stated that SCL were a “UK List ‘X’ accredited company cleared to routine access to UK secret information”.

It said that five training staff from SCL provided the group with measurement of effect training over the course of two weeks, with students including Defence Science and Technology Laboratory scientists, deploying military officers and senior soldiers.

It said that, because of SCL’s clearance, the final part of the package “was a classified case study from current operations in Helmand, Afghanistan”.

The author commented: “Such contemporary realism added enormous value to the course.”

The letter went on to say that, since delivery, SCL has continued to support the group “without additional charge to the MoD”, which involved “further testing of the trained product on operations in Libya and Afghanistan”.

Finally, the document’s author offered their recommendation for the service provided by SCL.

It said that, although the MoD is “officially disbarred from offering commercial endorsement”, the author would have “no hesitation in inviting SCL to tender for further contracts of this nature”.

They added: “Indeed it is my personal view that there are very few, if any, other commercial organisations that can deliver proven training and education of this very specialist nature.”

Source: Cambridge Analytica’s daddy biz had ‘routine access’ to UK secrets • The Register

Grindr’s API Surrendered Location Data to a Third-Party Website—Even After Users Opted Out

A website that allowed users of Grindr’s gay-dating app to see who blocked them on the service says that by using the company’s API it was able to view unread messages, email addresses, deleted photos, and—perhaps most troubling—location data, according to a report published Wednesday.

The website, C*ckblocked, boasts of being the “first and only way to see who blocked you on Grindr.” The website’s owner, Trever Faden, told NBC that, by using Grindr’s API, he was able to access a wealth of personal information, including the location data of users—even for those who had opted to hide their locations.

“One could, without too much difficulty or even a huge amount of technological skill, easily pinpoint a user’s exact location,” Faden told NBC. But before he could access this information, Grindr users first had to supply C*ckblocked with their usernames and passwords, meaning that they voluntarily surrendered access to their accounts.

Grindr said that, once notified by Faden, it moved quickly to resolve the issue. The API that allowed C*ckblocked to function was patched on March 23rd, according to the website.

Source: Grindr’s API Surrendered Location Data to a Third-Party Website—Even After Users Opted Out

Mozilla launches Facebook container extension

This extension helps you control more of your web activity from Facebook by isolating your identity into a separate container. This makes it harder for Facebook to track your activity on other websites via third-party cookies.

Rather than stop using a service you find valuable and miss out on those adorable photos of your nephew, we think you should have tools to limit what data others can collect about you. That includes us: Mozilla does not collect data from your use of the Facebook Container extension. We only know the number of times the extension is installed or removed.

When you install this extension it will delete your Facebook cookies and log you out of Facebook. The next time you visit Facebook it will open in a new blue-colored browser tab (aka “container tab”). In that tab you can log in to Facebook and use it like you normally would. If you click on a non-Facebook link or navigate to a non-Facebook website in the URL bar, these pages will load outside of the container.

Source: Facebook Container Extension: Take control of how you’re being tracked | The Firefox Frontier

Wylie: It’s possible that the Facebook app is listening to you

During an appearance before a committee of U.K. lawmakers today, Cambridge Analytica whistleblower Christopher Wylie breathed new life into longstanding rumors that the Facebook app listens to its users in order to target advertisements.

Damian Collins, a member of parliament who chaired the committee, asked whether the Facebook app might listen to what users are discussing and use it to prioritize certain ads.

But, Wylie said in a meandering reply, it’s possible that Facebook and other smartphone apps are listening in for reasons other than speech recognition. Specifically, he said, they might be trying to ascertain what type of environment a user is in, in order to “improve the contextual value of the advertising itself.”

“There’s audio that could be useful just in terms of, are you in an office environment, are you outside, are you watching TV, what are you doing right now?” Wylie said, without elaborating on how that information could help target ads.

Facebook has long denied that its app analyzes audio in order to customize ads. But users have often reported mentioning a product that they’ve never expressed an interest in online — and then being inundated with online ads for it. Reddit users, in particular, spend time collecting what they purport to be evidence that Facebook is listening to users in a particular way, such as “micro-samples” of a few seconds rather than full-on continuous natural language processing.

Source: Wylie: It’s possible that the Facebook app is listening to you | The Outline

Dutch government pretends to think about referendum result against big brother unlimited surveillance, ignores it completely.

Basically, not only will they allow a huge number of different agencies to tap your internet and phone traffic and store it without any judicial procedures, checks or balances, they will also allow these agencies to share the data with whoever they want, including foreign agencies. Surprisingly, the Dutch people voted against these far-reaching breaches of privacy, so the government said it had thought about it and would edit the law in six tiny places which completely miss the point and the problems people have with their privacy being destroyed.

Source: Kabinet scherpt Wet op de inlichtingen- en veiligheidsdiensten 2017 aan | Nieuwsbericht | Defensie.nl

Facebook Blames a ‘Bug’ for Not Deleting Your Seemingly Deleted Videos

Did you ever record a video on Facebook to post directly to your friend’s wall, only to discard the take and film a new version? You may have thought those embarrassing draft versions were deleted, but Facebook kept a copy. The company is blaming it on a “bug” and swears that it’s going to delete those discarded videos now. They pinkie promise this time.

Last week, New York’s Select All broke the story that the social network was keeping the seemingly deleted old videos. The continued existence of the draft videos was discovered when several users downloaded their personal Facebook archives—and found numerous videos they never published. Today, Select All got a statement from Facebook blaming the whole thing on a “bug.” From Facebook via New York:

We investigated a report that some people were seeing their old draft videos when they accessed their information from our Download Your Information tool. We discovered a bug that prevented draft videos from being deleted. We are deleting them and apologize for the inconvenience. We appreciate New York Magazine for bringing the issue to our attention.

It was revealed last month that the data-harvesting firm (and apparent bribery consultants) Cambridge Analytica had acquired the information of about 50 million Facebook users and abused that data to help President Trump get elected. Specifically, the company was exploiting the anger of voters through highly-targeted advertising. And in the wake of the ensuing scandal, people have been learning all kinds of crazy things about Facebook.

Facebook users have been downloading some of the data that the social media behemoth keeps on them and it’s not pretty. For example, Facebook has kept detailed call logs from users with Android phones. The company says that Android users had to opt-in for the feature, but that’s a bullshit cop-out when you take a look at what the screen for “opting in” actually looks like.

Source: Facebook Blames a ‘Bug’ for Not Deleting Your Seemingly Deleted Videos

‘Big Brother’ in India Requires Fingerprint Scans for Food, Phones and Finances

NEW DELHI — Seeking to build an identification system of unprecedented scope, India is scanning the fingerprints, eyes and faces of its 1.3 billion residents and connecting the data to everything from welfare benefits to mobile phones.

Civil libertarians are horrified, viewing the program, called Aadhaar, as Orwell’s Big Brother brought to life. To the government, it’s more like “big brother,” a term of endearment used by many Indians to address a stranger when asking for help.

For other countries, the technology could provide a model for how to track their residents. And for India’s top court, the ID system presents unique legal issues that will define what the constitutional right to privacy means in the digital age.

To Adita Jha, Aadhaar was simply a hassle. The 30-year-old environmental consultant in Delhi waited in line three times to sit in front of a computer that photographed her face, captured her fingerprints and snapped images of her irises. Three times, the data failed to upload. The fourth attempt finally worked, and she has now been added to the 1.1 billion Indians already included in the program.

[…]

The poor must scan their fingerprints at the ration shop to get their government allocations of rice. Retirees must do the same to get their pensions. Middle-school students cannot enter the water department’s annual painting contest until they submit their identification.

In some cities, newborns cannot leave the hospital until their parents sign them up. Even leprosy patients, whose illness damages their fingers and eyes, have been told they must pass fingerprint or iris scans to get their benefits.

The Modi government has also ordered Indians to link their IDs to their cellphone and bank accounts. States have added their own twists, like using the data to map where people live. Some employers use the ID for background checks on job applicants.

[…]

Although the system’s core fingerprint, iris and face database appears to have remained secure, at least 210 government websites have leaked other personal data — such as name, birth date, address, parents’ names, bank account number and Aadhaar number — for millions of Indians. Some of that data is still available with a simple Google search.

As Aadhaar has become mandatory for government benefits, parts of rural India have struggled with the internet connections necessary to make Aadhaar work. After a lifetime of manual labor, many Indians also have no readable prints, making authentication difficult. One recent study found that 20 percent of the households in Jharkhand state had failed to get their food rations under Aadhaar-based verification — five times the failure rate of ration cards.

Source: ‘Big Brother’ in India Requires Fingerprint Scans for Food, Phones and Finances – The New York Times

Jaywalkers under surveillance in Shenzhen soon to be punished via text messages

Intellifusion, a Shenzhen-based AI firm that provides technology to the city’s police to display the faces of jaywalkers on large LED screens at intersections, is now talking with local mobile phone carriers and social media platforms such as WeChat and Sina Weibo to develop a system where offenders will receive personal text messages as soon as they violate the rules, according to Wang Jun, the company’s director of marketing solutions.

“Jaywalking has always been an issue in China and can hardly be resolved just by imposing fines or taking photos of the offenders. But a combination of technology and psychology … can greatly reduce instances of jaywalking and will prevent repeat offences,” Wang said.

[…]

For the current system in Shenzhen, Intellifusion installed cameras with a resolution of 7 megapixels to capture photos of pedestrians crossing the road against traffic lights. Facial recognition technology identifies the individual from a database and displays a photo of the jaywalking offence, the family name of the offender and part of their government identification number on large LED screens above the pavement.
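The report doesn’t specify the masking scheme, but the display step amounts to redacting most of an 18-character resident ID and keeping the family name. A hypothetical sketch (the name, the number and the count of digits kept are all invented):

    def shaming_display(family_name, id_number, keep=4):
        # Show the family name plus only the last few ID digits - one
        # plausible reading of "part of their government identification
        # number" on the LED screens.
        masked = "*" * (len(id_number) - keep) + id_number[-keep:]
        return f"{family_name} ({masked})"

    print(shaming_display("Wang", "440301199001011234"))
    # Wang (**************1234)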

In the 10 months to February this year, as many as 13,930 jaywalking offenders were recorded and displayed on the LED screen at one busy intersection in Futian district, the Shenzhen traffic police announced last month.

Taking it a step further, in March the traffic police launched a webpage which displays photos, names and partial ID numbers of jaywalkers.

These measures have effectively reduced the number of repeat offenders, according to Wang.

Source: Jaywalkers under surveillance in Shenzhen soon to be punished via text messages | South China Morning Post

Wow, that’s a scary way to scan your entire population

Any social media accounts to declare? US wants travelers to tell

The US Department of State wants to ask visa applicants to provide details on the social media accounts they’ve used in the past five years, as well as telephone numbers, email addresses, and international travel during this period.

The plan, if approved by the Office of Management and Budget, will expand the vetting regime applied to those flagged for extra immigration scrutiny – rolled out last year – to every immigrant visa applicant and to non-immigrant visa applicants such as business travelers and tourists.

The Department of State published its notice of request for public comment in the Federal Register on Friday. The comment process concludes on May 29, 2018.

The notice explains that the Department of State wants to expand the information it collects by adding questions to its Electronic Application for Immigrant Visa and Alien Registration (DS-260).

The online form will provide a list of social media platforms – presumably the major ones – and “requires the applicant to provide any identifiers used by applicants for those platforms during the five years preceding the date of application.”

For social media platforms not on the list, visa applicants “will be given the option to provide information.”

The Department of State says that the form “will be submitted electronically over an encrypted connection to the Department via the internet,” as if to offer reassurance that it will be able to store the data securely.

It’s perhaps worth noting that Russian hackers penetrated the Department of State’s email system in 2014, and in 2016, the State Department’s Office of Inspector General (OIG) gave the agency dismal marks for both its physical and cybersecurity competency.

The Department of State estimates that its revised visa process will affect 710,000 immigrant visa applicants attempting to enter the US; its more limited review of travelers flagged for additional screening only affected an estimated 65,000 people.

But around 10 million non-immigrant visa applicants who seek to come to the US can also look forward to social media screening.

In a statement emailed to The Register, a State Department spokesperson said the proposed changes follow from President Trump’s March 2017 Memorandum and Executive Order 13780 and reflect the need for screening standards to address emerging threats.

“Under this proposal, nearly all US visa applicants will be asked to provide additional information, including their social media identifiers, prior passport numbers, information about family members, and a longer history of past travel, employment, and contact information than is collected in current visa application forms,” the spokesperson said.

The Department of State already collects limited contact information, travel history, family member information, and previous addresses from all visa applicants, the spokesperson said.

Source: Any social media accounts to declare? US wants travelers to tell • The Register

You can now use your Netflix subscription anywhere in the EU

‘This content is not available in your country’ – a damn annoying message, especially when you’re paying for it. But a new EU regulation means you can now access Netflix, Amazon Prime and other services from any country in Europe, marking an end to boring evenings in hotels watching BBC World News.

The European Commission’s ‘digital single market strategy’, which last year claimed victory over mobile roaming charges, has now led to the passing of the ‘portability regulation’, which will allow users around the EU to use region-locked services more freely while travelling abroad.

Under currently active rules, what content is available in a certain territory is based on the specific local rights that a provider has secured. The new rules allow for what Phil Sherrell, head of international media, entertainment and sport for international law firm Bird and Bird, calls “copyright fiction”, allowing the normal rules to be bent temporarily while a user is travelling.

The regulation was originally passed in June 2017, but the nine-month period given to rights holders and service providers to prepare is about to expire, thereby making the rules enforceable.

From today, content providers, whether their products are videos, music, games, live sport or e-books, will use their subscribers’ details to validate their home country, and let them access all the usual content and services available in that location all around the Union. This is mandatory for all paid services, which are also not permitted to charge extra for the new portability.
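Stripped of the legal language, the portability rule is simple: verify the subscriber’s home member state once, then serve that home catalogue anywhere in the EU. A sketch of the decision logic (catalogue names invented; the UK appears because it was still a member when this was written):

    EU = {"AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR",
          "DE", "GR", "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL",
          "PL", "PT", "RO", "SK", "SI", "ES", "SE", "GB"}

    def catalogue_for(home_country, current_country):
        # Portability: a paying subscriber temporarily in another member
        # state gets their *home* catalogue, not the local one.
        if home_country in EU and current_country in EU:
            return f"catalogue:{home_country}"
        # Outside the EU, territorial licensing applies as before.
        return f"catalogue:{current_country}"

    print(catalogue_for("NL", "FR"))  # catalogue:NL - home content abroad
    print(catalogue_for("NL", "US"))  # catalogue:US - regulation doesn't apply

Note that the first branch also explains the caveat below: you always get your home catalogue, never the union of all of them.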

Sadly, this doesn’t mean you get extra content from other countries when you use the services back at home, just parity of experience around the EU. Another caveat to the regulation is that services which are offered for free, such as the online offerings of public service broadcasters like the BBC, are not obliged to follow the regulation. These providers may instead opt in to the rules should they want to compete with their fee-charging rivals.

[…]

Brexit of course may mean UK users only benefit from the legislation for a year or so, but that’s as yet unconfirmed. For now though, we can enjoy the simple pleasure of going abroad and, instead of sampling some of the local sights, enjoy the crucial freedom of watching, listening, playing or reading the same things that we could get at home.

Source: You can now use your Netflix subscription anywhere in the EU | WIRED UK

Chrome Is Scanning Files on Your Computer, and People Are Freaking Out

The browser you likely use to read this article scans practically all files on your Windows computer. And you probably had no idea until you read this. Don’t worry, you’re not the only one.

Last year, Google announced some upgrades to Chrome, by far the world’s most used browser—and the one security pros often recommend. The company promised to make internet surfing on Windows computers even “cleaner” and “safer”, adding what The Verge called “basic antivirus features.” What Google did was improve something called Chrome Cleanup Tool for Windows users, using software from cybersecurity and antivirus company ESET.

Tensions around the issue of digital privacy are understandably high following Facebook’s Cambridge Analytica scandal, but as far as we can tell there is no reason to worry here, and what Google is doing is above board.

In practice, Chrome on Windows looks through your computer in search of malware that targets the Chrome browser itself, using ESET’s antivirus engine. If it finds some suspected malware, it sends metadata of the file where the malware is stored, and some system information, to Google. Then, it asks you for permission to remove the suspected malicious file. (You can opt out of sending information to Google by deselecting the “Report details to Google” checkbox.)

[Screenshot: the Chrome pop-up that appears if Chrome Cleanup Tool detects malware on your Windows computer.]
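Google hasn’t published the exact report format, but “metadata of the file … and some system information” suggests something like a hash, a size and a filename rather than the file’s contents. A speculative sketch of what such a report could contain (the field set is entirely an assumption, not Google’s format):

    import hashlib
    import os
    import platform

    def suspect_file_report(path):
        # Hash the file locally; only the digest and a few descriptive
        # fields would leave the machine - not the bytes themselves.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        st = os.stat(path)
        return {
            "sha256": digest.hexdigest(),
            "size_bytes": st.st_size,
            "filename": os.path.basename(path),
            "os": platform.platform(),
        }

    print(suspect_file_report(__file__))  # demo: report on this script itself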

Last week, Kelly Shortridge, who works at cybersecurity startup SecurityScorecard, noticed that Chrome was scanning files in the Documents folder of her Windows computer.

“In the current climate, it really shocked me that Google would so quietly roll out this feature without publicizing more detailed supporting documentation—even just to preemptively ease speculation,” Shortridge told me in an online chat. “Their intentions are clearly security-minded, but the lack of explicit consent and transparency seems to violate their own criteria of ‘user-friendly software’ that informs the policy for Chrome Cleanup [Tool].”

Her tweet got a lot of attention and caused other people in the infosec community—as well as average users such as me—to scratch their heads.

“Nobody likes surprises,” Haroon Meer, the founder at security consulting firm Thinkst, told me in an online chat. “When people fear a big brother, and tech behemoths going too far…a browser touching files it has no business to touch is going to set off alarm bells.”

Now, to be clear, this doesn’t mean Google can, for example, see photos you store on your Windows machine. According to Google, the goal of Chrome Cleanup Tool is to make sure malware doesn’t mess with Chrome on your computer by installing dangerous extensions, or putting ads where they’re not supposed to be.

As the head of Google Chrome security Justin Schuh explained on Twitter, the tool’s “sole purpose is to detect and remove unwanted software manipulating Chrome.” Moreover, he added, the tool only runs weekly, it only has normal user privileges (meaning it can’t go too deep into the system), is “sandboxed” (meaning its code is isolated from other programs), and users have to explicitly click on that box screenshotted above to remove the files and “cleanup.”

In other words, Chrome Cleanup Tool is less invasive than a regular “cloud” antivirus that scans your whole computer (including its more sensitive parts such as the kernel) and uploads some data to the antivirus company’s servers.

But as Johns Hopkins professor Matthew Green put it, most people “are just a little creeped out that Chrome started poking through their underwear drawer without asking.”

That’s the problem here: most users of an internet browser probably don’t expect it to scan and remove files on their computers.

Source: Chrome Is Scanning Files on Your Computer, and People Are Freaking Out – Motherboard

I really don’t think it is the job of the browser to scan your computer at all.

Grindr: Yeah, we shared your HIV status info with other companies – but we didn’t charge them! (oh and your GPS coords)

Hookup fixer Grindr is on the defensive after it shared sensitive information, including HIV status and physical location, of its app’s users with outside organizations.

The quickie booking facilitator on Monday admitted it passed, via HTTPS, people’s public profiles to third-party analytics companies to process on its behalf. That means, yes, the information was handed over in bulk, but, hey, at least it didn’t sell it!

“Grindr has never, nor will we ever sell personally identifiable user information – especially information regarding HIV status or last test date – to third parties or advertisers,” CTO Scott Chen said in a statement.

Rather than apologize, Grindr said its punters should have known better than to give it any details they didn’t want passed around to other companies. On the one hand, the data was scraped from the application’s public profiles, so, well, maybe people ought to calm down. It was all public anyway. On the other hand, perhaps people didn’t expect it to be handed over for analysis en masse.

“It’s important to remember that Grindr is a public forum,” Chen said. “We give users the option to post information about themselves including HIV status and last test date, and we make it clear in our privacy policy that if you choose to include this information in your profile, the information will also become public.”

This statement is in response to last week’s disclosure by security researchers on the ways the Grindr app shares user information with third-party advertisers and partners. Among the information found to be passed around by Grindr was the user’s HIV status, something Grindr allows members to list in their profiles.

The HIV status, along with last test date, sexual position preference, and GPS location were among the pieces of info Grindr shared via encrypted network connections with analytics companies Localytics and Apptimize.

The revelation drew sharp criticism of Grindr, with many slamming the upstart for sharing what many consider to be highly sensitive personal information with third parties, along with GPS coordinates.

Source: Grindr: Yeah, we shared your HIV status info with other companies – but we didn’t charge them! • The Register

Most of Facebook’s 2.2 billion users had their data scraped by externals – because it was easy to do

At this point, the social media company is just going for broke, telling the public it should just assume that “most” of the 2.2 billion Facebook users have probably had their public data scraped by “malicious actors.”

[…]

Meanwhile, reports have focused on a variety of issues that have popped up in just the last 24 hours. It’s hard to focus on what matters—and frankly, all of it seems to matter, so in turn, it ends up feeling like none of it does. This is the Trump PR playbook, and Facebook is running it perfectly. It’s the media version of too big to fail, call it too big to matter. Let us suggest that you just zero in on one detail from yesterday’s blog post about new restrictions on data access on the platform.

Mike Schroepfer, Facebook’s chief technology officer, explained that prior to yesterday, “people could enter another person’s phone number or email address into Facebook search to help find them.” This function would help you cut through all the John Smiths and locate the page of your John Smith. He gave the example of Bangladesh where the tool was used for 7 percent of all searches. Thing is, it was also useful to data-scrapers. Schroepfer wrote:

However, malicious actors have also abused these features to scrape public profile information by submitting phone numbers or email addresses they already have through search and account recovery. Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped in this way. So we have now disabled this feature. We’re also making changes to account recovery to reduce the risk of scraping as well.

The full meaning of that paragraph might not be readily apparent, but imagine you’re a hacker who bought a huge database of phone numbers on the dark web. Those numbers might have some use on their own, but they become way more useful for breaking into individual systems or committing fraud if you can attach more data to them. Facebook is saying that this kind of malicious actor would regularly take one of those numbers and use the platform to hunt down all publicly available data on its owner. This process, of course, could be automated and reap huge rewards with little effort. Suddenly, the hacker might have a user’s number, photos, marriage status, email address, birthday, location, pet names, and more—an excellent toolkit to do some damage.

In yesterday’s Q&A, Zuckerberg explained that Facebook did have some basic protections to prevent the sort of automation that makes this particularly convenient, but “we did see a number of folks who cycled through many thousands of IPs, hundreds of thousands of IP addresses to evade the rate-limiting system, and that wasn’t a problem we really had a solution to.” The ultimate solution was to shut the features down. As far as the impact goes, “I think the thing people should assume, given this is a feature that’s been available for a while—and a lot of people use it in the right way—but we’ve also seen some scraping, I would assume if you had that setting turned on, that someone at some point has accessed your public information in this way,” Zuckerberg said. Did you have that setting turned on? Ever? Given that Facebook says “most” accounts were affected, it’s safe to assume you did.
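Zuckerberg’s remark about attackers cycling through hundreds of thousands of IP addresses describes a generic weakness: a rate limit keyed on source IP only throttles callers who keep the same IP. A toy limiter makes the arithmetic obvious (the window and threshold are invented):

    import time
    from collections import defaultdict

    WINDOW_S = 60
    MAX_LOOKUPS = 10          # per IP per window - invented threshold

    hits = defaultdict(list)

    def allow(ip):
        # Classic sliding-window rate limit keyed on the caller's IP.
        now = time.monotonic()
        hits[ip] = [t for t in hits[ip] if now - t < WINDOW_S]
        if len(hits[ip]) >= MAX_LOOKUPS:
            return False
        hits[ip].append(now)
        return True

    # A scraper rotating through a proxy pool never trips the limit: each
    # IP stays under MAX_LOOKUPS while the pool as a whole performs
    # thousands of phone-number lookups per minute.
    pool = [f"10.0.{i // 256}.{i % 256}" for i in range(1000)]
    allowed = sum(allow(pool[i % len(pool)]) for i in range(5000))
    print(f"{allowed} of 5000 lookups allowed")  # all 5000 pass

Which is why Facebook’s “solution” was to remove the lookup feature entirely rather than keep tuning the limiter.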

[…]

Mark Zuckerberg has known from the beginning that his creation was bad for privacy and security. Activists, the press, and tech experts have been saying it for years, but we the public either didn’t understand, didn’t care, or chose to ignore the warnings. That’s not totally the public’s fault. We’re only now seeing a big red example of what it means for one company, controlled by one man, to have control over seemingly limitless personal information. Even the NSA can’t keep its secret hacking tools on lockdown, so why would Facebook be able to protect your information? In many respects, it was just giving it away.

Source: Facebook Just Made a Shocking Admission, and We’re All Too Exhausted to Notice

Cambridge Analytica whistleblower: Facebook data could have come from more than 87 million users

Cambridge Analytica whistleblower Christopher Wylie says the data the firm gathered from Facebook could have come from more than 87 million users and could be stored in Russia.

The number of Facebook users whose personal information was accessed by Cambridge Analytica “could be higher, absolutely,” than the 87 million users acknowledged by Facebook, Wylie told NBC’s Chuck Todd during a “Meet the Press” segment Sunday.

Wylie added that his lawyer has been contacted by US authorities, including congressional investigators and the Department of Justice, and says he plans to cooperate with them.

“We’re just setting out dates that I can actually go and sit down and meet with the authorities,” he said.

The former Cambridge Analytica employee said that “a lot of people” had access to the data and referenced a “genuine risk” that the harvested data could be stored in Russia.

“It could be stored in various parts of the world, including Russia, given the fact that the professor who was managing the data harvesting process was going back and forth between the UK and to Russia,” Wylie said.

Aleksander Kogan, a Russian data scientist who gave lectures at St. Petersburg State University, gathered Facebook data from millions of Americans. He then sold it to Cambridge Analytica, which worked with President Donald Trump’s 2016 presidential campaign.

When asked if he thought Facebook was even able to calculate the number of users affected, Wylie stressed that data can be copied once it leaves a database.

“I know that Facebook is now starting to take steps to rectify that and start to find out who had access to it and where it could have gone, but ultimately it’s not watertight to say that, you know, we can ensure that all the data is gone forever,” he said.

Source: Cambridge Analytica whistleblower: Facebook data could have come from more than 87 million users – CNNPolitics

Yes, Cops Are Now Opening iPhones With Dead People’s Fingerprints

Separate sources close to local and federal police investigations in New York and Ohio, who asked to remain anonymous as they weren’t authorized to speak on record, said it was now relatively common for fingerprints of the deceased to be depressed on the scanner of Apple iPhones, devices which have been wrapped up in increasingly powerful encryption over recent years. For instance, the technique has been used in overdose cases, said one source. In such instances, the victim’s phone could contain information leading directly to the dealer.

And it’s entirely legal for police to use the technique, even if there might be some ethical quandaries to consider. Marina Medvin, owner of Medvin Law, said that once a person is deceased, they no longer have a privacy interest in their dead body. That means they no longer have standing in court to assert privacy rights.

Relatives or other interested parties have little chance of stopping cops using fingerprints or other body parts to access smartphones too. “Once you share information with someone, you lose control over how that information is protected and used. You cannot assert your privacy rights when your friend’s phone is searched and the police see the messages that you sent to your friend. Same goes for sharing information with the deceased – after you released information to the deceased, you have lost control of privacy,” Medvin added.

Police know it too. “We do not need a search warrant to get into a victim’s phone, unless it’s shared owned,” said Ohio police homicide detective Robert Cutshall, who worked on the Artan case. In previous cases detailed by Forbes, police have required warrants to use the fingerprints of the living on their iPhones.

[…]

Police are now looking at how they might use Apple’s Face ID facial recognition technology, introduced on the iPhone X. And it could provide an easier path into iPhones than Touch ID.

Marc Rogers, researcher and head of information security at Cloudflare, told Forbes he’d been poking at Face ID in recent months and had discovered it didn’t appear to require the visage of a living person to work. Whilst Face ID is supposed to use your attention in combination with natural eye movement, so fake or non-moving eyes can’t unlock devices, Rogers found that the tech can be fooled simply using photos of open eyes. That was something also verified by Vietnamese researchers when they claimed to have bypassed Face ID with specially-created masks in November 2017, said Rogers.

Secondly, Rogers discovered this was possible from many angles and the phone only seemed to need to see one open eye to unlock. “In that sense it’s easier to unlock than Touch ID – all you need to do is show your target his or her phone and the moment they glance it unlocks,” he added. Apple declined to comment for this article.

Source: Yes, Cops Are Now Opening iPhones With Dead People’s Fingerprints