$0.75 – about how much Cambridge Analytica paid per voter in bid to micro-target their minds, internal docs reveal

Cambridge Analytica bought psychological profiles on individual US voters, costing roughly 75 cents to $5 apiece, each crafted using personal information plundered from millions of Facebook accounts, according to newly released internal documents.

Over the course of the past two weeks, whistleblower Chris Wylie has made a series of claims against his former employer, Cambridge Analytica, and its parent organizations SCL Elections and SCL Group.

He has alleged CA drafted in university academic Dr Aleksander Kogan to help micro-target voters using their personal information harvested from Facebook, and that the Vote Leave campaign in the UK’s Brexit referendum “cheated” election spending limits by funneling money to Canadian political ad campaign biz AggregateIQ through a number of smaller groups.

Cambridge Analytica has denied using Facebook-sourced information in its work for Donald Trump’s US election campaign, and dubbed the allegations against it as “completely unfounded conspiracy theories.”

A set of internal CA files released Thursday by Britain’s House of Commons’ Digital, Culture, Media and Sport Select Committee includes contracts and email exchanges, plus micro-targeting strategies and case studies boasting of the organization’s influence in previous international campaigns.

Among them is a contract, dated June 4, 2014, revealing a deal struck between SCL Elections and Kogan’s biz Global Science Research, referred to as GS in the documents. It showed that Kogan was commissioned by SCL to build up psychological profiles of people, using data slurped from their Facebook accounts by a quiz app, and match them to voter records obtained by SCL.

The app was built by GS, installed by some 270,000 people, and was granted access to their social network accounts and those of their friends, up to 50 million of them. The information was sold to Cambridge Analytica by GS.

[…]

GS’s fee was a nominal £3.14, plus up to $5 per person during the trial stage. The maximum payment would have been $150,000 for 30,000 records.

The price tag for the full sample was to be established after the trial, the document stated, but the total fee was not to exceed $0.75 per matched record. The total cost of the full sample stage would have been up to $1.5m for all two million matches. Wylie claimed roughly $1m was spent in the end.
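
As a quick sanity check, the caps quoted above do add up as described; a trivial back-of-the-envelope sketch using only the per-record rates and volumes cited in the documents:

```python
# Back-of-the-envelope check of the fee caps described in the SCL/GS contract.
trial_records = 30_000
trial_rate_usd = 5.00            # up to $5 per person during the trial stage
full_sample_records = 2_000_000
full_rate_cap_usd = 0.75         # fee not to exceed $0.75 per matched record

print(f"Trial-stage cap: ${trial_records * trial_rate_usd:,.0f}")            # $150,000
print(f"Full-sample cap: ${full_sample_records * full_rate_cap_usd:,.0f}")   # $1,500,000
```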

[…]

Elsewhere in the cache are documents relating to the relationship between AggregateIQ and SCL.

One file laid out an AIQ contract to develop a platform called Ripon – which SCL, and later CA, are said to have used for micro-targeting political campaigns – in the run-up to the 2014 US mid-term elections. Although this document wasn’t signed, it indicated the first payment to AIQ was made on April 7, 2014: a handsome sum of $25,000 (CA$27,000, £18,000).

[…]

A separate contract showed the two companies had worked together before this. It was dated November 25, 2013, and set out a deal in which AIQ would “assist” SCL by creating a constituent relationship management (CRM) system and helping with the “acquisition of online data” for a political campaign in Trinidad and Tobago.

The payment for this work was $50,000, followed by three further installments of $50,000. The document is signed by AIQ’s cofounders: president Zackary Massingham and chief operating officer Jeff Silvester. Project deliverables included data mapping and the use of behavioral datasets from qualified sources of data “that illustrate browsing activity, online behaviour and social contributions.”

A large section in the document, under the main heading for CRM deliverables, between sections labelled “reports” and “markup and CMS integration design / HTML markup,” is heavily redacted.

The document dump also revealed discussions between Rebekah Mercer, daughter of billionaire CA backer Robert Mercer, and Trump strategist Steve Bannon, about how to manage the involvement of UK-based Cambridge Analytica – a foreign company – with American elections and US election law, as well as praise for SCL from the UK’s Ministry of Defence.

Source: $0.75 – about how much Cambridge Analytica paid per voter in bid to micro-target their minds, internal docs reveal • The Register

Under Armour Data Breach: 150 Million MyFitnessPal Accounts Hacked

Under Armour Inc., joining a growing list of corporate victims of hacker attacks, said about 150 million user accounts tied to its MyFitnessPal nutrition-tracking app were breached earlier this year.

An unauthorized party stole data from the accounts in late February, Under Armour said on Thursday. It became aware of the breach earlier this week and took steps to alert users about the incident, the company said.

Shares of Under Armour fell as much as 4.6 percent to $15.59 in late trading following the announcement. The stock had been up 13 percent this year through Thursday’s close.

The data didn’t include payment-card information or government-issued identifiers, including Social Security numbers and driver’s license numbers. Still, user names, email addresses and password data were taken. And the sheer scope of the attack — affecting a user base that’s bigger than the population of Japan — would make it one of the larger breaches on record.

Source: Under Armour Data Breach: 150 Million MyFitnessPal Accounts Hacked | Fortune

Cambridge Analytica’s daddy biz SCL had ‘routine access’ to UK secrets

Cambridge Analytica’s parent biz had “routine access to UK secret information” as part of training it offered to the UK’s psyops group, according to documents released today.

A letter, published as part of a cache handed over to MPs by whistleblower Chris Wylie, details work that Strategic Communications Laboratories (SCL) carried out for the 15 (UK) Psychological Operations Group.

Dated 11 January 2012, it said that the group – which has since been subsumed into the 77 Brigade unit – received training from SCL, first as part of a commission and then on a continued basis without additional cost to the Ministry of Defence.

The author’s name is redacted, but the letter stated that SCL was a “UK List ‘X’ accredited company cleared to routine access to UK secret information”.

It said that five training staff from SCL provided the group with measurement of effect training over the course of two weeks, with students including Defence Science and Technology Ltd scientists, deploying military officers and senior soldiers.

It said that, because of SCL’s clearance, the final part of the package “was a classified case study from current operations in Helmand, Afghanistan”.

The author commented: “Such contemporary realism added enormous value to the course.”

The letter went on to say that, since delivery, SCL had continued to support the group “without additional charge to the MoD”, which involved “further testing of the trained product on operations in Libya and Afghanistan”.

Finally, the document’s author offered their recommendation for the service provided by SCL.

It said that, although the MoD is “officially disbarred from offering commercial endorsement”, the author would have “no hesitation in inviting SCL to tender for further contracts of this nature”.

They added: “Indeed it is my personal view that there are very few, if any, other commercial organisations that can deliver proven training and education of this very specialist nature.”

Source: Cambridge Analytica’s daddy biz had ‘routine access’ to UK secrets • The Register

Grindr’s API Surrendered Location Data to a Third-Party Website—Even After Users Opted Out

A website that allowed users of Grindr’s gay-dating app to see who blocked them on the service says that by using the company’s API it was able to view unread messages, email addresses, deleted photos, and—perhaps most troubling—location data, according to a report published Wednesday.

The website, C*ckblocked, boasts of being the “first and only way to see who blocked you on Grindr.” The website’s owner, Trever Faden, told NBC that, by using Grindr’s API, he was able to access a wealth of personal information, including the location data of users—even for those who had opted to hide their locations.

“One could, without too much difficulty or even a huge amount of technological skill, easily pinpoint a user’s exact location,” Faden told NBC. But before he could access this information, Grindr users first had to supply C*ckblocked with their usernames and passwords, meaning that they voluntarily surrendered access to their accounts.

Grindr said that, once notified by Faden, it moved quickly to resolve the issue. The API that allowed C*ckblocked to function was patched on March 23rd, according to the website.

Source: Grindr’s API Surrendered Location Data to a Third-Party Website—Even After Users Opted Out

SpyParty – A Subtle Game About Human Behavior

SpyParty is a tense competitive spy game set at a high society party. It’s about subtle behavior, perception, and deception, instead of guns, car chases, and explosions. One player is the Spy, trying to accomplish missions while blending into the crowd. The other player is the Sniper, who has one bullet with which to find and terminate the Spy!

Source: SpyParty – A Subtle Game About Human Behavior

Mozilla launches Facebook container extension

This extension helps you control more of your web activity from Facebook by isolating your identity into a separate container. This makes it harder for Facebook to track your activity on other websites via third-party cookies.

Rather than stop using a service you find valuable and miss out on those adorable photos of your nephew, we think you should have tools to limit what data others can collect about you. That includes us: Mozilla does not collect data from your use of the Facebook Container extension. We only know the number of times the extension is installed or removed.

When you install this extension it will delete your Facebook cookies and log you out of Facebook. The next time you visit Facebook it will open in a new blue-colored browser tab (aka “container tab”). In that tab you can log in to Facebook and use it like you normally would. If you click on a non-Facebook link or navigate to a non-Facebook website in the URL bar, these pages will load outside of the container.

Source: Facebook Container Extension: Take control of how you’re being tracked | The Firefox Frontier

The Interstitium Is Important, But Don’t Call It An Organ (Yet)

In brief: It’s called the interstitium, a layer of fluid-filled pockets hemmed in by collagen, and it can be found all over our bodies, from skin to muscles to our digestive system. The interstitium likely acts as a kind of shock absorber for the rest of our interior bits and bobs, and the workings of the fluid itself could help explain everything from tumor growth to how cells move within our bodies. The authors stop short of saying “new organ,” but the word is certainly on everyone’s lips.

Is it just me, or are you feeling a bit of deja vu?

Well, maybe it’s just me, but that’s because I’ve been in this situation before. You see, just over a year ago, researchers announced that they’d discovered a different “new” organ — the mesentery. That particular collection of bodily tissue is a fan-shaped fold that helps hold our guts in place. It had been known about for centuries, but only recently discovered to be large and important enough to justify calling it an organ. It was to be the body’s 79th, but that number is entirely arbitrary.

As we discovered here at Discover, the definition of an organ is hardly settled (and we’re aware of what a church organ is, thankyouverymuch). As became apparent during the whole mesentery craze, there’s no real definition for what an organ actually is. And the human body doesn’t have 79 organs, or 80 organs, or 1,000 organs, because that number can change drastically depending on the definition. And you can bet scientists debate what an organ actually is.

“It’s a silly number,” said Paul Neumann, a professor of medicine at Dalhousie University in Canada and member of the Federative International Programme for Anatomical Terminology, in a Discover article from last year. “If a bone is an organ, there’s 206 organs right there. No two anatomists will agree on a list of organs in the body.”

Calling the interstitium a new organ, then, is a bit of a stretch. It’s there, it’s certainly important, but we need a better idea of what an organ is before we can start labeling things as such.

There is a definition of sorts, but it’s got more wiggle room than your large intestine. An organ is composed of two or more tissues, is self-contained and performs a specific function, according to most definitions you get by Googling “what is an organ?” But there’s no governing body that explicitly determines what an organ is, and there’s no official definition. Things like skin, nipples, eyeballs, mesenteries and more have crossed into organ-dom and back throughout history as anatomists debated the definition.

Source: The Interstitium Is Important, But Don’t Call It An Organ (Yet)

Wylie: It’s possible that the Facebook app is listening to you

During an appearance before a committee of U.K. lawmakers today, Cambridge Analytica whistleblower Christopher Wylie breathed new life into longstanding rumors that the Facebook app listens to its users in order to target advertisements. Damian Collins, a member of parliament who chaired the committee, asked whether the Facebook app might listen to what users are discussing and use it to prioritize certain ads.

But, Wylie said in a meandering reply, it’s possible that Facebook and other smartphone apps are listening in for reasons other than speech recognition. Specifically, he said, they might be trying to ascertain what type of environment a user is in in order to “improve the contextual value of the advertising itself.”

“There’s audio that could be useful just in terms of, are you in an office environment, are you outside, are you watching TV, what are you doing right now?” Wylie said, without elaborating on how that information could help target ads.

Facebook has long denied that its app analyzes audio in order to customize ads. But users have often reported mentioning a product that they’ve never expressed an interest in online — and then being inundated with online ads for it. Reddit users, in particular, spend time collecting what they purport to be evidence that Facebook is listening to users in a particular way, such as “micro-samples” of a few seconds rather than full-on continuous natural language processing.

Source: Wylie: It’s possible that the Facebook app is listening to you | The Outline

The Channel 4 video exposés on Cambridge Analytica, AggregateIQ, losing data, electioneering in Brexit, the US, Kenya, Nigeria and many more, all here

Data, Democracy and Dirty Tricks playlist

Dutch government pretends to think about referendum result against big brother unlimited surveillance, ignores it completely.

Basically, not only will they allow a huge number of different agencies to tap your internet and phone and store it without any judicial procedures, checks or balances, they will also allow these agencies to share the data with whomever they want, including foreign agencies. Surprisingly, the Dutch people voted against these far-reaching breaches of privacy, so the government said it had thought about it and would amend the law in six tiny places which completely miss the point and the problems people have with their privacy being destroyed.

Source: Kabinet scherpt Wet op de inlichtingen- en veiligheidsdiensten 2017 aan | Nieuwsbericht | Defensie.nl

Trustwave Global IT Security Report Summarised

Hackers have moved away from simple point-of-sale (POS) terminal attacks to more refined assaults on corporations’ head offices.

An annual report from security firm Trustwave out today highlighted increased sophistication of web app hacking and social engineering tactics on the part of miscreants.

Half of the incidents investigated involved corporate and internal networks (up from 43 per cent in 2016), followed by e-commerce environments at 30 per cent. Incidents affecting POS systems decreased by more than a third to 20 per cent of the total. This reflects increased attack sophistication, with attackers homing in on larger service providers and franchise head offices rather than the smaller, high-volume targets of previous years.

In corporate network environments, phishing and social engineering at 55 per cent was the leading method of compromise followed by malicious insiders at 13 per cent and remote access at 9 per cent. “CEO fraud”, a social engineering scam encouraging executives to authorise fraudulent money transactions, continues to increase, Trustwave added.

Targeted web attacks are becoming prevalent and much more sophisticated. Many breach incidents show signs of careful planning by cybercriminals probing for weak packages and tools to exploit. Cross-site scripting (XSS) was involved in 40 per cent of attack attempts, followed by SQL Injection (SQLi) at 24 per cent, Path Traversal at 7 per cent, Local File Inclusion (LFI) at 4 per cent, and Distributed Denial of Service (DDoS) at 3 per cent.
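
For readers who don’t live in this world, the gap between the top two attack classes above usually comes down to how untrusted input reaches an interpreter. A minimal, generic illustration of the SQL injection case (not taken from the Trustwave report), using Python’s built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: untrusted input is concatenated into the SQL text,
# so the payload rewrites the query and matches every row.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("concatenated query returned:", rows)   # leaks the admin row

# Safer: a parameterised query treats the input purely as data.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterised query returned:", rows)  # returns nothing
```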

Last year also witnessed a marked increase, up 9.5 per cent, in compromises at businesses that deliver IT services including web-hosting providers, POS integrators and help-desk providers. A breach of just one provider opens the gates to a multitude of new targets. In 2016 service provider compromises did not even register in the statistics.

Although down from the previous year, payment card data at 40 per cent still reigns supreme in terms of data types targeted in a breach. Surprisingly, incidents targeting hard cash were on the rise at 11 per cent, mostly due to fraudulent ATM transaction breaches enabled by the compromise of account management systems at financial institutions.

North America still led in data breaches investigated by Trustwave at 43 per cent followed by the Asia Pacific region at 30 per cent, Europe, Middle East and Africa (EMEA) at 23 per cent and Latin America at 4 per cent. The retail sector suffered the most breach incidents at 16.7 per cent followed by the finance and insurance industry at 13.1 per cent and hospitality at 11.9 per cent.

Trustwave gathered and analysed real-world data from hundreds of breach investigations the company conducted in 2017 across 21 countries. This data was added to billions of security and compliance events logged each day across the global network of Trustwave operations centres, along with data from tens of millions of network vulnerability scans, thousands of web application security scans, tens of millions of web transactions, penetration tests and more.

All the web applications tested displayed at least one vulnerability, with 11 as the median number detected per application. The majority (85.9 per cent) of web application vulnerabilities involved session management, allowing an attacker to eavesdrop on a user session and seize sensitive information.

Source: Gosh, these ‘hacker’ nerds are only getting more sophisticated • The Register

Facebook Blames a ‘Bug’ for Not Deleting Your Seemingly Deleted Videos

Did you ever record a video on Facebook to post directly to your friend’s wall, only to discard the take and film a new version? You may have thought those embarrassing draft versions were deleted, but Facebook kept a copy. The company is blaming it on a “bug” and swears that it’s going to delete those discarded videos now. They pinkie promise this time.

Last week, New York’s Select All broke the story that the social network was keeping the seemingly deleted old videos. The continued existence of the draft videos was discovered when several users downloaded their personal Facebook archives—and found numerous videos they never published. Today, Select All got a statement from Facebook blaming the whole thing on a “bug.” From Facebook via New York:

We investigated a report that some people were seeing their old draft videos when they accessed their information from our Download Your Information tool. We discovered a bug that prevented draft videos from being deleted. We are deleting them and apologize for the inconvenience. We appreciate New York Magazine for bringing the issue to our attention.

It was revealed last month that the data-harvesting firm (and apparent bribery consultants) Cambridge Analytica had acquired the information of about 50 million Facebook users and abused that data to help President Trump get elected. Specifically, the company was exploiting the anger of voters through highly-targeted advertising. And in the wake of the ensuing scandal, people have been learning all kinds of crazy things about Facebook.

Facebook users have been downloading some of the data that the social media behemoth keeps on them and it’s not pretty. For example, Facebook has kept detailed call logs from users with Android phones. The company says that Android users had to opt-in for the feature, but that’s a bullshit cop-out when you take a look at what the screen for “opting in” actually looks like.

Source: Facebook Blames a ‘Bug’ for Not Deleting Your Seemingly Deleted Videos

T-Mobile Austria stores passwords as plain text

A customer asked whether rumors were true that T-Mobile Austria was storing customer passwords in plain text, leaving the credentials like sitting ducks for hackers. Whoever was manning T-Mobile Austria’s Twitter account confirmed that this was the case, but said there was no need to worry because “our security is amazingly good.”

That line is going to bite T-Mobile Austria in the backside, if or when they next get hacked. To be fair, it’s late at night in Europe and the Twitter account was probably being handled by an overworked social media worker, but it’s not a good look. Especially when people started digging further and found various security shortcomings. The whole thread is a mind job.

But that doesn’t excuse the plain-text password storage.
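
For contrast, storing a salted, deliberately slow hash instead of the password itself takes only a few lines in most languages. A minimal sketch using Python’s standard library; this is a generic illustration of the technique, not a claim about how any carrier’s systems actually work:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # PBKDF2 work factor; tune for your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived_key) to store instead of the plain-text password."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(key, stored_key)  # constant-time comparison

salt, key = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, key))  # True
print(verify_password("guess", salt, key))                         # False
```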

Source: T-Mobile Austria stores passwords as plain text, Outlook gets message crypto, and more • The Register

‘Big Brother’ in India Requires Fingerprint Scans for Food, Phones and Finances

NEW DELHI — Seeking to build an identification system of unprecedented scope, India is scanning the fingerprints, eyes and faces of its 1.3 billion residents and connecting the data to everything from welfare benefits to mobile phones.

Civil libertarians are horrified, viewing the program, called Aadhaar, as Orwell’s Big Brother brought to life. To the government, it’s more like “big brother,” a term of endearment used by many Indians to address a stranger when asking for help.

For other countries, the technology could provide a model for how to track their residents. And for India’s top court, the ID system presents unique legal issues that will define what the constitutional right to privacy means in the digital age.

To Adita Jha, Aadhaar was simply a hassle. The 30-year-old environmental consultant in Delhi waited in line three times to sit in front of a computer that photographed her face, captured her fingerprints and snapped images of her irises. Three times, the data failed to upload. The fourth attempt finally worked, and she has now been added to the 1.1 billion Indians already included in the program.

[…]

The poor must scan their fingerprints at the ration shop to get their government allocations of rice. Retirees must do the same to get their pensions. Middle-school students cannot enter the water department’s annual painting contest until they submit their identification.

In some cities, newborns cannot leave the hospital until their parents sign them up. Even leprosy patients, whose illness damages their fingers and eyes, have been told they must pass fingerprint or iris scans to get their benefits.

The Modi government has also ordered Indians to link their IDs to their cellphone and bank accounts. States have added their own twists, like using the data to map where people live. Some employers use the ID for background checks on job applicants.

[…]

Although the system’s core fingerprint, iris and face database appears to have remained secure, at least 210 government websites have leaked other personal data — such as name, birth date, address, parents’ names, bank account number and Aadhaar number — for millions of Indians. Some of that data is still available with a simple Google search.

As Aadhaar has become mandatory for government benefits, parts of rural India have struggled with the internet connections necessary to make Aadhaar work. After a lifetime of manual labor, many Indians also have no readable prints, making authentication difficult. One recent study found that 20 percent of the households in Jharkhand state had failed to get their food rations under Aadhaar-based verification — five times the failure rate of ration cards.

Source: ‘Big Brother’ in India Requires Fingerprint Scans for Food, Phones and Finances – The New York Times

NUC, NUC! Who’s there? Intel, warning you to kill a buggy keyboard app

Intel has made much of its NUC and Compute Stick mini-PCs as a way to place computers in out-of-the-way places like digital signage.

Such locations aren’t the kind of spots where keyboards and pointing devices can be found, so Intel sweetened the deal by giving the world an Android and iOS app called the “Intel Remote Keyboard” to let you mimic a keyboard and mouse from afar.

But now Chipzilla’s canned the app.

The reason is three nasty bugs that let attackers “inject keystrokes as a local user”, “inject keystrokes into another remote keyboard session” and “execute arbitrary code as a privileged user.” The bugs are CVE-2018-3641, CVE-2018-3645 and CVE-2018-3638 respectively.

Rather than patch the app, Intel’s killed it and “recommends that users of the Intel® Remote Keyboard uninstall it at their earliest convenience.”

The app’s already gone from the Play and App Stores (but Google’s cached pages about it for Android and iOS in case you fancy a look).

The Android version of the app’s been downloaded at least 500,000 times, so this is going to inconvenience plenty of people … at least until they get RDP working on Windows boxes and VNC running under Linux. The greater impact may be on Intel’s reputation for security, which has already taken a belting thanks to the Meltdown/Spectre mess.

Source: NUC, NUC! Who’s there? Intel, warning you to kill a buggy keyboard app • The Register

Center Of The Milky Way Has Thousands Of Black Holes, Study Shows

The supermassive black hole lurking at the center of our galaxy appears to have a lot of company, according to a new study that suggests the monster is surrounded by about 10,000 other black holes.

For decades, scientists have thought that black holes should sink to the center of galaxies and accumulate there, says Chuck Hailey, an astrophysicist at Columbia University. But scientists had no proof that these exotic objects had actually gathered together in the center of the Milky Way.

“This is just kind of astonishing that you could have a prediction for such a large number of objects and not find any evidence for them,” Hailey says.

He and his colleagues recently went hunting for black holes, using observations of the galactic center made by a NASA telescope called the Chandra X-ray Observatory.

Isolated black holes are almost impossible to detect, but black holes that have a companion — an orbiting star — interact with that star in ways that allow the pair to be spotted by telltale X-ray emissions. The team searched for those signals in a region stretching about three light-years out from our galaxy’s central supermassive black hole.

“So we’re looking at the very, very, very center of our galaxy. It’s a place that’s filled with a huge amount of gas and dust, and it’s jammed with a huge number of stars,” Hailey says.

What they found there: a dozen black holes paired up with stars, according to a report in the journal Nature.

Finding so many in such a small region is significant, because until now scientists have found evidence of only about five dozen black holes throughout the entire galaxy, says Hailey, who points out that our galaxy is 100,000 light-years across. (For reference, one light-year is just under 5.88 trillion miles.)

What’s more, the very center of our galaxy surely has far more than these dozen black holes that were just detected. The researchers used what’s known about black holes to extrapolate from what they saw to what they couldn’t see. Their calculations show that there must be several hundred more black holes paired with stars in the galactic center, and about 10,000 isolated black holes.

“I think this is a really intriguing result,” says Fiona Harrison, an astrophysicist at Caltech. She cautions that there are a lot of uncertainties and the team has found just a small number of X-ray sources, “but they have the right distribution and the right characteristics to be a tracer of this otherwise completely hidden population.”

Source: Center Of The Milky Way Has Thousands Of Black Holes, Study Shows : The Two-Way : NPR

Berkeley Lab Scientists Print All-Liquid 3-D Structures

Scientists from the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a way to print 3-D structures composed entirely of liquids. Using a modified 3-D printer, they injected threads of water into silicone oil — sculpting tubes made of one liquid within another liquid.

They envision their all-liquid material could be used to construct liquid electronics that power flexible, stretchable devices. The scientists also foresee chemically tuning the tubes and flowing molecules through them, leading to new ways to separate molecules or precisely deliver nanoscale building blocks to under-construction compounds.

The researchers have printed threads of water between 10 microns and 1 millimeter in diameter, and in a variety of spiraling and branching shapes up to several meters in length. What’s more, the material can conform to its surroundings and repeatedly change shape.

“It’s a new class of material that can reconfigure itself, and it has the potential to be customized into liquid reaction vessels for many uses, from chemical synthesis to ion transport to catalysis,” said Tom Russell, a visiting faculty scientist in Berkeley Lab’s Materials Sciences Division. He developed the material with Joe Forth, a postdoctoral researcher in the Materials Sciences Division, as well as other scientists from Berkeley Lab and several other institutions. They report their research March 24 in the journal Advanced Materials.

The material owes its origins to two advances: learning how to create liquid tubes inside another liquid, and then automating the process.

[Schematic: the printing of water in oil using a nanoparticle supersoap. Gold nanoparticles in the water combine with polymer ligands in the oil to form an elastic film (nanoparticle supersoap) at the interface, locking the structure in place. Credit: Berkeley Lab]

For the first step, the scientists developed a way to sheathe tubes of water in a special nanoparticle-derived surfactant that locks the water in place. The surfactant, essentially soap, prevents the tubes from breaking up into droplets. Their surfactant is so good at its job, the scientists call it a nanoparticle supersoap.

The supersoap was achieved by dispersing gold nanoparticles into water and polymer ligands into oil. The gold nanoparticles and polymer ligands want to attach to each other, but they also want to remain in their respective water and oil mediums. The ligands were developed with help from Brett Helms at the Molecular Foundry, a DOE Office of Science User Facility located at Berkeley Lab.

In practice, soon after the water is injected into the oil, dozens of ligands in the oil attach to individual nanoparticles in the water, forming a nanoparticle supersoap. These supersoaps jam together and vitrify, like glass, which stabilizes the interface between oil and water and locks the liquid structures in position.

“This stability means we can stretch water into a tube, and it remains a tube. Or we can shape water into an ellipsoid, and it remains an ellipsoid,” said Russell. “We’ve used these nanoparticle supersoaps to print tubes of water that last for several months.”

Next came automation. Forth modified an off-the-shelf 3-D printer by removing the components designed to print plastic and replacing them with a syringe pump and needle that extrudes liquid. He then programmed the printer to insert the needle into the oil substrate and inject water in a predetermined pattern.

“We can squeeze liquid from a needle, and place threads of water anywhere we want in three dimensions,” said Forth. “We can also ping the material with an external force, which momentarily breaks the supersoap’s stability and changes the shape of the water threads. The structures are endlessly reconfigurable.”
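
The paper itself isn’t quoted here, so as a purely illustrative sketch of what “a predetermined pattern” for a syringe-pump toolpath could look like, the following generates waypoints for a simple helical thread; the dimensions, function name and coordinate-list approach are assumptions for illustration, not details from the Berkeley Lab work:

```python
import math

def helix_waypoints(radius_mm=5.0, pitch_mm=1.0, turns=4, points_per_turn=60):
    """Generate (x, y, z) coordinates in millimetres along a helix, the kind of
    spiralling path a modified 3-D printer could trace while extruding water."""
    waypoints = []
    total_points = turns * points_per_turn
    for i in range(total_points + 1):
        theta = 2 * math.pi * i / points_per_turn
        x = radius_mm * math.cos(theta)
        y = radius_mm * math.sin(theta)
        z = pitch_mm * theta / (2 * math.pi)   # rise one pitch per full turn
        waypoints.append((x, y, z))
    return waypoints

path = helix_waypoints()
print(f"{len(path)} waypoints, ending at z = {path[-1][2]:.1f} mm")
```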

Source: Berkeley Lab Scientists Print All-Liquid 3-D Structures

Jaywalkers under surveillance in Shenzhen soon to be punished via text messages

Intellifusion, a Shenzhen-based AI firm that provides technology to the city’s police to display the faces of jaywalkers on large LED screens at intersections, is now talking with local mobile phone carriers and social media platforms such as WeChat and Sina Weibo to develop a system where offenders will receive personal text messages as soon as they violate the rules, according to Wang Jun, the company’s director of marketing solutions.

“Jaywalking has always been an issue in China and can hardly be resolved just by imposing fines or taking photos of the offenders. But a combination of technology and psychology … can greatly reduce instances of jaywalking and will prevent repeat offences,” Wang said.

[…]

For the current system in Shenzhen, Intellifusion installed cameras with 7-megapixel resolution to capture photos of pedestrians crossing the road against traffic lights. Facial recognition technology identifies the individual from a database and displays a photo of the jaywalking offence, the family name of the offender and part of their government identification number on large LED screens above the pavement.
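
The report doesn’t say exactly how the ID numbers are truncated for display; as a purely hypothetical sketch of that kind of partial masking (the function, format and example number are invented for illustration):

```python
def mask_id(id_number: str, visible_prefix: int = 3, visible_suffix: int = 2) -> str:
    """Hide the middle of an identification number, keeping only the ends visible."""
    hidden = len(id_number) - visible_prefix - visible_suffix
    if hidden <= 0:
        return "*" * len(id_number)
    return id_number[:visible_prefix] + "*" * hidden + id_number[-visible_suffix:]

print(mask_id("440301199001011234"))  # '440' + 13 asterisks + '34' (made-up number)
```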

In the 10 months to February this year, as many as 13,930 jaywalking offenders were recorded and displayed on the LED screen at one busy intersection in Futian district, the Shenzhen traffic police announced last month.

Taking it a step further, in March the traffic police launched a webpage which displays photos, names and partial ID numbers of jaywalkers.

These measures have effectively reduced the number of repeat offenders, according to Wang.

Source: Jaywalkers under surveillance in Shenzhen soon to be punished via text messages | South China Morning Post

Wow, that’s a scary way to scan your entire population

AI Imagines Nude Paintings as Terrifying Pools of Melting Flesh

When Robbie Barrat trained an AI to study and reproduce classical nude paintings, he expected something at least recognizable. What the AI produced instead was unfamiliar and unsettling, but still intriguing. The “paintings” look like flesh-like ice cream, spilling into pools that only vaguely recall a woman’s body. Barrat told Gizmodo these meaty blobs, disturbing and unintentional as they are, may impact both art and AI.

“Before, you would be feeding the computer a set of rules it would execute perfectly, with no room for interpretation by the computer,” Barrat said via email. “Now with AI, it’s all about the machine’s interpretation of the dataset you feed it—in this case how it (strangely) interprets the nude portraits I fed it.”

AI’s influence is certainly more pronounced in this project than in most computer generated art, but while that wasn’t what Barrat intended, he says the results were much better this way.

“Would I want the results to be more realistic? Absolutely not,” he said. “I want to get AI to generate new types of art we haven’t seen before; not force some human perspective on it.”

Barrat explained the process of training the AI to produce imagery of a curving body from some surreal parallel universe:

“I used a dataset of thousands of nude portraits I scraped, along with techniques from a new paper that recently came out called ‘Progressive Growing of GANs’ to generate the images,” he said. “The generator tries to generate paintings that fool the discriminator, and the discriminator tries to learn how to tell the difference between ‘fake’ paintings that the generator feeds it, and real paintings from the dataset of nude portraits.”

The Francis Bacon-esque paintings were purely serendipitous.

“What happened with the nude portraits is that the generator figured it could just feed the discriminator blobs of flesh, and the discriminator wasn’t able to tell the difference between strange blobs of flesh and humans, so since the generator could consistently fool the discriminator by painting these strange forms of flesh instead of realistic nude portraits; both components stopped learning and getting better at painting.”
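
For readers unfamiliar with the generator/discriminator dynamic Barrat describes, here is a heavily simplified adversarial training loop in PyTorch. It is a generic GAN sketch over made-up tensors, not the Progressive Growing of GANs method from the paper he cites, and all sizes and names are placeholders:

```python
import torch
import torch.nn as nn

LATENT, IMG = 64, 28 * 28          # placeholder sizes, not the paper's

G = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_batch = torch.rand(32, IMG) * 2 - 1   # stand-in for real paintings

for step in range(200):
    # Discriminator: learn to tell real paintings from generated ones.
    z = torch.randn(32, LATENT)
    fake = G(z).detach()                   # don't backprop into G on this step
    d_loss = loss_fn(D(real_batch), torch.ones(32, 1)) + \
             loss_fn(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: learn to produce images the discriminator labels as real.
    z = torch.randn(32, LATENT)
    g_loss = loss_fn(D(G(z)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("final losses:", d_loss.item(), g_loss.item())
```

If the generator finds a degenerate output family the discriminator cannot reject, as Barrat describes with the flesh blobs, both losses flatten out and neither network improves further.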

As Barrat pointed out on Twitter, this method of working with a computer program has some art history precedent. Having an AI execute the artist’s specific directions is reminiscent of instructional art—a conceptual art technique, best exemplified by Sol LeWitt, where artists provide specific instructions for others to create the artwork. (For example: Sol LeWitt’s Wall Drawing, Boston Museum: “On a wall surface, any continuous stretch of wall, using a hard pencil, place fifty points at random. The points should be evenly distributed over the area of the wall. All of the points should be connected by straight lines.”)

Giving the AI limited autonomy to create art may be more than just a novelty; it may eventually lead to a truly new form of generating art with entirely new subjectivities.

“I want to use AI to make its own new and original artworks, not just get AI to mimic things that people were making in the 1600’s.”

Source: AI Imagines Nude Paintings as Terrifying Pools of Melting Flesh

Any social media accounts to declare? US wants travelers to tell

The US Department of State wants to ask visa applicants to provide details on the social media accounts they’ve used in the past five years, as well as telephone numbers, email addresses, and international travel during this period.

The plan, if approved by the Office of Management and Budget, will expand the vetting regime applied to those flagged for extra immigration scrutiny – rolled out last year – to every immigrant visa applicant and to non-immigrant visa applicants such as business travelers and tourists.

The Department of State published its notice of request for public comment in the Federal Register on Friday. The comment process concludes on May 29, 2018.

The notice explains that the Department of State wants to expand the information it collects by adding questions to its Electronic Application for Immigrant Visa and Alien Registration (DS-260).

The online form will provide a list of social media platforms – presumably the major ones – and “requires the applicant to provide any identifiers used by applicants for those platforms during the five years preceding the date of application.”

For social media platforms not on the list, visa applicants “will be given the option to provide information.”

The Department of State says that the form “will be submitted electronically over an encrypted connection to the Department via the internet,” as if to offer reassurance that it will be able to store the data securely.

It’s perhaps worth noting that Russian hackers penetrated the Department of State’s email system in 2014, and in 2016, the State Department’s Office of Inspector General (OIG) gave the agency dismal marks for both its physical and cybersecurity competency.

The Department of State estimates that its revised visa process will affect 710,000 immigrant visa applicants attempting to enter the US; its more limited review of travelers flagged for additional screening only affected an estimated 65,000 people.

But around 10 million non-immigrant visa applicants who seek to come to the US can also look forward to social media screening.

In a statement emailed to The Register, a State Department spokesperson said the proposed changes follow from President Trump’s March 2017 Memorandum and Executive Order 13780 and reflect the need for screening standards to address emerging threats.

“Under this proposal, nearly all US visa applicants will be asked to provide additional information, including their social media identifiers, prior passport numbers, information about family members, and a longer history of past travel, employment, and contact information than is collected in current visa application forms,” the spokesperson said.

The Department of State already collects limited contact information, travel history, family member information, and previous addresses from all visa applicants, the spokesperson said.

Source: Any social media accounts to declare? US wants travelers to tell • The Register

AI predicts your lifespan using activity tracking apps

Researchers can estimate your expected lifespan based on physiological traits like your genes or your circulating blood factors, but that’s not very practical on a grand scale. There may be a shortcut, however: the devices you already have on your body. Russian scientists have crafted an AI-based algorithm that uses the activity tracking from smartphones and smartwatches to estimate your lifespan with far greater precision than past models.

The team used a convolutional neural network to find the “biologically relevant” motion patterns in a large set of US health survey data and correlate that to both lifespans and overall health. It would look for not just step counts, but how often you switch between active and inactive periods — many of the other factors in your life, such as your sleeping habits and gym visits, are reflected in those switches. After that, it was just a matter of applying the understanding to a week’s worth of data from test subjects’ phones. You can even try it yourself through Gero Lifespan, an iPhone app that uses data from Apple Health, Fitbit and Rescuetime (a PC productivity measurement app) to predict your longevity.
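
The paper’s convolutional model isn’t reproduced here, but the “switches between active and inactive periods” the article mentions are easy to picture as a feature. A toy sketch, assuming per-minute step counts as input and an arbitrary activity threshold:

```python
def activity_features(steps_per_minute, active_threshold=20):
    """Summarise a step-count series into the kinds of features the article
    mentions: total steps, time spent active, and active/inactive switches."""
    active = [count >= active_threshold for count in steps_per_minute]
    switches = sum(1 for prev, curr in zip(active, active[1:]) if prev != curr)
    return {
        "total_steps": sum(steps_per_minute),
        "active_minutes": sum(active),
        "activity_switches": switches,
    }

# One hour of made-up data: 30 sedentary minutes, a 20-minute walk, then rest.
sample = [0] * 30 + [90] * 20 + [0] * 10
print(activity_features(sample))
# {'total_steps': 1800, 'active_minutes': 20, 'activity_switches': 2}
```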

This doesn’t provide a full picture of your health, as it doesn’t include your diet, genetics and other crucial factors. Doctors would ideally use both mobile apps and clinical analysis to give you a proper estimate, and the scientists are quick to acknowledge that what you see here isn’t completely ready for medical applications. The AI is still more effective than past approaches, though, and it could be useful for more accurate health risk models that help everything from insurance companies (which already use activity tracking as an incentive) to the development of anti-aging treatments.

Source: AI predicts your lifespan using activity tracking apps

No idea what the percentages are though

Emmanuel Macron Q&A: France’s President Discusses Artificial Intelligence Strategy

On Thursday, Emmanuel Macron, the president of France, gave a speech laying out a new national strategy for artificial intelligence in his country. The French government will spend €1.5 billion ($1.85 billion) over five years to support research in the field, encourage startups, and collect data that can be used, and shared, by engineers. The goal is to start catching up to the US and China and to make sure the smartest minds in AI—hello Yann LeCun—choose Paris over Palo Alto. Directly after his talk, he gave an exclusive and extensive interview, entirely in English, to WIRED Editor-in-Chief Nicholas Thompson about the topic and why he has come to care so passionately about it.

[…]

AI will raise a lot of issues in ethics, in politics, it will question our democracy and our collective preferences. For instance, if you take healthcare: you can totally transform medical care making it much more predictive and personalized if you get access to a lot of data. We will open our data in France. I made this decision and announced it this afternoon. But the day you start dealing with privacy issues, the day you open this data and unveil personal information, you open a Pandora’s Box, with potential use cases that will not be increasing the common good and improving the way to treat you. In particular, it’s creating a potential for all the players to select you. This can be a very profitable business model: this data can be used to better treat people, it can be used to monitor patients, but it can also be sold to an insurer that will have intelligence on you and your medical risks, and could get a lot of money out of this information. The day we start to make such business out of this data is when a huge opportunity becomes a huge risk. It could totally dismantle our national cohesion and the way we live together. This leads me to the conclusion that this huge technological revolution is in fact a political revolution.

When you look at artificial intelligence today, the two leaders are the US and China. In the US, it is entirely driven by the private sector, large corporations, and some startups dealing with them. All the choices they will make are private choices that deal with collective values. That’s exactly the problem you have with Facebook and Cambridge Analytica or autonomous driving. On the other side, Chinese players collect a lot of data driven by a government whose principles and values are not ours. And Europe has not exactly the same collective preferences as US or China. If we want to defend our way to deal with privacy, our collective preference for individual freedom versus technological progress, integrity of human beings and human DNA, if you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution. That’s the condition of having a say in designing and defining the rules of AI. That is one of the main reasons why I want to be part of this revolution and even to be one of its leaders. I want to frame the discussion at a global scale.

[…]

I want my country to be the place where this new perspective on AI is built, on the basis of interdisciplinarity: this means crossing maths, social sciences, technology, and philosophy. That’s absolutely critical. Because at one point in time, if you don’t frame these innovations from the start, a worst-case scenario will force you to deal with this debate down the line. I think privacy has been a hidden debate for a long time in the US. Now, it emerged because of the Facebook issue. Security was also a hidden debate of autonomous driving. Now, because we’ve had this issue with Uber, it rises to the surface. So if you don’t want to block innovation, it is better to frame it by design within ethical and philosophical boundaries. And I think we are very well equipped to do it, on top of developing the business in my country.

But I think as well that AI could totally jeopardize democracy. For instance, we are using artificial intelligence to organize the access to universities for our students. That puts a lot of responsibility on an algorithm. A lot of people see it as a black box, they don’t understand how the student selection process happens. But the day they start to understand that this relies on an algorithm, this algorithm has a specific responsibility. If you want, precisely, to structure this debate, you have to create the conditions of fairness of the algorithm and of its full transparency. I have to be confident for my people that there is no bias, at least no unfair bias, in this algorithm. I have to be able to tell French citizens, “OK, I encouraged this innovation because it will allow you to get access to new services, it will improve your lives—that’s a good innovation to you.” I have to guarantee there is no bias in terms of gender, age, or other individual characteristics, except if this is the one I decided on behalf of them or in front of them. This is a huge issue that needs to be addressed. If you don’t deal with it from the very beginning, if you don’t consider it is as important as developing innovation, you will miss something and at a point in time, it will block everything. Because people will eventually reject this innovation.
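
Macron’s “no unfair bias” requirement has a simple, if crude, practical counterpart: auditing an algorithm’s outcomes per group. A toy sketch of such a check (a demographic-parity style comparison over made-up admission decisions, not anything used by the French system):

```python
from collections import defaultdict

# Made-up (applicant_group, admitted) records for illustration only.
decisions = [("A", True), ("A", False), ("A", True),
             ("B", False), ("B", False), ("B", True)]

totals, admitted = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    admitted[group] += outcome

rates = {g: admitted[g] / totals[g] for g in totals}
print("admission rate per group:", rates)
print("largest gap:", max(rates.values()) - min(rates.values()))
```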

[…]

your algorithm and be sure that this is trustworthy.” The power of consumption society is so strong that it gets people to accept to provide a lot of personal information in order to get access to services largely driven by artificial intelligence on their apps, laptops and so on. But at some point, as citizens, people will say, “I want to be sure that all of this personal data is not used against me, but used ethically, and that everything is monitored. I want to understand what is behind this algorithm that plays a role in my life.” And I’m sure that a lot of startups or labs or initiatives which will emerge in the future, will reach out to their customers and say “I allow you to better understand the algorithm we use and the bias or non-bias.” I’m quite sure that’s one of the next waves coming in AI. I think it will increase the pressure on private players. These new apps or sites will be able to tell people: “OK! You can go to this company or this app because we cross-check everything for you. It’s safe,” or on the contrary: “If you go to this website or this app or this research model, it’s not OK, I have no guarantee, I was not able to check or access the right information about the algorithm”.

Source: Emmanuel Macron Q&A: France’s President Discusses Artificial Intelligence Strategy | WIRED

Card Data Stolen From 5 Million Saks and Lord & Taylor Customers

Saks has been hacked — adding to the already formidable challenges faced by the luxury retailer.

A well-known ring of cybercriminals has obtained more than five million credit and debit card numbers from customers of Saks Fifth Avenue and Lord & Taylor, according to a cybersecurity research firm that specializes in tracking stolen financial data. The data, the firm said, appears to have been stolen using software that was implanted into the cash register systems at the stores and that siphoned card numbers until last month.

The Hudson’s Bay Company, the Canadian corporation that owns both retail chains, confirmed on Sunday that a breach had occurred.

“We have become aware of a data security issue involving customer payment card data at certain Saks Fifth Avenue, Saks Off 5th and Lord & Taylor stores in North America,” the company said in a statement. “We have identified the issue, and have taken steps to contain it. Once we have more clarity around the facts, we will notify our customers quickly and will offer those impacted free identity protection services, including credit and web monitoring.”

Hudson’s Bay said that its investigation was continuing but that its e-commerce platforms appeared to have been unaffected by the breach. The company declined to identify how many customer accounts or stores were affected.

The theft is one of the largest known breaches of a retailer and shows just how difficult it is to secure credit-card transaction systems despite the lessons learned from other large data breaches, including the theft of 40 million card numbers from Target in 2013 and 56 million card numbers from Home Depot in 2014. Last year, Equifax, a credit reporting firm, disclosed that sensitive financial information on 145.5 million Americans had been exposed in a breach of the company’s systems.

The research firm that identified the Saks breach, Gemini Advisory, said on Sunday that a group of Russian-speaking hackers known as Fin7 or JokerStash posted online on Wednesday that it had obtained a cache of five million stolen card numbers, which the thieves called BIGBADABOOM-2. The hackers, who have also hit other retail chains, offered 125,000 of the records for immediate sale.

Fin7 did not disclose where the numbers had been obtained. But the researchers, working in conjunction with banks, analyzed a sample of the records and determined that the card numbers all seemed to have been used at Saks and Lord & Taylor stores, mostly in New York and New Jersey, from May 2017 to March 2018.

Source: Card Data Stolen From 5 Million Saks and Lord & Taylor Customers – The New York Times

You can now use your Netflix subscription anywhere in the EU

‘This content is not available in your country’ – a damn annoying message, especially when you’re paying for it. But a new EU regulation means you can now access Netflix, Amazon Prime and other services from any country in Europe, marking an end to boring evenings in hotels watching BBC World News.

The European Commission’s ‘digital single market strategy’, which last year claimed victory over mobile roaming charges, has now led to it passing the ‘portability regulation’, which will allow users around the EU to use region-locked services more freely while travelling abroad.

Under currently active rules, what content is available in a certain territory is based on the specific local rights that a provider has secured. The new rules allow for what Phil Sherrell, head of international media, entertainment and sport for international law firm Bird and Bird, calls “copyright fiction”, allowing the normal rules to be bent temporarily while a user is travelling.

The regulation was originally passed in June 2017, but the nine-month period given to rights holders and service providers to prepare is about to expire, thereby making the rules enforceable.

From today, content providers, whether their products are videos, music, games, live sport or e-books, will use their subscribers’ details to validate their home country, and let them access all the usual content and services available in that location all around the Union. This is mandatory for all paid services, which are also not permitted to charge extra for the new portability.

Sadly, this doesn’t mean you get extra content from other countries when you use the services back at home, just parity of experience around the EU. Another caveat to the regulation is that services which are offered for free, such as the online offerings of public service broadcasters like the BBC, are not obliged to follow the regulation. These providers instead may opt-in to the rules should they want to compete with their fee charging rivals.

[…]

Brexit of course may mean UK users only benefit from the legislation for a year or so, but that’s as yet unconfirmed. For now though, we can enjoy the simple pleasure of going abroad and, instead of sampling some of the local sights, enjoy the crucial freedom of watching, listening, playing or reading the same things that we could get at home.

Source: You can now use your Netflix subscription anywhere in the EU | WIRED UK

Chrome Is Scanning Files on Your Computer, and People Are Freaking Out

The browser you likely use to read this article scans practically all files on your Windows computer. And you probably had no idea until you read this. Don’t worry, you’re not the only one.

Last year, Google announced some upgrades to Chrome, by far the world’s most used browser—and the one security pros often recommend. The company promised to make internet surfing on Windows computers even “cleaner” and “safer,” adding what The Verge called “basic antivirus features.” What Google did was improve something called Chrome Cleanup Tool for Windows users, using software from cybersecurity and antivirus company ESET.

Tensions around the issue of digital privacy are understandably high following Facebook’s Cambridge Analytica scandal, but as far as we can tell there is no reason to worry here, and what Google is doing is above board.

In practice, Chrome on Windows looks through your computer in search of malware that targets the Chrome browser itself, using ESET’s antivirus engine. If it finds some suspected malware, it sends metadata of the file where the malware is stored, and some system information, to Google. Then, it asks you for permission to remove the suspected malicious file. (You can opt out of sending information to Google by deselecting the “Report details to Google” checkbox.)
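
To make “sends metadata of the file” concrete: metadata in this context means things like a file’s name, size and cryptographic hash rather than its contents. A generic sketch of that distinction, which is not Google’s or ESET’s actual code:

```python
import hashlib
import os

def file_metadata(path: str) -> dict:
    """Collect identifying metadata about a file without uploading its contents."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha256.update(chunk)
    return {
        "name": os.path.basename(path),
        "size_bytes": os.path.getsize(path),
        "sha256": sha256.hexdigest(),
    }

print(file_metadata(__file__))  # only the hash and size would ever leave the machine
```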

[Screenshot: the Chrome pop-up that appears if Chrome Cleanup Tool detects malware on your Windows computer.]

Last week, Kelly Shortridge, who works at cybersecurity startup SecurityScorecard, noticed that Chrome was scanning files in the Documents folder of her Windows computer.

“In the current climate, it really shocked me that Google would so quietly roll out this feature without publicizing more detailed supporting documentation—even just to preemptively ease speculation,” Shortridge told me in an online chat. “Their intentions are clearly security-minded, but the lack of explicit consent and transparency seems to violate their own criteria of ‘user-friendly software’ that informs the policy for Chrome Cleanup [Tool].”

Her tweet got a lot of attention and caused other people in the infosec community—as well as average users such as me—to scratch their heads.

“Nobody likes surprises,” Haroon Meer, the founder at security consulting firm Thinkst, told me in an online chat. “When people fear a big brother, and tech behemoths going too far…a browser touching files it has no business to touch is going to set off alarm bells.”

Now, to be clear, this doesn’t mean Google can, for example, see photos you store on your Windows machine. According to Google, the goal of Chrome Cleanup Tool is to make sure malware doesn’t mess with Chrome on your computer by installing dangerous extensions, or putting ads where they’re not supposed to be.

As the head of Google Chrome security Justin Schuh explained on Twitter, the tool’s “sole purpose is to detect and remove unwanted software manipulating Chrome.” Moreover, he added, the tool only runs weekly, it only has normal user privileges (meaning it can’t go too deep into the system), is “sandboxed” (meaning its code is isolated from other programs), and users have to explicitly click on that box screenshotted above to remove the files and “cleanup.”

In other words, Chrome Cleanup Tool is less invasive than a regular “cloud” antivirus that scans your whole computer (including its more sensitive parts such as the kernel) and uploads some data to the antivirus company’s servers.

But as Johns Hopkins professor Matthew Green put it, most people “are just a little creeped out that Chrome started poking through their underwear drawer without asking.”

That’s the problem here: most users of an internet browser probably don’t expect it to scan and remove files on their computers.

Source: Chrome Is Scanning Files on Your Computer, and People Are Freaking Out – Motherboard

I really don’t think it is the job of the browser to scan your computer at all.

 