Online photos can’t simply be re-published, EU court rules

Internet users must ask for a photographer’s permission before publishing their images, even if the photos were already freely accessible elsewhere online, the European Court of Justice ruled Tuesday.

“The posting on a website of a photograph that was freely accessible on another website with the consent of the author requires a new authorisation by that author,” the EU’s top court said in a statement.

The court had been asked to decide on a case in Germany in which a secondary school student downloaded a photo that was freely accessible on a travel website and used it for a school project. The photo was later posted on the school’s website as well.

The photographer who took the picture argued the school’s use of his photo was a copyright infringement because he only gave the travel site permission to use it, and claimed damages amounting to €400.

The ECJ ruled in the photographer’s favor, saying that under the EU’s Copyright Directive, the school should have gotten his approval before publishing the photo.

Source: Online photos can’t simply be re-published, EU court rules – POLITICO

Hacker swipes Snapchat’s source code, publishes it on GitHub

Snapchat doesn’t just make messages disappear after a period of time. It also does the same to GitHub repositories — especially when they contain the company’s proprietary source code.

So, what happened? Well, let’s start from the beginning. A GitHub user with the handle i5xx, believed to be from the village of Tando Bago in Pakistan’s southeastern Sindh province, created a GitHub repository called Source-Snapchat.

At the time of writing, the repo has been removed by GitHub following a DMCA request from Snap Inc.

[…]

Four days ago, GitHub published a DMCA takedown request from Snap Inc., although it’s likely the request was filed much earlier. GitHub, like many other tech giants including Google, publishes information on DMCA takedown requests for the sake of transparency.

[…]

To the question “Please provide a detailed description of the original copyrighted work that has allegedly been infringed. If possible, include a URL to where it is posted online,” the Snap Inc representative wrote:

“SNAPCHAT SOURCE CODE. IT WAS LEAKED AND A USER HAS PUT IT IN THIS GITHUB REPO. THERE IS NO URL TO POINT TO BECAUSE SNAP INC. DOESN’T PUBLISH IT PUBLICLY.”

The most fascinating part of this saga is that the leak doesn’t appear to be malicious, but rather comes from a researcher who found something, but wasn’t able to communicate his findings to the company.

According to several posts on a Twitter account believed to belong to i5xx, the researcher tried to contact Snapchat, but was unsuccessful.

“The problem we tried to communicate with you but did not succeed In that we decided [sic] Deploy source code,” wrote i5xx.

The account also threatened to re-upload the source code. “I will post it again until you reply :),” he said.

For what it’s worth, it’s pretty easy for security researchers to get in touch with Snap Inc. The company has an active account on HackerOne, where it runs a bug bounty program, and is extremely responsive.

According to HackerOne’s official statistics, Snap replies to initial reports within 12 hours, and has paid out over $220,000 in bounties.

Source: Hacker swipes Snapchat’s source code, publishes it on GitHub

AI builds Wikipedia entries for people who aren’t on it but should be

Human-generated knowledge bases like Wikipedia have a recall problem. First, there are the articles that should be there but are entirely missing. The unknown unknowns.

Consider Joelle Pineau, the Canadian roboticist bringing scientific rigor to artificial intelligence and who directs Facebook’s new AI Research lab in Montreal. Or Miriam Adelson, an actively publishing addiction treatment researcher who happens to be a billionaire by marriage and a major funder of her own field. Or Evelyn Wang, the new head of MIT’s revered MechE department whose accomplishments include a device that generates drinkable water from sunlight and desert air. When I wrote this a few days ago, none of them had articles on English Wikipedia, though they should by any measure of notability.

(Pineau is up now thanks to my friend and fellow science crusader Jess Wade who created an article just hours after I told her about Pineau’s absence. And if the internet is in a good mood, someone will create articles for the other two soon after this post goes live.)

But I didn’t discover those people on my own. I used a machine learning system we’re building at Primer. It discovered and described them for me. It does this much as a human would, if a human could read 500 million news articles, 39 million scientific papers, all of Wikipedia, and then write 70,000 biographical summaries of scientists.

[…]

We are publicly releasing free-licensed data about scientists that we’ve been generating along the way, starting with 30,000 computer scientists. Only 15% of them are known to Wikipedia. The data set includes 1 million news sentences that quote or describe the scientists, metadata for the source articles, a mapping to their published work in the Semantic Scholar Open Research Corpus, and mappings to their Wikipedia and Wikidata entries. We will revise and add to that data as we go. (Many thanks to Oren Etzioni and AI2 for data and feedback.) Our aim is to help the open data research community build better tools for maintaining Wikipedia and Wikidata, starting with scientific content.

Fluid Knowledge

We trained Quicksilver’s models on 30,000 English Wikipedia articles about scientists, their Wikidata entries, and over 3 million sentences from news documents describing them and their work. Then we fed in the names and affiliations of 200,000 authors of scientific papers.
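
Primer hasn’t published Quicksilver’s internals in this excerpt, but the core idea – flag paper authors whose news-coverage profile resembles that of scientists who already have Wikipedia articles – can be sketched as a simple classifier. The features and function names below are illustrative assumptions, not Primer’s actual pipeline:

    from sklearn.linear_model import LogisticRegression

    # Hypothetical per-scientist features: news mention count, number of
    # distinct quoting outlets, publication count, citation count, ...
    # X_known/y_known cover scientists already matched against Wikipedia
    # (y = 1 if an article exists, 0 if not).
    def train_notability_model(X_known, y_known):
        model = LogisticRegression(max_iter=1000)
        model.fit(X_known, y_known)
        return model

    def flag_missing_scientists(model, names, X_candidates, threshold=0.9):
        """Return authors whose coverage profile looks like that of
        scientists who do have articles, but who themselves have none."""
        probs = model.predict_proba(X_candidates)[:, 1]
        return [n for n, p in zip(names, probs) if p >= threshold]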

In the morning we found 40,000 people missing from Wikipedia who have a similar distribution of news coverage as those who do have articles. Quicksilver doubled the number of scientists potentially eligible for a Wikipedia article overnight.

It also revealed the second flavor of the recall problem that plagues human-generated knowledge bases: information decay. For most of those 30,000 scientists who are on English Wikipedia, Quicksilver identified relevant information that was missing from their articles.

Source: Primer | Machine-Generated Knowledge Bases

Data center server BMCs are terribly outdated and insecure

BMCs can be used to remotely monitor system temperature, voltage and power consumption, operating system health, and so on, and power cycle the box if it runs into trouble, tweak configurations, and even, depending on the setup, reinstall the OS – all from the comfort of an operations center, as opposed to having to find an errant server in the middle of a data center to physically wrangle. They also provide the foundations for IPMI.
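
As a concrete illustration of that remote reach, here is a minimal sketch of querying and power-cycling a server over IPMI using the Python pyghmi library. The address and credentials are placeholders, and exact method behaviour should be checked against pyghmi’s documentation:

    from pyghmi.ipmi import command

    # Talk to the BMC's own network interface, not the host OS.
    bmc = command.Command(bmc="10.0.0.42", userid="admin", password="changeme")

    # Read chassis power state and hardware sensors (temps, voltages, fans).
    print(bmc.get_power())            # e.g. {'powerstate': 'on'}
    for reading in bmc.get_sensor_data():
        print(reading.name, reading.value, reading.units)

    # Power-cycle an errant box from the operations center.
    bmc.set_power("reset", wait=True)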

[…]

It’s a situation not unlike Intel’s Active Management Technology, a remote management component that sits under the OS or hypervisor, has total control over a system, and has been exploited more than once over the years.

Waisman and his colleague Matias Soler, a senior security researcher at Immunity, examined these BMC systems, and claimed the results weren’t good. They even tried some old-school hacking techniques from the 1990s against the equipment they could get hold of, and found them to be very successful. With HP’s BMC-based remote management technology iLO4, for example, the built-in web server could be tricked into thinking a remote attacker was local, and so didn’t need to authenticate them.

“We decided to take a look at these devices and what we found was even worse than what we could have imagined,” the pair said. “Vulnerabilities that bring back memories from the 1990s, remote code execution that is 100 per cent reliable, and the possibility of moving bidirectionally between the server and the BMC, making not only an amazing lateral movement angle, but the perfect backdoor too.”

The fear is that once an intruder gets into a data center network, insecure BMC firmware could be used to turn a drama into a crisis: vulnerabilities in the technology could be exploited to hijack more systems, install malware that persists across reboots and reinstalls, or simply hide from administrators.

[…]

The duo probed whatever kit they could get hold of – mainly older equipment – and it could be that modern stuff is a lot better in terms of security, with firmware that follows secure coding best practices. On the other hand, what Waisman and Soler have found and documented doesn’t inspire much confidence in newer gear.

Their full findings can be found here, and their slides here.

Source: Can we talk about the little backdoors in data center servers, please? • The Register

TSA says ‘Quiet Skies’ surveillance snared zero threats but put 5,000 travellers under surveillance and on no-fly lists

TSA officials were summoned to Capitol Hill Wednesday and Thursday afternoon following Globe reports on the secret program, which sparked sharp criticism because it includes extensive surveillance of domestic fliers who are not suspected of a crime or listed on any terrorist watch list.

“Quiet Skies is the very definition of Big Brother,” Senator Edward Markey of Massachusetts, a member of the Senate Commerce, Science, and Transportation committee, said of the program. “American travelers deserve to have their privacy and civil rights protected even 30,000 feet in the air.”

[…]

The teams document whether passengers fidget, use a computer, or have a “cold penetrating stare,” among other behaviors, according to agency documents.

All US citizens who enter the country from abroad are screened via Quiet Skies. Passengers may be selected through a broad, undisclosed set of criteria for enhanced surveillance by a team of air marshals on subsequent domestic flights, according to agency documents.

Dozens of air marshals told the Globe the “special mission coverage” seems to test the limits of the law, and is a waste of time and resources. Several said surveillance teams had been assigned to follow people who appeared to pose no threat — a working flight attendant, a businesswoman, a fellow law enforcement officer — and to document their actions in-flight and through airports.

[…]

The officials said about 5,000 US citizens had been closely monitored since March and none of them were deemed suspicious or merited further scrutiny, according to people with direct knowledge of the Thursday meeting.

Source: TSA says ‘Quiet Skies’ surveillance snared zero threats – The Boston Globe

Didn’t the TSA learn anything from the no-fly lists not working in the first place?!

Google keeps tracking you even when you specifically tell it not to: Maps, Search won’t take no for an answer

Google has admitted that its option to “pause” the gathering of your location data doesn’t apply to its Maps and Search apps – which will continue to track you even when you specifically choose to halt such monitoring.

Researchers at Princeton University in the US this week confirmed on both Android handhelds and iPhones that even if you go into your smartphone’s settings and turn off “location history”, Google continues to snoop on your whereabouts and save it to your personal profile.

That may seem contradictory; however, Google assured the Associated Press that it is all fine and above-board because the small print says the search biz will keep tracking you regardless.

“There are a number of different ways that Google may use location to improve people’s experience, including: Location History, Web and App Activity, and through device-level Location Services,” the giant online ad company told AP, adding: “We provide clear descriptions of these tools, and robust controls so people can turn them on or off, and delete their histories at any time.”

The mistake people make is wrongly assuming that turning off an option called “location history” actually turns off the gathering of location data – which is obviously ridiculous because if people really wanted Google not to know where they are every second of every day, they would of course go to “Web and App Activity” and “pause” all activity there, even though it makes no mention of location data.

Besides, in the pop-up explanation that appears in order to make you confirm that you want your location data turned off, Google is entirely upfront when it says, in the second paragraph: “This setting does not affect other location services on your device, like Google Location Services and Find My Device. Some location data may be saved as part of your activity on other Google services, like Search and Maps.”

Of course by “may be saved,” Google means “will be saved,” and it forgets to tell you that “Web and App Activity” is where you need to go to stop Search and Maps from storing your location data.

Misdirection

Of course, there’s no reason to assume that works either since Google makes no mention of turning off location when you “pause” web and app activity. Instead, it just tells you why that’s a bad idea: “Pausing additional Web & App Activity may limit or disable more personalized experiences across Google services. For example, you may stop seeing helpful recommendations based on the apps and sites you use.”

But it gets even weirder than that: if you expect that turning off “Web and App Activity” would actually stop web and app activity, in the same way that turning off location history would turn off location data, then you’ve ended up in the wrong place again.

In that web and app activity pop-up, Google notes: “If your Android usage & diagnostics setting is turned on, your device will still share information with Google, like battery level, how often you use your device and apps, and system errors. View Google settings on your Android device to change this setting.”

So if you want to turn off location, you need to go to Web and App Activity.

And if you want to turn off web and app activity, you need to go to Google settings – although precisely where is not clear.

Source: Google keeps tracking you even when you specifically tell it not to: Maps, Search won’t take no for an answer • The Register

AI identifies heat-resistant coral reefs in Indonesia

A recent scientific survey off the coast of Sulawesi Island in Indonesia suggests that some shallow water corals may be less vulnerable to global warming than previously thought.

Between 2014 and 2017, the world’s reefs endured the worst coral bleaching event in history, as the cyclical El Niño climate event combined with anthropogenic warming to cause unprecedented increases in water temperature.

But the June survey, funded by Microsoft co-founder Paul Allen’s family foundation, found the Sulawesi reefs were surprisingly healthy.

In fact, the reefs did not appear to have declined significantly in condition since they were originally surveyed in 2014 – a surprise for British scientist Dr Emma Kennedy, who led the research team.

A combination of 360-degree imaging tech and Artificial Intelligence (AI) allowed scientists to gather and analyse more than 56,000 images of shallow water reefs. Over the course of a six-week voyage, the team deployed underwater scooters fitted with 360-degree cameras that allowed them to photograph up to 1.5 miles of reef per dive, covering a total of 1,487 square miles.

Researchers at the University of Queensland in Australia then used cutting-edge AI software to handle the normally laborious process of identifying and cataloguing the reef imagery. Using the latest deep learning tech, they ‘taught’ the AI how to detect patterns in the complex contours and textures of the reef imagery and thus recognise different types of coral and other reef invertebrates.

Once the AI had been shown between 400 and 600 images, it was able to process images autonomously. Says Dr Kennedy: “the use of AI to rapidly analyse photographs of coral has vastly improved the efficiency of what we do — what would take a coral reef scientist 10 to 15 minutes now takes the machine a few seconds.”
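
The Guardian piece doesn’t include the Queensland team’s code, but the general recipe – show a pretrained image network a few hundred labelled reef photos and let it classify the rest – looks roughly like this Keras sketch. The directory layout, class set and hyperparameters are illustrative assumptions, not the team’s actual setup:

    import tensorflow as tf
    from tensorflow.keras import layers, models

    # Hypothetical layout: reef photos sorted into one folder per class
    # (hard coral, soft coral, algae, sand, ...).
    train = tf.keras.utils.image_dataset_from_directory(
        "reef_images/train", image_size=(224, 224), batch_size=32)
    num_classes = len(train.class_names)

    # Reuse an ImageNet-pretrained backbone; train only a small head.
    base = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet",
        input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False

    model = models.Sequential([
        layers.Rescaling(1.0 / 255),   # crude stand-in for proper preprocessing
        base,
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train, epochs=5)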

Source: AI identifies heat-resistant coral reefs in Indonesia | Environment | The Guardian

MS Sketch2Code uses AI to convert a picture of a wireframe to HTML – download and try

Description

Sketch2Code is a solution that uses AI to transform a picture of a handwritten user interface design into valid HTML markup.

Process flow

The solution transforms a handwritten image into HTML through the following steps (a code sketch follows the list):

  1. The user uploads an image through the website.
  2. A custom vision model predicts what HTML elements are present in the image and their location.
  3. A handwritten text recognition service reads the text inside the predicted elements.
  4. A layout algorithm uses the spatial information from all the bounding boxes of the predicted elements to generate a grid structure that accommodates all.
  5. An HTML generation engine uses all these pieces of information to generate an HTML markup code reflecting the result.
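
A rough sketch of how those five steps could chain together. Every helper below is a hypothetical stand-in – the real service wires up Azure Custom Vision and handwriting-recognition APIs, and this is not Microsoft’s actual code:

    def sketch_to_html(image_bytes):
        # Step 2: object detection predicts HTML elements + bounding boxes,
        # e.g. [{"tag": "button", "box": (x, y, w, h)}, ...].
        elements = custom_vision_predict(image_bytes)        # hypothetical

        # Step 3: OCR the handwritten text inside each predicted element.
        for el in elements:
            el["text"] = recognize_handwriting(image_bytes, el["box"])  # hypothetical

        # Step 4: derive a grid (rows of elements) from box positions.
        grid = infer_grid_layout(elements)                   # hypothetical

        # Step 5: emit markup that mirrors the inferred structure.
        rows = []
        for row in grid:
            cells = "".join(render_element(el) for el in row)  # hypothetical
            rows.append('<div class="row">' + cells + "</div>")
        return "<html><body>" + "".join(rows) + "</body></html>"
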
Source: Sketch2Code GitHub – https://github.com/Microsoft/ailab/tree/master/Sketch2Code

AI sucks at stopping online trolls spewing toxic comments

A group of researchers from Aalto University and the University of Padua found this out when they tested seven state-of-the-art models used to detect hate speech. All of them failed to recognize foul language when subtle changes were made, according to a paper [PDF] on arXiv.

Adversarial examples can be created automatically by using algorithms to misspell certain words, swap characters for numbers, add random spaces between words, or attach innocuous words such as ‘love’ to sentences.
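
A toy version of such an attack generator, using the perturbations the paper describes (character swaps, leetspeak substitutions, broken word boundaries, and an appended innocuous word). This is an illustrative sketch, not the researchers’ code:

    import random

    LEET = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"}

    def swap_chars(word):
        # Misspell by swapping two adjacent characters.
        if len(word) < 3:
            return word
        i = random.randrange(len(word) - 1)
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]

    def leetspeak(word):
        # Swap characters for look-alike numbers.
        return "".join(LEET.get(c, c) for c in word.lower())

    def break_boundaries(word):
        # Insert spaces so the tokenizer no longer sees the word.
        return " ".join(word)

    def perturb(sentence):
        attacks = [swap_chars, leetspeak, break_boundaries]
        words = [random.choice(attacks)(w) for w in sentence.split()]
        return " ".join(words) + " love"   # append an innocuous word

    print(perturb("some hateful example sentence"))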

The models failed to pick up on the adversarial examples, which successfully evaded detection. These tricks wouldn’t fool humans, but machine learning models are easily blindsided. They can’t readily adapt to new information beyond what’s been spoonfed to them during the training process.

“They perform well only when tested on the same type of data they were trained on. Based on these results, we argue that for successful hate speech detection, model architecture is less important than the type of data and labeling criteria. We further show that all proposed detection techniques are brittle against adversaries who can (automatically) insert typos, change word boundaries or add innocuous words to the original hate speech,” the paper’s abstract states.

Source: AI sucks at stopping online trolls spewing toxic comments • The Register

Google just put an AI in charge of keeping its data centers cool

Google is putting an artificial intelligence system in charge of its data center cooling after the system proved it could cut energy use.

Now Google and its AI company DeepMind are taking the project further; instead of recommendations being implemented by human staff, the AI system is directly controlling cooling in the data centers that run services including Google Search, Gmail and YouTube.

“This first-of-its-kind cloud-based control system is now safely delivering energy savings in multiple Google data centers,” Google said.

Data centers use vast amounts of energy, and as the demand for cloud computing rises, even small tweaks to areas like cooling can produce significant time and cost savings. Google’s decision to use its own DeepMind-created system is also a good plug for its AI business.

Every five minutes, the AI pulls a snapshot of the data center cooling system from thousands of sensors. This data is fed into deep neural networks, which predict how different choices will affect future energy consumption.

The AI system then identifies tweaks that could reduce energy consumption, which are then sent back to the data center, checked by the local control system and implemented.
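
Put together, the loop described above looks something like the following sketch. All five callables are hypothetical stand-ins, not DeepMind’s API:

    import time

    def cooling_control_loop(sensors, model, candidate_actions,
                             local_safety_check, actuate):
        """Sense -> predict -> act, once every five minutes."""
        while True:
            snapshot = sensors.read_all()    # thousands of sensor values

            # Score each candidate setpoint combination by the deep
            # networks' prediction of future energy consumption.
            best = min(candidate_actions,
                       key=lambda a: model.predict_energy(snapshot, a))

            # The local control system verifies the tweak against its own
            # safety constraints before anything changes in hardware.
            if local_safety_check(snapshot, best):
                actuate(best)

            time.sleep(300)                  # five minutes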

Google said giving the AI more responsibility came at the request of its data center operators who said that implementing the recommendations from the AI system required too much effort and supervision.

“We wanted to achieve energy savings with less operator overhead. Automating the system enabled us to implement more granular actions at greater frequency, while making fewer mistakes,” said Google data center operator Dan Fuenffinger.

Source: Google just put an AI in charge of keeping its data centers cool | ZDNet

How AI Can Spot Exam Cheats and Raise Standards

AI is being deployed by those who set and mark exams to reduce fraud — which remains overall a small problem — to create far greater efficiencies in preparation and marking, and to help improve teaching and studying. From a report, which may be paywalled:

From traditional paper-based exam and textbook producers such as Pearson, to digital-native companies such as Coursera, online tools and artificial intelligence are being developed to reduce costs and enhance learning. For years, multiple-choice tests have allowed scanners to score results without human intervention. Now technology is coming directly into the exam hall. Coursera has patented a system to take images of students and verify their identity against scanned documents. There are plagiarism detectors that can scan essay answers and search the web — or the work of other students — to identify copying. Webcams can monitor exam locations to spot malpractice.

Even while students are working, they provide clues that can be used to clamp down on cheats. They leave electronic “fingerprints” such as keyboard pressure, speed and even writing style. Emily Glassberg Sands, Coursera’s head of data science, says: “We can validate their keystroke signatures. It’s difficult to prepare for someone hell-bent on cheating, but we are trying every way possible.”
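
Coursera hasn’t published what goes into a “keystroke signature”, but such systems typically compare simple timing statistics against an enrolled profile. A toy sketch under that assumption:

    import statistics

    def keystroke_signature(events):
        """events: list of (key, press_time, release_time) in seconds.
        Returns crude typing features: mean/stdev of key hold times
        and mean gap between consecutive key presses."""
        holds = [release - press for _, press, release in events]
        gaps = [events[i + 1][1] - events[i][1]
                for i in range(len(events) - 1)]
        return (statistics.mean(holds),
                statistics.stdev(holds),
                statistics.mean(gaps))

    def same_typist(enrolled, sample, tolerance=0.25):
        # Flag a mismatch if any feature deviates more than 25% from
        # the enrolled profile (threshold is illustrative).
        return all(abs(a - b) <= tolerance * abs(a)
                   for a, b in zip(enrolled, sample))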

Source: How AI Can Spot Exam Cheats and Raise Standards – Slashdot

Spyware Company Leaves ‘Terabytes’ of Selfies, Text Messages, and Location Data Exposed Online

A company that sells surveillance software to parents and employers left “terabytes of data”, including photos, audio recordings, text messages and web history, exposed in a poorly protected Amazon S3 bucket.


This story is part of When Spies Come Home, a Motherboard series about powerful surveillance software ordinary people use to spy on their loved ones.

A company that markets cell phone spyware to parents and employers left the data of thousands of its customers—and the information of the people they were monitoring—unprotected online.

The data exposed included selfies, text messages, audio recordings, contacts, location data, hashed passwords and logins, and Facebook messages, among other records, according to a security researcher who asked to remain anonymous for fear of legal repercussions.

Last week, the researcher found the data on an Amazon S3 bucket owned by Spyfone, one of many companies that sell software that is designed to intercept text messages, calls, emails, and track locations of a monitored device.

Source: Spyware Company Leaves ‘Terabytes’ of Selfies, Text Messages, and Location Data Exposed Online – Motherboard

Woman sentenced to more than 5 years for leaking info about Russia hacking attempts. Trump still on the loose.

A former government contractor who pleaded guilty to leaking U.S. secrets about Russia’s attempts to hack the 2016 presidential election was sentenced Thursday to five years and three months in prison.

It was the sentence that prosecutors had recommended — the longest ever for a federal crime involving leaks to the news media — in the plea deal for Reality Winner, the Georgia woman at the center of the case. Winner was also sentenced to three years of supervised release and no fine, except for a $100 special assessment fee.

The crime carried a maximum penalty of 10 years. U.S. District Court Judge J. Randal Hall in Augusta, Georgia, was not bound to follow the plea deal, but elected to give Winner the amount of time prosecutors requested.

Source: Reality Winner sentenced to more than 5 years for leaking info about Russia hacking attempts

How a hacker network turned stolen press releases into $100 million

At a Kiev nightclub in the spring of 2012, 24-year-old Ivan Turchynov made a fateful drunken boast to some fellow hackers. For years, Turchynov said, he’d been hacking unpublished press releases from business newswires and selling them, via Moscow-based middlemen, to stock traders for a cut of the sizable profits.

Oleksandr Ieremenko, one of the hackers at the club that night, had worked with Turchynov before and decided he wanted in on the scam. With his friend Vadym Iermolovych, he hacked Business Wire, stole Turchynov’s inside access to the site, and pushed the main Muscovite ringleader, known by the screen name eggPLC, to bring them in on the scheme. The hostile takeover meant Turchynov was forced to split his business. Now, there were three hackers in on the game.

Newswires like Business Wire are clearinghouses for corporate information, holding press releases, regulatory announcements, and other market-moving information under strict embargo before sending it out to the world. Over a period of at least five years, three US newswires were hacked using a variety of methods from SQL injections and phishing emails to data-stealing malware and illicitly acquired login credentials. Traders who were active on US stock exchanges drew up shopping lists of company press releases and told the hackers when to expect them to hit the newswires. The hackers would then upload the stolen press releases to foreign servers for the traders to access in exchange for 40 percent of their profits, paid to various offshore bank accounts. Through interviews with sources involved with both the scheme and the investigation, chat logs, and court documents, The Verge has traced the evolution of what law enforcement would later call one of the largest securities fraud cases in US history.

Source: How a hacker network turned stolen press releases into $100 million – The Verge

Android data slurping measured and monitored – scary amounts and loads of location tracking

Google’s passive collection of personal data from Android and iOS has been monitored and measured in a significant academic study.

The report confirms that Google is no respecter of the Chrome browser’s “incognito mode” aka “porn mode”, collecting Chrome data to add to your personal profile, as we pointed out earlier this year.

It also reveals how phone users are being tracked without realising it. How so? It’s here that the B2B parts of Google’s vast data collection network – its publisher and advertiser products – kick into life as soon as the user engages with a phone. These parts of Google receive personal data from an Android device even when the phone is static and not being used.

The activity has come to light thanks to research (PDF) by computer science professor Douglas Schmidt of Vanderbilt University, conducted for the nonprofit trade association Digital Content Next. It’s already been described by one privacy activist as “the most comprehensive report on Google’s data collection practices so far”.

[…]

Overall, the study discovered that Apple retrieves much less data than Google.

“The total number of calls to Apple servers from an iOS device was much lower, just 19 per cent of the number of calls to Google servers from an Android device.

Moreover, there are no ad-related calls to Apple servers, which may stem from the fact that Apple’s business model is not as dependent on advertising as Google’s. Although Apple does obtain some user location data from iOS devices, the volume of data collected is much (16x) lower than what Google collects from Android,” the study noted.

Source: Android data slurping measured and monitored • The Register

The amount of location data slurped is scary – and Android continues to slurp location in many different ways, even if Wi-Fi is turned off. It’s Big Brother in your pocket, with no opt-out.

Bitcoin mining now apparently accounts for almost one percent of the world’s energy consumption

According to testimony provided by Princeton computer scientist Arvind Narayanan to the Senate Committee on Energy and Natural Resources, no matter what you do to make cryptocurrency mining hardware greener, it’s a drop in the bucket compared to the overall network’s flabbergasting energy consumption. Instead, Narayanan told the committee, the only thing that really determines how much energy Bitcoin uses is its price. “If the price of a cryptocurrency goes up, more energy will be used in mining it; if it goes down, less energy will be used,” he told the committee. “Little else matters. In particular, the increasing energy efficiency of mining hardware has essentially no impact on energy consumption.”

In his testimony, Narayanan estimates that Bitcoin mining now draws about five gigawatts of electricity (in May, estimates of Bitcoin power consumption were about half of that). He adds that when you’ve got a computer racing with all its might to earn a free Bitcoin, it’s going to be running hot as hell, which means you’re probably using even more electricity to keep the computer cool so it doesn’t die and/or burn down your entire mining center, which probably makes the overall cost associated with mining even higher.
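
That ~5 GW figure is consistent with a simple break-even argument: rational miners keep adding rigs until their electricity bill approaches the block-reward revenue, so price sets a ceiling on power draw and hardware efficiency cancels out. A back-of-envelope check with 2018-era numbers (all assumptions illustrative):

    # Assumptions: 12.5 BTC block reward, ~144 blocks/day,
    # electricity at $0.05/kWh, transaction fees ignored.
    btc_price_usd = 6500.0
    revenue_per_day = 12.5 * 144 * btc_price_usd          # ~$11.7M/day
    kwh_per_day_at_breakeven = revenue_per_day / 0.05     # ~234M kWh/day
    avg_power_gw = kwh_per_day_at_breakeven / 24 / 1e6    # kWh/day -> GW

    print(f"break-even draw ~= {avg_power_gw:.1f} GW")    # ~9.8 GW ceiling

Actual consumption sits below that ceiling because miners also pay for hardware and cooling. Note that doubling the BTC price doubles the ceiling, while the efficiency of the mining hardware never enters the calculation – which is Narayanan’s point.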

Source: Bitcoin mining now accounts for almost one percent of the world’s energy consumption | The Outline

Huawei reverses its stance, will no longer allow bootloader unlocking – will lose many customers

In order to deliver the best user experience and prevent users from experiencing possible issues that could arise from ROM flashing, including system failure, stuttering, worsened battery performance, and risk of data being compromised, Huawei will cease providing bootloader unlock codes for devices launched after May 25, 2018. For devices launched prior to the aforementioned date, the termination of the bootloader code application service will come into effect 60 days after today’s announcement. Moving forward, Huawei remains committed to providing quality services and experiences to its customers. Thank you for your continued support.

When you take into consideration that Huawei — for years — not only supported the ROM community but actively assisted in the unlocking of Huawei bootloaders, this whole switch-up doesn’t make much sense. But, that’s the official statement, so do with it what you will.


Original Article: For years now, the custom ROM development community has flocked to Huawei phones. One of the major reasons for this is because Huawei made it incredibly easy to unlock the bootloaders of its devices, even providing a dedicated support page for the process.

Source: Huawei reverses its stance, will no longer allow bootloader unlocking

Oi, clickbait cop bot, jam this in your neural net: Hot new AI threatens to DESTROY web journos

Artificially intelligent software has been trained to detect and flag up clickbait headlines.

And here at El Reg we say thank Larry Wall for that. What the internet needs right now is software to highlight and expunge dodgy article titles about space alien immigrants, faked moon landings, and the like.

Machine-learning eggheads continue to push the boundaries of natural language processing, and have crafted a model that can, supposedly, detect how clickbait-y a headline really is.

The system uses a convolutional neural network that converts the words in a submitted article title into vectors. These numbers are fed into a long short-term memory network that spits out a score based on the headline’s clickbait strength. About eight times out of ten it agreed with humans on whether a title was clickbaity or not, we’re told.
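
That architecture – word embeddings, a convolutional layer, then an LSTM emitting a score in [0, 1] – can be sketched in a few lines of Keras. Hyperparameters here are illustrative, not the paper’s:

    import tensorflow as tf
    from tensorflow.keras import layers, models

    VOCAB_SIZE, MAX_LEN = 20000, 30       # headline vocabulary / length

    model = models.Sequential([
        layers.Embedding(VOCAB_SIZE, 128, input_length=MAX_LEN),  # words -> vectors
        layers.Conv1D(64, 3, activation="relu", padding="same"),  # local n-gram features
        layers.LSTM(64),                                          # sequence summary
        layers.Dense(1, activation="sigmoid"),                    # clickbait strength
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.summary()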

The trouble is, what exactly is a clickbait headline? It’s a tough question. The AI’s team – from the International Institute of Information Technology in Hyderabad, the Manipal Institute of Technology, and Birla Institute of Technology, in India – decided to rely on the venerable Merriam-Webster dictionary to define clickbait.

Source: Oi, clickbait cop bot, jam this in your neural net: Hot new AI threatens to DESTROY web journos • The Register

Facebook Wanted to Kill This Investigative ‘People You May Know’ Tool

Last year, we launched an investigation into how Facebook’s People You May Know tool makes its creepily accurate recommendations. By November, we had it mostly figured out: Facebook has nearly limitless access to all the phone numbers, email addresses, home addresses, and social media handles most people on Earth have ever used. That, plus its deep mining of people’s messaging behavior on Android, means it can make surprisingly insightful observations about who you know in real life—even if it’s wrong about your desire to be “friends” with them on Facebook.

In order to help conduct this investigation, we built a tool to keep track of the people Facebook thinks you know. Called the PYMK Inspector, it captures every recommendation made to a user for however long they want to run the tool. It’s how one of us discovered Facebook had linked us with an unknown relative. In January, after hiring a third party to do a security review of the tool, we released it publicly on Github for users who wanted to study their own People You May Know recommendations. Volunteers who downloaded the tool helped us explore whether you’ll show up in someone’s People You May Know after you look at their profile. (Good news for Facebook stalkers: Our experiment found you won’t be recommended as a friend just based on looking at someone’s profile.)

Facebook wasn’t happy about the tool.

The day after we released it, a Facebook spokesperson reached out asking to chat about it, and then told us that the tool violated Facebook’s terms of service, because it asked users to give it their username and password so that it could sign in on their behalf. Facebook’s TOS states that, “You will not solicit login information or access an account belonging to someone else.” They said we would need to shut down the tool (which was impossible because it’s an open source tool) and delete any data we collected (which was also impossible because the information was stored on individual users’ computers; we weren’t collecting it centrally).

We argued that we weren’t seeking access to users’ accounts or collecting any information from them; we had just given users a tool to log into their own accounts on their own behalf, to collect information they wanted collected, which was then stored on their own computers. Facebook disagreed and escalated the conversation to their head of policy for Facebook’s Platform, who said they didn’t want users entering their Facebook credentials anywhere that wasn’t an official Facebook site—because anything else is bad security hygiene and could open users up to phishing attacks. She said we needed to take our tool off Github within a week.

Source: Facebook Wanted Us to Kill This Investigative Tool

It’s either legal to port-scan someone without consent or it’s not, fumes researcher: Halifax bank port scans you when you visit the page

Halifax Bank scans the machines of surfers that land on its login page whether or not they are customers, it has emerged.

Security researcher Paul Moore has made his objection to this practice – in which the British bank is not alone – clear, even though it is done for good reasons. The researcher claimed that performing port scans on visitors without permission is a violation of the UK’s Computer Misuse Act (CMA).

Halifax has disputed this, arguing that the port scans help it pick up evidence of malware infections on customers’ systems. The scans are legal, Halifax told Moore in response to a complaint he made on the topic last month.

When you visit the Halifax login page, even before you’ve logged in, JavaScript on the site, running in the browser, attempts to scan for open ports on your local computer to see if remote desktop or VNC services are running, and looks for some general remote access trojans (RATs) – backdoors, in other words. Crooks are known to abuse these remote services to snoop on victims’ banking sessions.
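
Halifax does this from in-page JavaScript (timing connection attempts to 127.0.0.1), but the underlying check is just a TCP connect probe against well-known remote-access ports. A minimal Python equivalent of the idea; the port list is illustrative, not Halifax’s actual set:

    import socket

    # Ports for remote-access services of the kind the script looks for
    # (RDP, VNC, TeamViewer).
    SUSPECT_PORTS = {3389: "RDP", 5900: "VNC", 5938: "TeamViewer"}

    def scan_localhost(timeout=0.25):
        found = []
        for port, service in SUSPECT_PORTS.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                # connect_ex returns 0 if something is listening.
                if s.connect_ex(("127.0.0.1", port)) == 0:
                    found.append(service)
        return found

    print(scan_localhost())   # e.g. ['VNC'] on a machine running a VNC server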

Moore said he wouldn’t have an issue if Halifax carried out the security checks on people’s computers after they had logged on. It’s the lack of consent and the scanning of any visitor that bothers him. “If they ran the script after you’ve logged in… they’d end up with the same end result, but they wouldn’t be scanning visitors, only customers,” Moore said.

Halifax told Moore: “We have to port scan your machine for security reasons.”

Having failed to either persuade Halifax Bank to change its practices or Action Fraud to act (thus far), Moore last week launched a fundraising effort to privately prosecute Halifax Bank for allegedly breaching the Computer Misuse Act. This crowdfunding effort on GoFundMe aims to gather £15,000 (so far just £50 has been raised).

Halifax Bank’s “unauthorised” port scans are a clear violation of the CMA – and amount to an action that security researchers are frequently criticised and/or convicted for, Moore argued. The CISO and part-time security researcher hopes his efforts in this matter might result in a clarification of the law.

“Ultimately, we can’t have it both ways,” Moore told El Reg. “It’s either legal to port scan someone without consent, or with consent but no malicious intent, or it’s illegal and Halifax need to change their deployment to only check customers, not visitors.”

The whole effort might smack of tilting at windmills, but Moore said he was acting on a point of principle.

“If security researchers operate in a similar fashion, we almost always run into the CMA, even if their intent isn’t malicious. The CMA should be applied fairly to both parties.”

Source: Bank on it: It’s either legal to port-scan someone without consent or it’s not, fumes researcher • The Register

Critical OpenEMR Flaws Left Medical Records Vulnerable

Security researchers have found more than 20 bugs in the world’s most popular open source software for managing medical records. Many of the vulnerabilities were classified as severe, leaving the personal information of an estimated 90 million patients exposed to bad actors.

OpenEMR is open source software that’s used by medical offices around the world to store records, handle schedules, and bill patients. According to researchers at Project Insecurity, it was also a bit of a security nightmare before a recent audit recommended a range of vital fixes.

The firm reached out to OpenEMR in July to discuss concerns it had about the software’s code. On Tuesday a report was released detailing the issues that included: “a portal authentication bypass, multiple instances of SQL injection, multiple instances of remote code execution, unauthenticated information disclosure, unrestricted file upload, CSRFs including a CSRF to RCE proof of concept, and unauthenticated administrative actions.”

Eighteen of the bugs were designated as having a “high” severity and could’ve been exploited by hackers with low-level access to systems running the software. Patches have been released to users and cloud customers.
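
OpenEMR is written in PHP, so the following is only a generic illustration (in Python, with SQLite) of the SQL injection class the report cites, alongside the standard parameterized-query fix:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO patients VALUES (?, ?)",
                     [(1, "Alice"), (2, "Bob")])

    user_input = "1 OR 1=1"   # attacker-controlled "patient id"

    # Vulnerable: input is pasted into the SQL text, so "1 OR 1=1"
    # dumps every patient row instead of one record.
    leaked = conn.execute(
        f"SELECT * FROM patients WHERE id = {user_input}").fetchall()

    # Fixed: a parameterized query binds the input as data, never as SQL;
    # the malicious string simply matches no integer id.
    safe = conn.execute(
        "SELECT * FROM patients WHERE id = ?", (user_input,)).fetchall()

    print(len(leaked), len(safe))   # 2 0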

OpenEMR’s project administrator Brady Miller told the BBC, “The OpenEMR community takes security seriously and considered this vulnerability report high priority since one of the reported vulnerabilities did not require authentication.”

Source: Critical OpenEMR Flaws Left Medical Records Vulnerable

Facebook: We’re not asking for financial data, we’re just partnering with banks

Facebook is pushing back against a report in Monday’s Wall Street Journal that the company is asking major banks to provide private financial data.

The social media giant has reportedly had talks with JPMorgan Chase, Wells Fargo, Citigroup, and US Bancorp to discuss proposed features including fraud alerts and checking account balances via Messenger.

Elisabeth Diana, a Facebook spokeswoman, told Ars that while the WSJ reported that Facebook has “asked” banks “to share detailed financial information about their customers, including card transactions and checking-account balances,” this isn’t quite right.

“Like many online companies with commerce businesses, we partner with banks and credit card companies to offer services like customer chat or account management,” she said in a statement on behalf of the social media giant. “Account linking enables people to receive real-time updates in Facebook Messenger where people can keep track of their transaction data like account balances, receipts, and shipping updates. The idea is that messaging with a bank can be better than waiting on hold over the phone—and it’s completely opt-in. We’re not using this information beyond enabling these types of experiences—not for advertising or anything else.”

Diana further explained that account linking is already live with PayPal, Citi in Singapore, and American Express in the United States.

“We’re not shoring up financial data,” she added.

In recent months, Facebook has been scrutinized for its approach to user privacy.

Late last month, Facebook CFO David Wehner said, “We are also giving people who use our services more choices around data privacy, which may have an impact on our revenue growth.”

Source: Facebook: We’re not asking for financial data, we’re just partnering with banks | Ars Technica

But should you opt in, your financial data just happens to then belong to Facebook to do with as they please…

The cashless society is a con – and big finance is behind it

All over the western world banks are shutting down cash machines and branches. They are trying to push you into using their digital payments and digital banking infrastructure. Just like Google wants everyone to access and navigate the broader internet via its privately controlled search portal, so financial institutions want everyone to access and navigate the broader economy through their systems.

Another aim is to cut costs in order to boost profits. Branches require staff. Replacing them with standardised self-service apps allows the senior managers of financial institutions to directly control and monitor interactions with customers.

Banks, of course, tell us a different story about why they do this. I recently got a letter from my bank telling me that they are shutting down local branches because “customers are turning to digital”, and they are thus “responding to changing customer preferences”. I am one of the customers they are referring to, but I never asked them to shut down the branches.

There is a feedback loop going on here. In closing down their branches, or withdrawing their cash machines, they make it harder for me to use those services. I am much more likely to “choose” a digital option if the banks deliberately make it harder for me to choose a non-digital option.

In behavioural economics this is referred to as “nudging”. If a powerful institution wants to make people choose a certain thing, the best strategy is to make it difficult to choose the alternative.

[…]

Financial institutions, likewise, are trying to nudge us towards a cashless society and digital banking. The true motive is corporate profit. Payments companies such as Visa and Mastercard want to increase the volume of digital payments services they sell, while banks want to cut costs. The nudge requires two parts. First, they must increase the inconvenience of cash, ATMs and branches. Second, they must vigorously promote the alternative. They seek to make people “learn” that they want digital, and then “choose” it.

We can learn from the Marxist philosopher Antonio Gramsci in this regard. His concept of hegemony referred to the way in which powerful parties condition the cultural and economic environment in such a way that their interests begin to be perceived as natural and inevitable by the general public. Nobody was on the streets shouting for digital payment 20 years ago, but increasingly it seems obvious and “natural” that it should take over. That belief does not come from nowhere. It is the direct result of a hegemonic project on the part of financial institutions.

We can also learn from Louis Althusser’s concept of interpellation. The basic idea is that you can get people to internalise beliefs by addressing them as if they already had those beliefs. Twenty years ago nobody believed that cash was “inconvenient”, but every time I walk into London Underground I see adverts that address me as if I was a person who finds cash inconvenient. The objective is to reverse-engineer a belief within me that it is inconvenient, and that cashlessness is in my interests. But a cashless society is not in your interest. It is in the interest of banks and payments companies. Their job is to make you believe that it is in your interest too, and they are succeeding in doing that.

The recent Visa chaos, during which millions of people who have become dependent on digital payment suddenly found themselves stranded when the monopolistic payment network crashed, was a temporary setback. Digital systems may be “convenient”, but they often come with central points of failure. Cash, on the other hand, does not crash. It does not rely on external data centres, and is not subject to remote control or remote monitoring. The cash system allows for an unmonitored “off the grid” space. This is also the reason why financial institutions and financial technology companies want to get rid of it. Cash transactions are outside the net that such institutions cast to harvest fees and data.

A cashless society brings dangers. People without bank accounts will find themselves further marginalised, disenfranchised from the cash infrastructure that previously supported them. There are also poorly understood psychological implications about cash encouraging self-control while paying by card or a mobile phone can encourage spending. And a cashless society has major surveillance implications.

Source: The cashless society is a con – and big finance is behind it | Brett Scott | Opinion | The Guardian

Anti-DRM software programmer arrested for cracking Denuvo anti-piracy tech

Denuvo’s notorious anti-piracy tech used to be seen as uncrackable. It held up against hackers’ best efforts for years, contorting itself into obtuse new shapes every time anybody broke through. In 2016, a Bulgarian hacker calling himself Voksi came along with a breakthrough that revitalized the whole Denuvo cracking scene. He’s been a pillar of it ever since. Now he’s in deep trouble.

In a post today on CrackWatch, a subreddit dedicated to removing DRM and other copy protection software from games, Voksi explained the sudden outage of the website of his hacker group, REVOLT. Yesterday, he got arrested, and the police raided his house.

“It finally happened,” Voksi wrote. “I can’t say it wasn’t expected. Denuvo filed a case against me to the Bulgarian authorities. Police came yesterday and took the server PC and my personal PC. I had to go to the police afterwards and explain myself.”

In a statement sent to Kotaku, Denuvo said that Voksi’s arrest came about through the dual efforts of Denuvo parent company Irdeto and the Bulgarian Cybercrime Unit. “The swift action of the Bulgarian police on this matter shows the power of collaboration between law enforcement and technology providers and that piracy is a serious offence that will be acted upon,” said Irdeto VP of cybersecurity services Mark Mulready.

Denuvo’s statement also included a quote from the Bulgarian Cybercrime Unit, which said: “We can confirm that a 21-year-old man was arrested on Tuesday on suspicion of offenses related to cybercrime and that computing equipment was confiscated. Our investigations are ongoing.”

Source: Renowned Hacker Arrested For Cracking Denuvo Anti-Piracy Tech

It’s a bit bizarre when the guys making locks start arresting the guys making keys. DRM is a bad idea anyway, but arresting people for breaking it shows you’d rather sweep your problems under a rug than fix them. If you arrest enough people, pretty soon you will find there are a lot more problems left in your software. This has been proven time and again and won’t change now.

Maybe the authorities should arrest the Denuvo people on charges of installing unwanted software along with your game on your PC.

Work less, get more: New Zealand firm’s four-day week an ‘unmitigated success’

The New Zealand company behind a landmark trial of a four-day working week has concluded it was an unmitigated success, with 78% of employees feeling they were able to successfully manage their work-life balance, an increase of 24 percentage points.

Two hundred and forty staff at Perpetual Guardian, a company which manages trusts, wills and estate planning, trialled a four-day working week over March and April, working four eight-hour days but getting paid for five.

Academics studied the trial before, during and after its implementation, collecting qualitative and quantitative data.

Perpetual Guardian founder Andrew Barnes came up with the idea in an attempt to give his employees better work-life balance, and help them focus on the business while in the office on company time, and manage life and home commitments on their extra day off.

Jarrod Haar, professor of human resource management at Auckland University of Technology, found job and life satisfaction increased on all levels across the home and work front, with employees performing better in their jobs and enjoying them more than before the experiment.

Work-life balance, which reflected how well respondents felt they could successfully manage their work and non-work roles, increased by 24 percentage points.

In November last year just over half (54%) of staff felt they could effectively balance their work and home commitments, while after the trial this number jumped to 78%.

Staff stress levels decreased by 7 percentage points across the board as a result of the trial, while stimulation, commitment and a sense of empowerment at work all improved significantly, with overall life satisfaction increasing by 5 percentage points.

Source: Work less, get more: New Zealand firm’s four-day week an ‘unmitigated success’ | World news | The Guardian