Oi, clickbait cop bot, jam this in your neural net: Hot new AI threatens to DESTROY web journos

Artificially intelligent software has been trained to detect and flag up clickbait headlines.

And here at El Reg we say thank God Larry Wall for that. What the internet needs right now is software to highlight and expunge dodgy article titles about space alien immigrants, faked moon landings, and the like.

Machine-learning eggheads continue to push the boundaries of natural language processing, and have crafted a model that can, supposedly, detect how clickbait-y a headline really is.

The system uses a convolutional neural network that converts the words in a submitted article title into vectors. These numbers are fed into a long short-term memory (LSTM) network that spits out a score based on the headline’s clickbait strength. About eight times out of ten it agreed with humans on whether a title was clickbaity or not, we’re told.
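The pipeline described above — words turned into vectors, vectors fed word by word through an LSTM, a 0-to-1 score out the end — can be sketched structurally. This toy is untrained, and it substitutes hashed pseudo-embeddings for the learned convolutional stage, so its scores are meaningless; it only shows the shape of the computation, not the researchers' actual model.

```python
import zlib

import numpy as np

EMBED_DIM, HIDDEN = 8, 16  # toy sizes; the paper's real dimensions aren't given here


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def embed(word):
    # Deterministic pseudo-embedding keyed on a CRC of the word — a stand-in
    # for the learned word-to-vector stage described in the article.
    g = np.random.default_rng(zlib.crc32(word.encode()))
    return g.standard_normal(EMBED_DIM)


class TinyLSTM:
    """Minimal LSTM that reads a headline word by word and emits one score."""

    def __init__(self, seed=0):
        g = np.random.default_rng(seed)
        # One weight matrix and bias per gate: input, forget, output, candidate.
        self.W = g.standard_normal((4, HIDDEN, EMBED_DIM + HIDDEN)) * 0.1
        self.b = np.zeros((4, HIDDEN))

    def score(self, headline):
        h = np.zeros(HIDDEN)  # hidden state
        c = np.zeros(HIDDEN)  # cell state
        for word in headline.lower().split():
            x = np.concatenate([embed(word), h])
            i, f, o, g = self.W @ x + self.b
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        # Squash the final hidden state into a 0-to-1 "clickbaitiness" score.
        return float(sigmoid(h.mean()))


model = TinyLSTM()
print(model.score("You won't believe what this neural network does next"))
```

In the real system the embeddings and gate weights would be trained against human clickbait labels; here they are random, which is why only the plumbing, not the judgment, is meaningful.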

The trouble is, what exactly is a clickbait headline? It’s a tough question. The AI’s team – from the International Institute of Information Technology in Hyderabad, the Manipal Institute of Technology, and Birla Institute of Technology, in India – decided to rely on the venerable Merriam-Webster dictionary to define clickbait.

Source: Oi, clickbait cop bot, jam this in your neural net: Hot new AI threatens to DESTROY web journos • The Register

Facebook Wanted Us to Kill This Investigative “People You May Know” Tool

Last year, we launched an investigation into how Facebook’s People You May Know tool makes its creepily accurate recommendations. By November, we had it mostly figured out: Facebook has nearly limitless access to all the phone numbers, email addresses, home addresses, and social media handles most people on Earth have ever used. That, plus its deep mining of people’s messaging behavior on Android, means it can make surprisingly insightful observations about who you know in real life—even if it’s wrong about your desire to be “friends” with them on Facebook.

In order to help conduct this investigation, we built a tool to keep track of the people Facebook thinks you know. Called the PYMK Inspector, it captures every recommendation made to a user for however long they want to run the tool. It’s how one of us discovered Facebook had linked us with an unknown relative. In January, after hiring a third party to do a security review of the tool, we released it publicly on GitHub for users who wanted to study their own People You May Know recommendations. Volunteers who downloaded the tool helped us explore whether you’ll show up in someone’s People You May Know after you look at their profile. (Good news for Facebook stalkers: Our experiment found you won’t be recommended as a friend just based on looking at someone’s profile.)

Facebook wasn’t happy about the tool.

The day after we released it, a Facebook spokesperson reached out asking to chat about it, and then told us that the tool violated Facebook’s terms of service, because it asked users to give it their username and password so that it could sign in on their behalf. Facebook’s TOS states that, “You will not solicit login information or access an account belonging to someone else.” They said we would need to shut down the tool (which was impossible because it’s an open source tool) and delete any data we collected (which was also impossible because the information was stored on individual users’ computers; we weren’t collecting it centrally).

We argued that we weren’t seeking access to users’ accounts or collecting any information from them; we had just given users a tool to log into their own accounts on their own behalf, to collect information they wanted collected, which was then stored on their own computers. Facebook disagreed and escalated the conversation to their head of policy for Facebook’s Platform, who said they didn’t want users entering their Facebook credentials anywhere that wasn’t an official Facebook site—because anything else is bad security hygiene and could open users up to phishing attacks. She said we needed to take our tool off GitHub within a week.

Source: Facebook Wanted Us to Kill This Investigative Tool

It’s either legal to port-scan someone without consent or it’s not, fumes researcher: Halifax Bank port-scans you when you visit its login page

Halifax Bank scans the machines of surfers that land on its login page whether or not they are customers, it has emerged.

Security researcher Paul Moore has made his objection to this practice – in which the British bank is not alone – clear, even though it is done for good reasons. The researcher claimed that performing port scans on visitors without permission is a violation of the UK’s Computer Misuse Act (CMA).

Halifax has disputed this, arguing that the port scans help it pick up evidence of malware infections on customers’ systems. The scans are legal, Halifax told Moore in response to a complaint he made on the topic last month.

When you visit the Halifax login page, even before you’ve logged in, JavaScript on the site, running in the browser, attempts to scan for open ports on your local computer to see if remote desktop or VNC services are running, and looks for some general remote access trojans (RATs) – backdoors, in other words. Crooks are known to abuse these remote services to snoop on victims’ banking sessions.

Moore said he wouldn’t have an issue if Halifax carried out the security checks on people’s computers after they had logged on. It’s the lack of consent and the scanning of any visitor that bothers him. “If they ran the script after you’ve logged in… they’d end up with the same end result, but they wouldn’t be scanning visitors, only customers,” Moore said.

Halifax told Moore: “We have to port scan your machine for security reasons.”

Having failed to either persuade Halifax Bank to change its practices or Action Fraud to act (thus far), Moore last week launched a fundraising effort to privately prosecute Halifax Bank for allegedly breaching the Computer Misuse Act. This crowdfunding effort on GoFundMe aims to gather £15,000 (so far just £50 has been raised).

Halifax Bank’s “unauthorised” port scans are a clear violation of the CMA – and amount to the kind of action that security researchers are frequently criticised and/or convicted for, Moore argued. The CISO and part-time security researcher hopes his efforts in this matter might result in a clarification of the law.

“Ultimately, we can’t have it both ways,” Moore told El Reg. “It’s either legal to port scan someone without consent, or with consent but no malicious intent, or it’s illegal and Halifax need to change their deployment to only check customers, not visitors.”

The whole effort might smack of tilting at windmills, but Moore said he was acting on a point of principle.

“If security researchers operate in a similar fashion, we almost always run into the CMA, even if their intent isn’t malicious. The CMA should be applied fairly to both parties.”

Source: Bank on it: It’s either legal to port-scan someone without consent or it’s not, fumes researcher • The Register

Critical OpenEMR Flaws Left Medical Records Vulnerable

Security researchers have found more than 20 bugs in the world’s most popular open source software for managing medical records. Many of the vulnerabilities were classified as severe, leaving the personal information of an estimated 90 million patients exposed to bad actors.

OpenEMR is open source software that’s used by medical offices around the world to store records, handle schedules, and bill patients. According to researchers at Project Insecurity, it was also a bit of a security nightmare before a recent audit recommended a range of vital fixes.

The firm reached out to OpenEMR in July to discuss concerns it had about the software’s code. On Tuesday a report was released detailing the issues, which included: “a portal authentication bypass, multiple instances of SQL injection, multiple instances of remote code execution, unauthenticated information disclosure, unrestricted file upload, CSRFs including a CSRF to RCE proof of concept, and unauthenticated administrative actions.”
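Of the reported classes, SQL injection is the easiest to illustrate. A minimal sketch — the table, credentials, and login functions here are hypothetical stand-ins, not OpenEMR’s actual schema or code — shows how a string-built query falls to a classic bypass payload while a parameterized query does not:

```python
import sqlite3

# Toy patient-portal login table, purely for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")


def login_vulnerable(username, password):
    # String formatting splices attacker-controlled input into the SQL itself.
    query = (
        f"SELECT 1 FROM users "
        f"WHERE username = '{username}' AND password = '{password}'"
    )
    return db.execute(query).fetchone() is not None


def login_safe(username, password):
    # Parameterized query: input is bound as data, never parsed as SQL.
    query = "SELECT 1 FROM users WHERE username = ? AND password = ?"
    return db.execute(query, (username, password)).fetchone() is not None


payload = "' OR '1'='1"  # classic authentication-bypass payload
print(login_vulnerable("alice", payload))  # True — login bypassed
print(login_safe("alice", payload))        # False — payload treated as a literal
```

The “unauthenticated” qualifier on several of the reported bugs is what made them high priority: flaws like the one sketched above are reachable before any login succeeds.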

Eighteen of the bugs were designated as having a “high” severity and could’ve been exploited by hackers with low-level access to systems running the software. Patches have been released to users and cloud customers.

OpenEMR’s project administrator Brady Miller told the BBC, “The OpenEMR community takes security seriously and considered this vulnerability report high priority since one of the reported vulnerabilities did not require authentication.”

Source: Critical OpenEMR Flaws Left Medical Records Vulnerable

Facebook: We’re not asking for financial data, we’re just partnering with banks

Facebook is pushing back against a report in Monday’s Wall Street Journal that the company is asking major banks to provide private financial data.

The social media giant has reportedly had talks with JPMorgan Chase, Wells Fargo, Citigroup, and US Bancorp to discuss proposed features including fraud alerts and checking account balances via Messenger.

Elisabeth Diana, a Facebook spokeswoman, told Ars that while the WSJ reported that Facebook has “asked” banks “to share detailed financial information about their customers, including card transactions and checking-account balances,” this isn’t quite right.

“Like many online companies with commerce businesses, we partner with banks and credit card companies to offer services like customer chat or account management,” she said in a statement on behalf of the social media giant. “Account linking enables people to receive real-time updates in Facebook Messenger where people can keep track of their transaction data like account balances, receipts, and shipping updates. The idea is that messaging with a bank can be better than waiting on hold over the phone—and it’s completely opt-in. We’re not using this information beyond enabling these types of experiences—not for advertising or anything else.”

Diana further explained that account linking is already live with PayPal, Citi in Singapore, and American Express in the United States.

“We’re not shoring up financial data,” she added.

In recent months, Facebook has been scrutinized for its approach to user privacy.

Late last month, Facebook CFO David Wehner said, “We are also giving people who use our services more choices around data privacy, which may have an impact on our revenue growth.”

Source: Facebook: We’re not asking for financial data, we’re just partnering with banks | Ars Technica

But should you opt in, your financial data then belongs to Facebook, to do with as it pleases…