MS Sketch2Code uses AI to convert a picture of a wireframe to HTML – download and try

Description

Sketch2Code is a solution that uses AI to transform a picture of a handwritten user interface design into valid HTML markup.

Process flow

The transformation from handwritten image to HTML implemented by this solution works as follows:

  1. The user uploads an image through the website.
  2. A custom vision model predicts what HTML elements are present in the image and their location.
  3. A handwritten text recognition service reads the text inside the predicted elements.
  4. A layout algorithm uses the spatial information from all the bounding boxes of the predicted elements to generate a grid structure that accommodates all.
  5. An HTML generation engine combines all of this information to produce the final HTML markup.
Sketch2Code on GitHub: https://github.com/Microsoft/ailab/tree/master/Sketch2Code
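Steps 2 to 5 of the pipeline can be sketched in a few lines. The detected elements and their recognized text below are hard-coded stand-ins for the real vision and handwriting-recognition services, and the row-grouping layout and tag mapping are simplified illustrations, not Sketch2Code's actual algorithm:

```python
# Minimal sketch of the Sketch2Code-style pipeline: detections with
# bounding boxes -> row layout -> HTML. All element data is invented.

def rows_from_boxes(elements, y_tolerance=20):
    """Group detected elements into rows by the top edge of their boxes."""
    rows = []
    for el in sorted(elements, key=lambda e: e["box"][1]):  # sort by y
        for row in rows:
            if abs(row[0]["box"][1] - el["box"][1]) <= y_tolerance:
                row.append(el)
                break
        else:
            rows.append([el])
    for row in rows:
        row.sort(key=lambda e: e["box"][0])  # left-to-right within a row
    return rows

def to_html(elements):
    """Emit a grid: one <div> per row, one tag per detected element."""
    tag = {"button": "<button>{}</button>", "label": "<p>{}</p>",
           "textbox": '<input placeholder="{}">'}
    lines = ["<div>"]
    for row in rows_from_boxes(elements):
        cells = "".join(tag[e["kind"]].format(e["text"]) for e in row)
        lines.append(f'  <div class="row">{cells}</div>')
    lines.append("</div>")
    return "\n".join(lines)

# Pretend the vision model predicted these elements (box = x, y, w, h)
# and the text-recognition service read the labels:
detected = [
    {"kind": "label",   "box": (10, 12, 100, 20),  "text": "Name"},
    {"kind": "textbox", "box": (120, 10, 200, 24), "text": "your name"},
    {"kind": "button",  "box": (10, 60, 80, 24),   "text": "Submit"},
]
print(to_html(detected))
```

The label and the textbox share a row because their top edges are within the tolerance; the button lands in a second row below.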

AI sucks at stopping online trolls spewing toxic comments

A group of researchers from Aalto University and the University of Padua found this out when they tested seven state-of-the-art models used to detect hate speech. All of them failed to recognize foul language when subtle changes were made, according to a paper [PDF] on arXiv.

Adversarial examples can be created automatically by using algorithms to misspell certain words, swap characters for numbers or add random spaces between words or attach innocuous words such as ‘love’ in sentences.
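The perturbations described can be generated in a few lines. The swap table and examples below are invented for this sketch; the paper's attack algorithms are more involved:

```python
# Toy versions of the three automatic perturbations: look-alike character
# swaps, changed word boundaries, and an appended innocuous word.

LEET = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"}

def leet_swap(text):
    """Swap characters for look-alike digits."""
    return "".join(LEET.get(c, c) for c in text)

def break_words(text):
    """Insert a space inside each longer word, changing word boundaries."""
    return " ".join(w[:1] + " " + w[1:] if len(w) > 2 else w
                    for w in text.split())

def add_innocuous(text, word="love"):
    """Append an innocuous word to dilute the classifier's signal."""
    return f"{text} {word}"

print(leet_swap("idiots"))        # 1d10t5
print(break_words("you idiots"))  # y ou i diots
print(add_innocuous("you idiots"))
```

A human still reads the intent instantly; a model trained on clean text sees tokens it has never encountered.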

The models failed to pick up on the adversarial examples, which successfully evaded detection. These tricks wouldn't fool humans, but machine learning models are easily blindsided. They can't readily adapt to new information beyond what was spoon-fed to them during training.

“They perform well only when tested on the same type of data they were trained on. Based on these results, we argue that for successful hate speech detection, model architecture is less important than the type of data and labeling criteria. We further show that all proposed detection techniques are brittle against adversaries who can (automatically) insert typos, change word boundaries or add innocuous words to the original hate speech,” the paper’s abstract states.

Source: AI sucks at stopping online trolls spewing toxic comments • The Register

Google just put an AI in charge of keeping its data centers cool

Google is putting an artificial intelligence system in charge of its data center cooling after the system proved it could cut energy use.

Now Google and its AI company DeepMind are taking the project further; instead of recommendations being implemented by human staff, the AI system is directly controlling cooling in the data centers that run services including Google Search, Gmail and YouTube.

“This first-of-its-kind cloud-based control system is now safely delivering energy savings in multiple Google data centers,” Google said.

Data centers use vast amounts of energy, and as demand for cloud computing rises, even small tweaks to areas like cooling can produce significant energy and cost savings. Google's decision to use its own DeepMind-created system is also a good plug for its AI business.

Every five minutes, the AI pulls a snapshot of the data center cooling system from thousands of sensors. This data is fed into deep neural networks, which predict how different choices will affect future energy consumption.

The AI system then identifies tweaks that could reduce energy consumption, which are then sent back to the data center, checked by the local control system and implemented.
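The loop described above (snapshot, prediction, action selection, local safety check) can be sketched schematically. The sensor snapshot, the cost model standing in for the deep neural networks, and the safety limits below are all invented stand-ins; DeepMind's real system is far more sophisticated:

```python
# One five-minute tick of a DC-cooling control loop, schematically.

SAFE_SETPOINT_RANGE = (16.0, 24.0)  # hypothetical safe chiller setpoints, degC

def predict_energy(setpoint, load_kw):
    """Stand-in for the neural-network predictor: a toy cost model in
    which cooling harder (lower setpoint) costs more energy."""
    return load_kw * (1.0 + (24.0 - setpoint) * 0.05)

def choose_action(load_kw, candidates):
    """Pick the candidate setpoint the model predicts is cheapest."""
    return min(candidates, key=lambda s: predict_energy(s, load_kw))

def local_safety_check(setpoint):
    """The local control system vetoes anything outside safe limits."""
    lo, hi = SAFE_SETPOINT_RANGE
    return lo <= setpoint <= hi

load = 500.0  # kW, pretend snapshot from the thousands of sensors
action = choose_action(load, candidates=[15.0, 18.0, 21.0, 24.0])
if local_safety_check(action):
    print(f"apply setpoint {action} degC")
else:
    print("rejected by local control system")
```

The key design point survives the simplification: the AI only proposes; the local control system retains a veto before anything is implemented.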

Google said giving the AI more responsibility came at the request of its data center operators who said that implementing the recommendations from the AI system required too much effort and supervision.

“We wanted to achieve energy savings with less operator overhead. Automating the system enabled us to implement more granular actions at greater frequency, while making fewer mistakes,” said Google data center operator Dan Fuenffinger.

Source: Google just put an AI in charge of keeping its data centers cool | ZDNet

How AI Can Spot Exam Cheats and Raise Standards

AI is being deployed by those who set and mark exams to reduce fraud (which remains overall a small problem), to create far greater efficiencies in preparation and marking, and to help improve teaching and studying. From a report, which may be paywalled: From traditional paper-based exam and textbook producers such as Pearson to digital-native companies such as Coursera, online tools and artificial intelligence are being developed to reduce costs and enhance learning. For years, multiple-choice tests have allowed scanners to score results without human intervention. Now technology is coming directly into the exam hall.

Coursera has patented a system to take images of students and verify their identity against scanned documents. There are plagiarism detectors that can scan essay answers and search the web, or the work of other students, to identify copying. Webcams can monitor exam locations to spot malpractice. Even while students are working, they provide clues that can be used to clamp down on cheats: they leave electronic “fingerprints” such as keyboard pressure, typing speed and even writing style. Emily Glassberg Sands, Coursera’s head of data science, says: “We can validate their keystroke signatures. It’s difficult to prepare for someone hell-bent on cheating, but we are trying every way possible.”
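Keystroke-signature validation of the kind Coursera describes can be sketched as comparing a session's typing rhythm against an enrolled profile. The timing samples and the single-feature comparison below are toys; real systems model far richer features (key hold times, digraph latencies, pressure):

```python
# Toy keystroke-dynamics check: does this session's typing rhythm match
# the rhythm recorded when the student enrolled?
from statistics import mean

def profile(intervals_ms):
    """Summarize a typing sample as its mean inter-key interval."""
    return mean(intervals_ms)

def same_typist(enrolled, session, tolerance_ms=40):
    """Flag the session if its rhythm deviates too far from enrollment."""
    return abs(profile(enrolled) - profile(session)) <= tolerance_ms

enrolled = [120, 135, 110, 140, 125]   # hypothetical enrollment sample (ms)
genuine  = [118, 130, 122, 138, 127]
imposter = [60, 70, 65, 58, 72]        # much faster typist

print(same_typist(enrolled, genuine))   # True
print(same_typist(enrolled, imposter))  # False
```

Even this crude single feature separates the two samples; production systems combine many such features to keep false alarms low.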

Source: How AI Can Spot Exam Cheats and Raise Standards – Slashdot

Spyware Company Leaves ‘Terabytes’ of Selfies, Text Messages, and Location Data Exposed Online

A company that sells surveillance software to parents and employers left “terabytes of data” including photos, audio recordings, text messages and web history, exposed in a poorly-protected Amazon S3 bucket.


This story is part of When Spies Come Home, a Motherboard series about powerful surveillance software ordinary people use to spy on their loved ones.

A company that markets cell phone spyware to parents and employers left the data of thousands of its customers—and the information of the people they were monitoring—unprotected online.

The data exposed included selfies, text messages, audio recordings, contacts, location, hashed passwords and logins, Facebook messages, among others, according to a security researcher who asked to remain anonymous for fear of legal repercussions.

Last week, the researcher found the data on an Amazon S3 bucket owned by Spyfone, one of many companies that sell software that is designed to intercept text messages, calls, emails, and track locations of a monitored device.

Source: Spyware Company Leaves ‘Terabytes’ of Selfies, Text Messages, and Location Data Exposed Online – Motherboard

Woman sentenced to more than 5 years for leaking info about Russia hacking attempts. Trump still on the loose.

A former government contractor who pleaded guilty to leaking U.S. secrets about Russia’s attempts to hack the 2016 presidential election was sentenced Thursday to five years and three months in prison.

It was the sentence that prosecutors had recommended — the longest ever for a federal crime involving leaks to the news media — in the plea deal for Reality Winner, the Georgia woman at the center of the case. Winner was also sentenced to three years of supervised release and no fine, except for a $100 special assessment fee.

The crime carried a maximum penalty of 10 years. U.S. District Court Judge J. Randal Hall in Augusta, Georgia, was not bound to follow the plea deal, but elected to give Winner the amount of time prosecutors requested.

Source: Reality Winner sentenced to more than 5 years for leaking info about Russia hacking attempts

How a hacker network turned stolen press releases into $100 million

At a Kiev nightclub in the spring of 2012, 24-year-old Ivan Turchynov made a fateful drunken boast to some fellow hackers. For years, Turchynov said, he’d been hacking unpublished press releases from business newswires and selling them, via Moscow-based middlemen, to stock traders for a cut of the sizable profits.

Oleksandr Ieremenko, one of the hackers at the club that night, had worked with Turchynov before and decided he wanted in on the scam. With his friend Vadym Iermolovych, he hacked Business Wire, stole Turchynov’s inside access to the site, and pushed the main Muscovite ringleader, known by the screen name eggPLC, to bring them in on the scheme. The hostile takeover meant Turchynov was forced to split his business. Now there were three hackers in on the game.

Newswires like Business Wire are clearinghouses for corporate information, holding press releases, regulatory announcements, and other market-moving information under strict embargo before sending it out to the world. Over a period of at least five years, three US newswires were hacked using a variety of methods from SQL injections and phishing emails to data-stealing malware and illicitly acquired login credentials. Traders who were active on US stock exchanges drew up shopping lists of company press releases and told the hackers when to expect them to hit the newswires. The hackers would then upload the stolen press releases to foreign servers for the traders to access in exchange for 40 percent of their profits, paid to various offshore bank accounts. Through interviews with sources involved with both the scheme and the investigation, chat logs, and court documents, The Verge has traced the evolution of what law enforcement would later call one of the largest securities fraud cases in US history.

Source: How a hacker network turned stolen press releases into $100 million – The Verge

Android data slurping measured and monitored – scary amounts and loads of location tracking

Google’s passive collection of personal data from Android and iOS has been monitored and measured in a significant academic study.

The report confirms that Google is no respecter of the Chrome browser’s “incognito mode” aka “porn mode”, collecting Chrome data to add to your personal profile, as we pointed out earlier this year.

It also reveals how phone users are being tracked without realising it. How so? It’s here that the B2B parts of Google’s vast data collection network, its publisher and advertiser products, kick into life as soon as the user engages with a phone. These parts of Google receive personal data from an Android device even when the phone is stationary and not being used.

The activity has come to light thanks to research (PDF) by computer science professor Douglas Schmidt of Vanderbilt University, conducted for the nonprofit trade association Digital Content Next. It’s already been described by one privacy activist as “the most comprehensive report on Google’s data collection practices so far”.

[…]

Overall, the study discovered that Apple retrieves much less data than Google.

“The total number of calls to Apple servers from an iOS device was much lower, just 19 per cent the number of calls to Google servers from an Android device.

“Moreover, there are no ad-related calls to Apple servers, which may stem from the fact that Apple’s business model is not as dependent on advertising as Google’s. Although Apple does obtain some user location data from iOS devices, the volume of data collected is much (16x) lower than what Google collects from Android,” the study noted.

Source: Android data slurping measured and monitored • The Register

The amount of location data slurped is scary, and the phone continues to slurp location in many different ways, even if Wi-Fi is turned off. It’s Big Brother in your pocket, with no opt-out.

Bitcoin mining now apparently accounts for almost one percent of the world’s energy consumption

According to testimony provided by Princeton computer scientist Arvind Narayanan to the Senate Committee on Energy and Natural Resources, no matter what you do to make cryptocurrency mining hardware greener, it’s a drop in the bucket compared to the overall network’s flabbergasting energy consumption. Instead, Narayanan told the committee, the only thing that really determines how much energy Bitcoin uses is its price. “If the price of a cryptocurrency goes up, more energy will be used in mining it; if it goes down, less energy will be used,” he told the committee. “Little else matters. In particular, the increasing energy efficiency of mining hardware has essentially no impact on energy consumption.”
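Narayanan's point follows from back-of-envelope economics: rational miners keep adding machines until electricity spend approaches mining revenue, so the coin's price (times the block reward) caps network energy use, regardless of how efficient any individual rig is. The electricity price and BTC price below are illustrative figures, not data from the testimony:

```python
# Upper bound on network power: the level at which daily electricity
# cost would equal daily mining revenue. Doubling hardware efficiency
# doubles hashes per kWh but leaves this ceiling untouched.

def equilibrium_power_gw(btc_price_usd, btc_per_day, usd_per_kwh):
    """Average power draw at which electricity cost equals revenue."""
    daily_revenue_usd = btc_price_usd * btc_per_day
    kwh_per_day = daily_revenue_usd / usd_per_kwh
    return kwh_per_day / 24 / 1e6  # kWh/day -> average gigawatts

# Roughly 1,800 BTC minted per day in 2018 (144 blocks x 12.5 BTC),
# with an assumed $7,000 BTC price and $0.05/kWh electricity:
print(equilibrium_power_gw(btc_price_usd=7000, btc_per_day=1800,
                           usd_per_kwh=0.05))
```

This yields an upper bound around 10 GW; actual consumption sits below it (Narayanan's 5 GW estimate) because miners also pay for hardware and take profit, but the bound moves with price, not with efficiency.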

In his testimony, Narayanan estimates that Bitcoin mining now draws about five gigawatts of electricity (in May, estimates of Bitcoin power consumption were about half that). He adds that when you’ve got a computer racing with all its might to earn a free Bitcoin, it’s going to be running hot as hell, which means you’re probably using even more electricity to keep the computer cool so it doesn’t die and/or burn down your entire mining center, which makes the overall cost of mining higher still.

Source: Bitcoin mining now accounts for almost one percent of the world’s energy consumption | The Outline

Huawei reverses its stance, will no longer allow bootloader unlocking – will lose many customers

In order to deliver the best user experience and prevent users from experiencing possible issues that could arise from ROM flashing, including system failure, stuttering, worsened battery performance, and risk of data being compromised, Huawei will cease providing bootloader unlock codes for devices launched after May 25, 2018. For devices launched prior to the aforementioned date, the termination of the bootloader code application service will come into effect 60 days after today’s announcement. Moving forward, Huawei remains committed to providing quality services and experiences to its customers. Thank you for your continued support.

When you take into consideration that Huawei — for years — not only supported the ROM community but actively assisted in the unlocking of Huawei bootloaders, this whole switch-up doesn’t make much sense. But, that’s the official statement, so do with it what you will.


Original Article: For years now, the custom ROM development community has flocked to Huawei phones. One of the major reasons for this is because Huawei made it incredibly easy to unlock the bootloaders of its devices, even providing a dedicated support page for the process.

Source: Huawei reverses its stance, will no longer allow bootloader unlocking

Oi, clickbait cop bot, jam this in your neural net: Hot new AI threatens to DESTROY web journos

Artificially intelligent software has been trained to detect and flag up clickbait headlines.

And here at El Reg we say thank God Larry Wall for that. What the internet needs right now is software to highlight and expunge dodgy article titles about space alien immigrants, faked moon landings, and the like.

Machine-learning eggheads continue to push the boundaries of natural language processing, and have crafted a model that can, supposedly, detect how clickbait-y a headline really is.

The system uses a convolutional neural network that converts the words in a submitted article title into vectors. These numbers are fed into a long short-term memory (LSTM) network that spits out a score based on the headline’s clickbait strength. About eight times out of ten it agreed with humans on whether a title was clickbaity or not, we’re told.
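The shape of that pipeline (word vectors, a convolution over them, a recurrent pass, a score) can be shown with a dependency-free toy. The hashing "embeddings", the window average standing in for the convolution, and the running-state loop standing in for the LSTM are all invented to show the data flow; this is not a trained or faithful model:

```python
# Toy, untrained stand-in for: word vectors -> conv -> LSTM -> score.
import hashlib
import math

def word_vec(word, dim=8):
    """Deterministic pseudo-embedding from a hash of the word."""
    h = hashlib.md5(word.lower().encode()).digest()
    return [b / 255.0 for b in h[:dim]]

def conv1d(vectors, window=2):
    """'Convolution': elementwise mean over each sliding window of words."""
    out = []
    for i in range(len(vectors) - window + 1):
        cols = zip(*vectors[i:i + window])
        out.append([sum(c) / window for c in cols])
    return out or vectors  # too few words: pass vectors through

def recurrent_score(features):
    """Stand-in for the LSTM: fold features into one running state,
    then squash to a 0-1 'clickbaitiness' score."""
    state = 0.0
    for feat in features:
        state = 0.5 * state + 0.5 * sum(feat) / len(feat)
    return 1 / (1 + math.exp(-4 * (state - 0.5)))

def clickbait_score(headline):
    return recurrent_score(conv1d([word_vec(w) for w in headline.split()]))

print(round(clickbait_score("You Won't BELIEVE What Happened Next"), 3))
```

In the real system each of these stages has learned weights; here they only demonstrate how a headline flows from tokens to a single score.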

The trouble is, what exactly is a clickbait headline? It’s a tough question. The AI’s team – from the International Institute of Information Technology in Hyderabad, the Manipal Institute of Technology, and Birla Institute of Technology, in India – decided to rely on the venerable Merriam-Webster dictionary to define clickbait.

Source: Oi, clickbait cop bot, jam this in your neural net: Hot new AI threatens to DESTROY web journos • The Register

Facebook Wanted to Kill This Investigative ‘People You May Know’ Tool

Last year, we launched an investigation into how Facebook’s People You May Know tool makes its creepily accurate recommendations. By November, we had it mostly figured out: Facebook has nearly limitless access to all the phone numbers, email addresses, home addresses, and social media handles most people on Earth have ever used. That, plus its deep mining of people’s messaging behavior on Android, means it can make surprisingly insightful observations about who you know in real life—even if it’s wrong about your desire to be “friends” with them on Facebook.

In order to help conduct this investigation, we built a tool to keep track of the people Facebook thinks you know. Called the PYMK Inspector, it captures every recommendation made to a user for however long they want to run the tool. It’s how one of us discovered Facebook had linked us with an unknown relative. In January, after hiring a third party to do a security review of the tool, we released it publicly on Github for users who wanted to study their own People You May Know recommendations. Volunteers who downloaded the tool helped us explore whether you’ll show up in someone’s People You May Know after you look at their profile. (Good news for Facebook stalkers: Our experiment found you won’t be recommended as a friend just based on looking at someone’s profile.)

Facebook wasn’t happy about the tool.

The day after we released it, a Facebook spokesperson reached out asking to chat about it, and then told us that the tool violated Facebook’s terms of service, because it asked users to give it their username and password so that it could sign in on their behalf. Facebook’s TOS states that, “You will not solicit login information or access an account belonging to someone else.” They said we would need to shut down the tool (which was impossible because it’s an open source tool) and delete any data we collected (which was also impossible because the information was stored on individual users’ computers; we weren’t collecting it centrally).

We argued that we weren’t seeking access to users’ accounts or collecting any information from them; we had just given users a tool to log into their own accounts on their own behalf, to collect information they wanted collected, which was then stored on their own computers. Facebook disagreed and escalated the conversation to their head of policy for Facebook’s Platform, who said they didn’t want users entering their Facebook credentials anywhere that wasn’t an official Facebook site—because anything else is bad security hygiene and could open users up to phishing attacks. She said we needed to take our tool off Github within a week.

Source: Facebook Wanted Us to Kill This Investigative Tool

It’s either legal to port-scan someone without consent or it’s not, fumes researcher: Halifax bank port scans you when you visit the page

Halifax Bank scans the machines of surfers that land on its login page whether or not they are customers, it has emerged.

Security researcher Paul Moore has made his objection to this practice – in which the British bank is not alone – clear, even though it is done for good reasons. The researcher claimed that performing port scans on visitors without permission is a violation of the UK’s Computer Misuse Act (CMA).

Halifax has disputed this, arguing that the port scans help it pick up evidence of malware infections on customers’ systems. The scans are legal, Halifax told Moore in response to a complaint he made on the topic last month.

When you visit the Halifax login page, even before you’ve logged in, JavaScript on the site, running in the browser, attempts to scan for open ports on your local computer to see if remote desktop or VNC services are running, and looks for some general remote access trojans (RATs) – backdoors, in other words. Crooks are known to abuse these remote services to snoop on victims’ banking sessions.
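The browser-side check described can be approximated outside the browser with a plain TCP connect scan of localhost. The ports below are the well-known defaults for the remote-access services mentioned (RDP, VNC) plus TeamViewer; this illustrates the technique, not Halifax's actual script:

```python
# TCP connect scan of localhost for common remote-access service ports.
import socket

REMOTE_ACCESS_PORTS = {3389: "RDP", 5900: "VNC", 5938: "TeamViewer"}

def scan_localhost(ports, timeout=0.25):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = {}
    for port, service in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex(("127.0.0.1", port)) == 0:  # 0 means connected
                open_ports[port] = service
    return open_ports

found = scan_localhost(REMOTE_ACCESS_PORTS)
print(found or "no remote-access services listening")
```

An open RDP or VNC port doesn't prove compromise, only that a remote-access service is listening, which is exactly why Moore argues such probing needs consent.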

Moore said he wouldn’t have an issue if Halifax carried out the security checks on people’s computers after they had logged on. It’s the lack of consent and the scanning of any visitor that bothers him. “If they ran the script after you’ve logged in… they’d end up with the same end result, but they wouldn’t be scanning visitors, only customers,” Moore said.

Halifax told Moore: “We have to port scan your machine for security reasons.”

Having failed to either persuade Halifax Bank to change its practices or Action Fraud to act (thus far), Moore last week launched a fundraising effort to privately prosecute Halifax Bank for allegedly breaching the Computer Misuse Act. This crowdfunding effort on GoFundMe aims to gather £15,000 (so far just £50 has been raised).

Halifax Bank’s “unauthorised” port scans are a clear violation of the CMA, and amount to the sort of action security researchers are frequently criticised and/or convicted for, Moore argued. The CISO and part-time security researcher hopes his efforts in this matter might result in a clarification of the law.

“Ultimately, we can’t have it both ways,” Moore told El Reg. “It’s either legal to port scan someone without consent, or with consent but no malicious intent, or it’s illegal and Halifax need to change their deployment to only check customers, not visitors.”

The whole effort might smack of tilting at windmills, but Moore said he was acting on a point of principle.

“If security researchers operate in a similar fashion, we almost always run into the CMA, even if their intent isn’t malicious. The CMA should be applied fairly to both parties.”

Source: Bank on it: It’s either legal to port-scan someone without consent or it’s not, fumes researcher • The Register

Critical OpenEMR Flaws Left Medical Records Vulnerable

Security researchers have found more than 20 bugs in the world’s most popular open source software for managing medical records. Many of the vulnerabilities were classified as severe, leaving the personal information of an estimated 90 million patients exposed to bad actors.

OpenEMR is open source software that’s used by medical offices around the world to store records, handle schedules, and bill patients. According to researchers at Project Insecurity, it was also a bit of a security nightmare before a recent audit recommended a range of vital fixes.

The firm reached out to OpenEMR in July to discuss concerns it had about the software’s code. On Tuesday a report was released detailing the issues that included: “a portal authentication bypass, multiple instances of SQL injection, multiple instances of remote code execution, unauthenticated information disclosure, unrestricted file upload, CSRFs including a CSRF to RCE proof of concept, and unauthenticated administrative actions.”
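The most common bug class in that list, SQL injection, is easy to demonstrate generically (OpenEMR itself is PHP; the snippet below is a language-neutral illustration, not its actual code). Building a query by string concatenation lets crafted input change the query's meaning; a parameterized query keeps the input as pure data:

```python
# SQL injection vs. a parameterized query, using an in-memory database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (name TEXT, record TEXT)")
conn.execute(
    "INSERT INTO patients VALUES ('alice', 'private'), ('bob', 'secret')")

malicious = "nobody' OR '1'='1"

# Vulnerable: the payload escapes the quotes and matches every row.
unsafe = conn.execute(
    f"SELECT record FROM patients WHERE name = '{malicious}'").fetchall()

# Safe: the driver binds the value as data, so nothing matches.
safe = conn.execute(
    "SELECT record FROM patients WHERE name = ?", (malicious,)).fetchall()

print(len(unsafe), len(safe))  # 2 0
```

The unsafe query returns every patient record; the parameterized one returns none, which is the whole fix for this bug class.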

Eighteen of the bugs were designated as having a “high” severity and could’ve been exploited by hackers with low-level access to systems running the software. Patches have been released to users and cloud customers.

OpenEMR’s project administrator Brady Miller told the BBC, “The OpenEMR community takes security seriously and considered this vulnerability report high priority since one of the reported vulnerabilities did not require authentication.”

Source: Critical OpenEMR Flaws Left Medical Records Vulnerable

Facebook: We’re not asking for financial data, we’re just partnering with banks

Facebook is pushing back against a report in Monday’s Wall Street Journal that the company is asking major banks to provide private financial data.

The social media giant has reportedly had talks with JPMorgan Chase, Wells Fargo, Citigroup, and US Bancorp to discuss proposed features including fraud alerts and checking account balances via Messenger.

Elisabeth Diana, a Facebook spokeswoman, told Ars that while the WSJ reported that Facebook has “asked” banks “to share detailed financial information about their customers, including card transactions and checking-account balances,” this isn’t quite right.

“Like many online companies with commerce businesses, we partner with banks and credit card companies to offer services like customer chat or account management,” she said in a statement on behalf of the social media giant. “Account linking enables people to receive real-time updates in Facebook Messenger where people can keep track of their transaction data like account balances, receipts, and shipping updates. The idea is that messaging with a bank can be better than waiting on hold over the phone—and it’s completely opt-in. We’re not using this information beyond enabling these types of experiences—not for advertising or anything else.”

Diana further explained that account linking is already live with PayPal, Citi in Singapore, and American Express in the United States.

“We’re not shoring up financial data,” she added.

In recent months, Facebook has been scrutinized for its approach to user privacy.

Late last month, Facebook CFO David Wehner said, “We are also giving people who use our services more choices around data privacy, which may have an impact on our revenue growth.”

Source: Facebook: We’re not asking for financial data, we’re just partnering with banks | Ars Technica

But should you opt in, your financial data just happens to then belong to Facebook, to do with as it pleases…