MS Sketch2Code uses AI to convert a picture of a wireframe to HTML – download and try

Description

Sketch2Code is a solution that uses AI to transform a picture of a hand-drawn user interface design into valid HTML markup.

Process flow

The transformation from a hand-drawn image to HTML proceeds as follows:

  1. The user uploads an image through the website.
  2. A custom vision model predicts what HTML elements are present in the image and their location.
  3. A handwritten text recognition service reads the text inside the predicted elements.
  4. A layout algorithm uses the spatial information from the bounding boxes of all predicted elements to generate a grid structure that accommodates them.
  5. An HTML generation engine combines all of this information to generate HTML markup reflecting the result.
Source: <a href="https://github.com/Microsoft/ailab/tree/master/Sketch2Code">Sketch2Code GitHub</a>
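Steps 4 and 5 can be sketched roughly as follows. This is a minimal, hypothetical illustration of grouping predicted bounding boxes into rows and emitting grid-style HTML, not the actual Sketch2Code implementation; the class and function names are placeholders.

```python
from dataclasses import dataclass

@dataclass
class Element:
    tag: str     # predicted HTML element, e.g. "button" (step 2)
    x: int       # bounding-box position from the vision model
    y: int
    text: str = ""   # text read by handwriting recognition (step 3)

def group_into_rows(elements, row_tolerance=20):
    """Step 4 sketch: bucket elements whose y-coordinates lie within
    row_tolerance pixels of each other into the same grid row."""
    rows = []
    for el in sorted(elements, key=lambda e: e.y):
        if rows and abs(rows[-1][0].y - el.y) <= row_tolerance:
            rows[-1].append(el)
        else:
            rows.append([el])
    for row in rows:
        row.sort(key=lambda e: e.x)  # left-to-right within a row
    return rows

def generate_html(rows):
    """Step 5 sketch: emit one <div> per grid row."""
    lines = []
    for row in rows:
        cells = "".join(f"<{e.tag}>{e.text}</{e.tag}>" for e in row)
        lines.append(f'<div class="row">{cells}</div>')
    return "\n".join(lines)

if __name__ == "__main__":
    elements = [
        Element("button", x=200, y=105, text="Send"),
        Element("h1", x=10, y=12, text="Contact"),
        Element("input", x=10, y=100, text="Name"),
    ]
    print(generate_html(group_into_rows(elements)))
```

The row-tolerance heuristic stands in for whatever layout logic the real engine uses; the point is that spatial coordinates alone are enough to recover a plausible grid.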

AI sucks at stopping online trolls spewing toxic comments

A group of researchers from Aalto University and the University of Padua found this out when they tested seven state-of-the-art models used to detect hate speech. All of them failed to recognize foul language when subtle changes were made, according to a paper [PDF] on arXiv.

Adversarial examples can be created automatically by using algorithms that misspell certain words, swap characters for numbers, add random spaces between words, or attach innocuous words such as ‘love’ to sentences.
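The perturbations described above are simple to automate. The sketch below, with hypothetical function names, shows the three kinds of transformation mentioned: character-to-number swaps, word-boundary changes, and appended innocuous words; it is an illustration of the attack idea, not the researchers' code.

```python
import random

# Look-alike digit substitutions ("leetspeak") for common letters.
LEET = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"}

def swap_chars_for_numbers(text: str) -> str:
    """Replace common letters with visually similar digits."""
    return "".join(LEET.get(c, c) for c in text)

def insert_random_spaces(text: str, rng: random.Random) -> str:
    """Break word boundaries by inserting a space inside each long word."""
    words = []
    for w in text.split():
        if len(w) > 3:
            i = rng.randint(1, len(w) - 1)
            w = w[:i] + " " + w[i:]
        words.append(w)
    return " ".join(words)

def append_innocuous_word(text: str, word: str = "love") -> str:
    """Append a benign word to dilute the classifier's toxicity score."""
    return f"{text} {word}"

if __name__ == "__main__":
    rng = random.Random(42)
    sample = "some toxic sentence"
    print(swap_chars_for_numbers(sample))
    print(insert_random_spaces(sample, rng))
    print(append_innocuous_word(sample))
```

A human still reads the perturbed text as the original insult, but a model trained only on clean spellings and intact word boundaries sees unfamiliar tokens and misclassifies it.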

The models failed to pick up on these adversarial examples, which successfully evaded detection. The tricks wouldn’t fool humans, but machine learning models are easily blindsided: they can’t readily adapt to new information beyond what’s been spoonfed to them during the training process.

“They perform well only when tested on the same type of data they were trained on. Based on these results, we argue that for successful hate speech detection, model architecture is less important than the type of data and labeling criteria. We further show that all proposed detection techniques are brittle against adversaries who can (automatically) insert typos, change word boundaries or add innocuous words to the original hate speech,” the paper’s abstract states.

Source: AI sucks at stopping online trolls spewing toxic comments • The Register