What Is Ultra-Processed Food?

We eat a lot of ultra-processed food, and these foods tend to be sugary and not so great for us. But the problem isn’t necessarily the fact that they’re ultra-processed. This is a weird and arguably unfair way to categorize foods, so let’s take a look at what “ultra-processed” really means.

This terminology comes from a classification scheme called NOVA that splits foods into four groups:

Unprocessed or “minimally processed” foods (group 1) include fruits, vegetables, and meats. Perhaps you’ve pulled a carrot out of the ground and washed it, or killed a cow and sliced off a steak. Foods in this category can be processed in ways that don’t add extra ingredients. They can be cooked, ground, dried, or frozen.

Processed culinary ingredients (group 2) include sugar, salt, and oils. If you combine ingredients in this group, for example to make salted butter, they stay in this group.

Processed foods (group 3) are what you get when you combine groups 1 and 2. Bread, wine, and canned veggies are included. Additives are allowed if they “preserve [a food’s] original properties” like ascorbic acid added to canned fruit to keep it from browning.

Ultra-processed foods (group 4) don’t have a strict definition, but NOVA hints at some properties. They “typically” have five or more ingredients. They may be aggressively marketed and highly profitable. A food is automatically in group 4 if it includes “substances not commonly used in culinary preparations, and additives whose purpose is to imitate sensory qualities of group 1 foods or of culinary preparations of these foods, or to disguise undesirable sensory qualities of the final product.”
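To make the rules above concrete, here is a rough Python sketch of how they are often applied in practice. It is a simplification, not an official NOVA algorithm: the marker-additive list, the tiny set of "culinary" ingredients, and the function itself are illustrative assumptions.

```python
# Hypothetical sketch of NOVA-style classification, not an official algorithm.
# Group 4 membership hinges on "marker" substances not commonly used in home cooking.
MARKER_ADDITIVES = {"high-fructose corn syrup", "hydrolysed protein",
                    "flavour enhancer", "emulsifier", "colour"}   # illustrative examples
CULINARY = {"sugar", "salt", "oil", "butter"}                     # group 2 staples (illustrative)

def nova_group(ingredients, additives):
    """Return a NOVA group (1-4) for a food, given sets of ingredients and additives."""
    if additives & MARKER_ADDITIVES:
        return 4                                # any industrial marker puts it in group 4
    if ingredients and ingredients <= CULINARY:
        return 2                                # only culinary ingredients, e.g. salted butter
    if ingredients & CULINARY:
        return 3                                # group 1 foods combined with group 2 ingredients
    return 1                                    # unprocessed or minimally processed

print(nova_group({"carrot"}, set()))                                   # 1
print(nova_group({"wheat flour", "water", "salt", "yeast"}, set()))    # 3 (bread)
print(nova_group({"wheat flour", "sugar"}, {"emulsifier", "colour"}))  # 4
```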

That last rule about additives feels a little disingenuous. I’ve definitely seen things in my kitchen that are supposedly only used to make “ultra-processed” foods: food coloring, flavor extracts, artificial sweeteners, anti-caking agents (cornstarch, anyone?) and tools for extrusion and molding, to name a few.
[…]
So when we talk about ultra-processed foods, we have to remember that it’s a vague category that only loosely tracks the nutritional quality of the foods in it. Just like BMI lumps muscly athletes in with obese people because it makes for convenient math, NOVA categories lump together foods of drastically different nutritional quality.

Source: What Is Ultra-Processed Food?

LoopX Startup Pulls ICO Exit Scam and Disappears with $4.5 Million

A cryptocurrency startup named LoopX has pulled an exit scam after collecting around $4.5 million from users during an ICO (Initial Coin Offering) held over the past few weeks.

The LoopX team disappeared out of the blue at the start of the week when it took down its website and deleted its Facebook, Telegram, and YouTube channels without any explanation.

The company’s former Twitter profile now lists only one tweet, a link to a TheNextWeb article detailing the exit scam, but it is unclear if the LoopX team posted this link themselves, or if somebody else claimed the account name after it was vacated.

Victims tracking funds as they dissipate

People who invested in the startup are now tracking the funds as they move from account to account in a BitcoinTalk forum thread, and banding together in the hope of filing a class-action lawsuit.

Before the site went down, LoopX claimed to have gathered $4.5 million of the $12 million it wanted to raise to build a new cryptocurrency trading mobile app based on a proprietary trading algorithm.

In an email sent to customers last week, the LoopX owners made the now-ironic statement: “We will have some more surprises for you throughout the week. Stay tuned!”

This was probably not the surprise many users were expecting, but some did spot red flags in the LoopX operation and tried to warn would-be investors last month via LoopX’s official Reddit channel.

Source: LoopX Startup Pulls ICO Exit Scam and Disappears with $4.5 Million

Telegram desktop app exploited for malware, cryptocurrency mining

Telegram has fixed a security flaw in its desktop app that hackers spent several months exploiting to install remote-control malware and cryptocurrency miners on vulnerable Windows PCs.

The programming cockup was spotted by researchers at Kaspersky in October. It is believed miscreants have been leveraging the bug since at least March. The vulnerability stems from how the desktop chat app handles Unicode characters for languages that are read right-to-left, such as Hebrew and Arabic.
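The root of the trick is the Unicode right-to-left override character, U+202E. Here is a minimal Python sketch of the general idea; the filename is illustrative, and the exact names used in the attacks may have differed.

```python
# Minimal demo of the right-to-left override (RLO) trick: the string below
# is really a .js filename, but a bidi-aware UI draws everything after the
# RLO mark right-to-left, so it displays roughly as "photo_high_resj.png".
RLO = "\u202e"  # U+202E RIGHT-TO-LEFT OVERRIDE

actual_name = "photo_high_re" + RLO + "gnp.js"

print(actual_name.endswith(".js"))   # True: it is a script, not an image
print(repr(actual_name))             # shows the embedded \u202e control character
```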

Source: Shock horror! Telegram messaging app proves insecure yet again! • The Register

While Western Union wired customers’ money, hackers transferred their personal details. WU won’t tell us what exactly was hacked

A Register reader, who wished to remain anonymous, showed us a copy of a letter dated January 31 that he received from the money-transfer outfit. The missive admitted that a supposedly secure data storage company used by Western Union was compromised: a database full of the wire-transfer giant's customer records was vulnerable to plundering, and hackers were quick to oblige. [...]

According to the letter, the storage archive contained customers' contact details, bank names, Western Union internal customer ID numbers, as well as transaction amounts, times and identification numbers. Credit card data was definitely not taken, it stressed. [...]

The red-faced biz was quick to point out that none of its internal payment or financial systems were affected in the attack. It also isn’t saying who the third-party storage supplier was, denying other customers of the slovenly provider the chance to check whether or not they have been hacked too.

Western Union says that, so far, it isn't aware of any fraudulent activity stemming from the data security cockup, but just to be on the safe side it is enrolling affected customers in a year of free identity-fraud protection.

Source: While Western Union wired customers’ money, hackers transferred their personal deets • The Register

Moth brain uploaded to computer, taught to recognise numbers

MothNet’s computer code, according to the boffins, contains layers of artificial neurons to simulate the bug’s antenna lobe and mushroom body, which are common parts of insect brains.

Crucially, instead of recognizing smells, the duo taught MothNet to identify handwritten digits in the MNIST dataset. This database is often used to train and test pattern recognition in computer vision applications.

The academics used supervised learning to train MothNet, feeding it about 15 to 20 images of each digit from zero to nine, and rewarding it when it recognized the numbers correctly.

Receptor neurons in the artificial brain processed the incoming images, and passed the information down to the antenna lobe, which learned the features of each number. This lobe was connected, by a set of projection neurons, to the sparse mushroom body. This section was wired up to extrinsic neurons, each ultimately representing an individual integer between zero and nine.
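A loose structural sketch of that wiring in Python/NumPy, just to make the layer-by-layer description concrete. This is not the authors' code: the layer sizes, the tanh nonlinearity, and the top-5% sparsity rule are assumptions, and the real MothNet is trained with a biologically inspired reward signal rather than anything shown here.

```python
import numpy as np

# Hypothetical sketch of the described wiring: receptors -> antennal lobe ->
# sparse mushroom body -> 10 extrinsic readout neurons (one per digit).
rng = np.random.default_rng(0)

n_receptors = 28 * 28   # one receptor per MNIST pixel (assumption)
n_lobe      = 60        # antennal-lobe units (illustrative size)
n_mushroom  = 2000      # large, sparsely active mushroom body (illustrative size)
n_readout   = 10        # one extrinsic neuron per digit 0-9

W_lobe     = rng.normal(scale=0.1, size=(n_receptors, n_lobe))
W_mushroom = rng.normal(scale=0.1, size=(n_lobe, n_mushroom))
W_readout  = np.zeros((n_mushroom, n_readout))   # learned from ~15-20 examples per digit

def forward(image):
    """Propagate one flattened 28x28 image through the sketch."""
    lobe = np.tanh(image @ W_lobe)
    mb_input = lobe @ W_mushroom
    # keep only the top 5% most active mushroom-body units (sparsity assumption)
    threshold = np.quantile(mb_input, 0.95)
    mushroom = np.where(mb_input >= threshold, mb_input, 0.0)
    return mushroom @ W_readout                  # scores for digits 0-9

print(forward(rng.random(n_receptors)).shape)    # (10,)
```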
[…]
MothNet achieved 75 per cent to 85 per cent accuracy, the paper stated, despite relatively few training examples, seemingly outperforming more traditional neural networks when given the same amount of training data.
[…]
It shows that the simplest biological neural network of an insect brain can be taught simple image recognition tasks, and potentially exceed other models when training examples and processing resources are scarce. The researchers believe that these biological neural networks (BNNs) can be “combined and stacked into larger, deeper neural nets.”

Source: Roses are red, are you single, we wonder? ‘Cos this moth-brain AI can read your phone number • The Register

Roses are red, Facebook is blue. Think private means private? More fool you

In a decision (PDF) handed down yesterday, chief judge Janet DiFiore said that a court could ask someone to hand over any relevant materials as part of discovery ahead of a trial – even if they are private.

The threshold for disclosure in a court case “is not whether the materials sought are private but whether they are reasonably calculated to contain relevant information”, she said.

The ruling is the latest in an ongoing battle over whether a woman injured in a horse-riding accident should hand over privately posted pictures to the man she has accused of negligence in the accident.

Kelly Forman suffered spinal and brain injuries after falling from a horse owned by Mark Henkins, whom she accuses of fitting her with a faulty stirrup.

Forman said the accident had led to memory loss and difficulty communicating, which she said caused her to become reclusive and have problems using a computer or composing coherent messages.

Because Forman said she had been a regular Facebook user before the accident, Henkins sought an order to gain access to posts and photos she made privately on Facebook before and after the accident, saying this would provide evidence on how her lifestyle had been affected.

For instance, the court noted he argued that “the timestamps on Facebook messages would reveal the amount of time it takes the plaintiff to write a post or respond to a message”.
[…]
The judge acknowledged Forman’s argument that disclosure of social media materials posted under private settings was an “unjustified invasion of privacy”, but said that other private materials relevant to litigation – including medical records – can be ordered for disclosure.

DiFiore also noted that, although the court was assuming, for the purposes of resolving the case, that setting a post to “private” meant it should be characterised as such, there was “significant controversy” about this.

“Views range from the position taken by plaintiff that anything shielded by privacy settings is private, to the position taken by one commentator that anything contained in a social media website is not ‘private’,” she pointed out in a footnote.

Source: Roses are red, Facebook is blue. Think private means private? More fool you • The Register

MPEG-2 now patent-free!

This is the list of patents (Attachment 1) covered by the MPEG-2 Patent Portfolio License as of January 1, 2018. Under the MPEG-2 Patent Portfolio License, royalties are payable for products manufactured or sold in countries with an active MPEG-2 Patent Portfolio Patent at the time of manufacture or sale. Please note that the last US patent expired February 13, 2018, and patents remain active in the Philippines and Malaysia after that date.

Source: PatentList

Look out, Wiki-geeks. Now Google trains AI to write Wikipedia articles

A paper, out last month and just accepted for this year’s International Conference on Learning Representations (ICLR) in April, describes just how difficult text summarization really is.

A few companies have had a crack at it. Salesforce trained a recurrent neural network with reinforcement learning to take information and retell it in a nutshell, and the results weren’t bad.

However, the computer-generated sentences were simple and short; they lacked the creative flair and rhythm of text written by humans. Google Brain’s latest effort is slightly better: the sentences are longer and seem more natural.
[…]
The model works by taking the top ten web pages of a given subject – excluding the Wikipedia entry – or scraping information from the links in the references section of a Wikipedia article. Most of the selected pages are used for training, and a few are kept back to develop and test the system.

The paragraphs from each page are ranked, and the text from all the pages is combined to create one long document. That text is encoded and shortened by splitting it into 32,000 individual words, which are used as input.

This is then fed into an abstractive model, where the long sentences in the input are cut shorter. It’s a clever trick used to both create and summarize text. The generated sentences are taken from the earlier extraction phase and aren’t built from scratch, which explains why the structure is pretty repetitive and stiff.
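A highly simplified, hypothetical sketch of that extract-then-abstract pipeline, just to fix the moving parts in mind. None of this is Google's code: the ranking heuristic is a toy, and the abstractive stage (a large neural sequence model in the paper) is stood in for by a placeholder function.

```python
from collections import Counter

TOKEN_BUDGET = 32_000   # the input length mentioned above

def rank_paragraphs(paragraphs, topic_terms):
    """Toy extractive ranking: score each paragraph by topic-term overlap."""
    def score(p):
        counts = Counter(p.lower().split())
        return sum(counts[t] for t in topic_terms)
    return sorted(paragraphs, key=score, reverse=True)

def build_input(pages, topic_terms):
    """Concatenate the highest-ranked paragraphs until the token budget is reached."""
    paragraphs = [p for page in pages for p in page.split("\n\n") if p.strip()]
    tokens = []
    for p in rank_paragraphs(paragraphs, topic_terms):
        words = p.split()
        if len(tokens) + len(words) > TOKEN_BUDGET:
            break
        tokens.extend(words)
    return " ".join(tokens)

def abstractive_summarize(text):
    """Placeholder for the abstractive model that rewrites the extracted text."""
    return text[:200] + "..."

pages = ["Foxes are small omnivorous mammals.\n\nAn unrelated paragraph about badgers."]
print(abstractive_summarize(build_input(pages, ["fox", "foxes"])))
```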

Mohammad Saleh, co-author of the paper and a software engineer in Google AI’s team, told The Register: “The extraction phase is a bottleneck that determines which parts of the input will be fed to the abstraction stage. Ideally, we would like to pass all the input from reference documents.

“Designing models and hardware that can support longer input sequences is currently an active area of research that can alleviate these limitations.”

We are still a very long way off from effective text summarization or generation. And while the Google Brain project is rather interesting, it would probably be unwise to use a system like this to automatically generate Wikipedia entries. For now, anyway.

Source: Look out, Wiki-geeks. Now Google trains AI to write Wikipedia articles • The Register

Gfycat Uses Artificial Intelligence to Fight Deepfakes Porn

Gfycat says it’s figured out a way to train an artificial intelligence to spot fraudulent videos. The technology builds on a number of tools Gfycat already used to index the GIFs on its platform.
[…]
Gfycat’s AI approach leverages two tools it already developed, both (of course) named after felines: Project Angora and Project Maru. When a user uploads a low-quality GIF of, say, Taylor Swift to Gfycat, Project Angora can search the web for a higher-res version to replace it with. In other words, it can find the same clip of Swift singing “Shake It Off” and upload a nicer version.

Now let’s say you don’t tag your clip “Taylor Swift.” Not a problem. Project Maru can purportedly differentiate between individual faces and will automatically tag the GIF with Swift’s name. This makes sense from Gfycat’s perspective—it wants to index the millions of clips users upload to the platform monthly.

Here’s where deepfakes come in. Created by amateurs, most deepfakes aren’t entirely believable. If you look closely, the frames don’t quite match up; in one example clip, Donald Trump’s face doesn’t completely cover Angela Merkel’s throughout. Your brain does some of the work, filling in the gaps where the technology failed to turn one person’s face into another.

Project Maru is not nearly as forgiving as the human brain. When Gfycat’s engineers ran deepfakes through its AI tool, it would register that a clip resembled, say, Nicolas Cage, but not enough to issue a positive match, because the face isn’t rendered perfectly in every frame. Using Maru is one way that Gfycat can spot a deepfake—it smells a rat when a GIF only partially resembles a celebrity.
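A hypothetical sketch of that "partial resemblance" heuristic. This is not Gfycat's pipeline: the per-frame similarity scores, thresholds, and function name are assumptions, used only to illustrate the idea of a clip that strongly resembles a celebrity in some frames but not consistently across all of them.

```python
def looks_like_deepfake(frame_scores, match_threshold=0.8, consistency=0.9):
    """Flag a clip whose frames only partially match a known face.

    frame_scores: hypothetical per-frame similarity scores in [0, 1] against
    the best-matching celebrity, e.g. from a face-recognition model.
    """
    if not frame_scores:
        return False
    matching = sum(score >= match_threshold for score in frame_scores)
    fraction = matching / len(frame_scores)
    # A strong but inconsistent resemblance is the suspicious signature.
    return 0 < fraction < consistency

print(looks_like_deepfake([0.95, 0.90, 0.40, 0.92, 0.30]))  # True: partial match
print(looks_like_deepfake([0.95] * 5))                      # False: consistent match
```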

Source: Gfycat Uses Artificial Intelligence to Fight Deepfakes Porn | WIRED