Chelsea Manning jailed for refusing to testify on Wikileaks

Former Army intelligence analyst Chelsea Manning, who served years in prison for leaking one of the largest troves of classified documents in U.S. history, has been sent to jail for refusing to testify before a grand jury investigating Wikileaks.

U.S. District Judge Claude Hilton ordered Manning to jail for contempt of court Friday after a brief hearing in which Manning confirmed she has no intention of testifying. She told the judge she “will accept whatever you bring upon me.”

Manning has said she objects to the secrecy of the grand jury process, and that she already revealed everything she knows at her court-martial.

The judge said she will remain jailed until she testifies or until the grand jury concludes its work.

[…]

Manning anticipated being jailed. In a statement before Friday’s hearing, she said she invoked her First, Fourth and Sixth amendment protections when she appeared before the grand jury in Alexandria on Wednesday. She said she already answered every substantive question during her 2013 court-martial, and is prepared to face the consequences of refusing to answer again.

“In solidarity with many activists facing the odds, I will stand by my principles. I will exhaust every legal remedy available,” she said.

Manning served seven years of a 35-year military sentence for leaking a trove of military and diplomatic documents to the anti-secrecy website before then-President Barack Obama commuted her sentence.

Source: Chelsea Manning jailed for refusing to testify on Wikileaks

Researchers are training image-generating AI with fewer labels by letting the model infer the labels

Generative AI models have a propensity for learning complex data distributions, which is why they’re great at producing human-like speech and convincing images of burgers and faces. But training these models requires lots of labeled data, and depending on the task at hand, the necessary corpora are sometimes in short supply.

The solution might lie in an approach proposed by researchers at Google and ETH Zurich. In a paper published on the preprint server arXiv.org (“High-Fidelity Image Generation With Fewer Labels”), they describe a “semantic extractor” that can pull out features from training data, along with methods of inferring labels for an entire training set from a small subset of labeled images. These self- and semi-supervised techniques together, they say, can outperform state-of-the-art methods on popular benchmarks like ImageNet.

“In a nutshell, instead of providing hand-annotated ground truth labels for real images to the discriminator, we … provide inferred ones,” the paper’s authors explained.

In one of several unsupervised methods the researchers posit, they first use the aforementioned semantic extractor to compute feature representations (automatically discovered encodings of the raw training images) for a target training dataset. They then perform cluster analysis, grouping the representations so that those in the same group share more in common with one another than with those in other groups. Lastly, they train a GAN (a two-part neural network in which a generator produces samples and a discriminator attempts to distinguish generated samples from real-world ones) using the inferred cluster labels.
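
The label-inference idea can be sketched with a toy example: given feature vectors for a whole dataset but ground-truth labels for only a handful of them, assign every example the label of its nearest class centroid in feature space. This is an illustrative simplification, not the paper's actual implementation; all names and numbers below are invented for the demo.

```python
import numpy as np

def infer_labels(features, labeled_idx, subset_labels):
    """Propagate labels from a small labeled subset to the full set
    by nearest-centroid assignment in feature space (toy sketch)."""
    classes = np.unique(subset_labels)
    labeled_feats = features[labeled_idx]
    # One centroid per known class, computed from the labeled subset only.
    centroids = np.stack([
        labeled_feats[subset_labels == c].mean(axis=0) for c in classes
    ])
    # Assign every feature vector to the closest class centroid.
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]

# Two well-separated synthetic clusters stand in for extracted features.
rng = np.random.default_rng(0)
feats = np.concatenate([
    rng.normal(0.0, 0.1, size=(50, 8)),   # "class 0" images
    rng.normal(5.0, 0.1, size=(50, 8)),   # "class 1" images
])
labeled_idx = np.array([0, 1, 50, 51])     # only 4 of 100 examples labeled
subset_labels = np.array([0, 0, 1, 1])
inferred = infer_labels(feats, labeled_idx, subset_labels)
```

The inferred labels can then play the role of hand-annotated ground truth when training the GAN's discriminator, which is the substitution the authors describe.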

Source: Researchers are training image-generating AI with fewer labels | VentureBeat

Google launches TensorFlow Lite 1.0 for mobile and embedded devices

Google today introduced TensorFlow Lite 1.0, its framework for developers deploying AI models on mobile and IoT devices. Improvements include selective registration and quantization during and after training for faster, smaller models. Quantization has yielded up to four times compression for some models.
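
The roughly 4x figure follows from the storage math: quantization replaces 32-bit float weights with 8-bit integers. A minimal numpy sketch of affine quantization, purely illustrative and not TensorFlow Lite's actual kernels:

```python
import numpy as np

def quantize(w):
    """Affine-quantize a float32 tensor to int8 (toy sketch)."""
    scale = (w.max() - w.min()) / 255.0          # one step of the int8 grid
    zero_point = np.round(-w.min() / scale) - 128
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float32 values from the int8 tensor."""
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.default_rng(1).normal(size=(256, 256)).astype(np.float32)
q, scale, zp = quantize(w)
compression = w.nbytes / q.nbytes   # 4 bytes per weight down to 1
```

Each weight now takes one byte instead of four, giving the 4x compression, at the cost of a small reconstruction error bounded by the quantization step `scale`.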

“We are going to fully support it. We’re not going to break things and make sure we guarantee its compatibility. I think a lot of people who deploy this on phones want those guarantees,” TensorFlow engineering director Rajat Monga told VentureBeat in a phone interview.

The Lite workflow begins with training an AI model in TensorFlow; the model is then converted to the Lite format for operation on mobile devices. Lite was first introduced at the I/O developer conference in May 2017 and entered developer preview later that year.

The TensorFlow Lite team at Google also shared its roadmap today, aimed at shrinking and speeding up AI models for edge deployment. Planned work includes model acceleration (especially for Android developers using neural nets), a Keras-based connection pruning kit, and additional quantization enhancements.

Other changes on the way:

  • Support for control flow, which is essential to the operation of models like recurrent neural networks
  • CPU performance optimization for Lite models, potentially involving partnerships with other companies
  • Expanded coverage of GPU delegate operations and a finalized, generally available API
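
On the first roadmap item: a recurrent network applies the same cell repeatedly in a loop over timesteps, carrying hidden state between iterations, which a purely static operation graph cannot express without loop constructs. A minimal numpy sketch of why that loop is unavoidable (all shapes and names here are illustrative):

```python
import numpy as np

def rnn_forward(x, W_xh, W_hh, b):
    """Run a vanilla RNN over a sequence. The Python loop is the
    'control flow' a runtime must support to execute such models:
    each step's output depends on the previous step's hidden state."""
    h = np.zeros(W_hh.shape[0])
    for x_t in x:                              # loop over timesteps
        h = np.tanh(x_t @ W_xh + h @ W_hh + b)
    return h

rng = np.random.default_rng(2)
seq = rng.normal(size=(10, 4))                 # 10 timesteps, 4 features each
W_xh = rng.normal(size=(4, 8))                 # input-to-hidden weights
W_hh = rng.normal(size=(8, 8))                 # hidden-to-hidden weights
b = np.zeros(8)
final_state = rnn_forward(seq, W_xh, W_hh, b)
```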

A TensorFlow 2.0 model converter for producing Lite models will be made available to help developers better understand what goes wrong in the conversion process and how to fix it.

TensorFlow Lite is deployed on more than two billion devices today, TensorFlow Lite engineer Raziel Alvarez said onstage at the TensorFlow Dev Summit, held at Google offices in Sunnyvale, California.

TensorFlow Lite is increasingly making TensorFlow Mobile obsolete, except for users who rely on the latter for training; a solution for that use case is in the works, Alvarez said.

Source: Google launches TensorFlow Lite 1.0 for mobile and embedded devices | VentureBeat