The Linkielist

Linking ideas with the world


Facebook is using AI to understand videos and create new products

Facebook has taken the wraps off a project called Learning from Videos. It uses artificial intelligence to understand and learn audio, textual, and visual representations in public user videos on the social network.

Learning from Videos has a number of aims, such as improving Facebook AI systems related to content recommendations and policy enforcement. The project is in its early stages, but it’s already bearing fruit. Facebook says it has already harnessed the tech to enhance Instagram Reels recommendations, such as surfacing videos of people doing the same dance to the same music. The system is showing improved results in speech recognition errors as well, which could bolster auto-captioning features and make it easier to detect hate speech in videos.

[…]

The company says the project is looking at videos in hundreds of languages and from almost every country. This aspect of the project will make AI systems more accurate and allow them to “adapt to our fast moving world and recognize the nuances and visual cues across different cultures and regions.”

Facebook says that it’s keeping privacy in mind when it comes to Learning from Videos. “We’re building and maintaining a strong privacy foundation that uses automated solutions to enforce privacy at scale,” it wrote in a blog post. “By embedding this work at the infrastructure level, we can consistently apply privacy requirements across our systems and support efforts like AI. This includes implementing technical safeguards throughout the data lifecycle.”

[…]

Source: Facebook is using AI to understand videos and create new products | Engadget

Bucks County woman created ‘deepfake’ videos to harass rivals on her daughter’s cheerleading squad, DA says

A Bucks County woman anonymously sent coaches on her teen daughter’s cheerleading squad fake photos and videos that depicted the girl’s rivals naked, drinking, or smoking, all in a bid to embarrass them and force them from the team, prosecutors say.

The woman, Raffaela Spone, also sent the manipulated images to the girls, and, in anonymous messages, urged them to kill themselves, Bucks County District Attorney Matt Weintraub’s office said.

[…]

The affidavit says Spone last year created the doctored images of at least three members of the Victory Vipers, a traveling cheerleading squad based in Doylestown. There was no indication that her high school-age daughter, who was not publicly identified, knew what her mother was doing, according to court records.

Police in Hilltown Township were contacted by the parents of one of the victims in July, when that girl began receiving harassing text messages from an anonymous number, the affidavit said. The girl and her coaches at Victory Vipers were also sent photos that appeared to depict her naked, drinking, and smoking a vape. Her parents were concerned, they told police, because the videos could have caused their daughter to be removed from the team.

As police investigated, two more families came forward to say their daughters had been receiving similar messages from an unknown number, the affidavit said. The other victims were sent photos of themselves in bikinis, with accompanying text saying the subjects were “drinking at the shore.”

After analyzing the videos, detectives determined they were “deepfakes” — digitally altered but realistic looking images — created by mapping the girls’ social media photos onto other images.

[…]

Source: Bucks County woman created ‘deepfake’ videos to harass rivals on her daughter’s cheerleading squad, DA says

Facebook uses one billion Instagram photos to build massive object-recognition AI that partly trained itself

Known as SEER, short for SElf-supERvised, this massive convolutional neural network contains over a billion parameters. If you show it images of things, it will describe in words what it recognizes: a bicycle, a banana, a red-and-blue striped golfing umbrella, and so on. While its capabilities aren’t all that novel, the way it was trained differs from the techniques used to teach other types of computer vision models. Essentially, SEER partly taught itself using an approach called self-supervision.

First, it learned how to group the Instagram pictures by their similarity without any supervision, using an algorithm nicknamed SwAV. The team then fine-tuned the model by teaching it to associate a million photos taken from the ImageNet dataset with their corresponding human-written labels. This stage used a traditional supervised method: humans curated the photos and labels, and that knowledge was passed on to the neural network that had been pretrained by itself.
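To make that two-stage recipe concrete, here is a minimal, hedged PyTorch sketch: a self-supervised pretraining loop in the spirit of SwAV (two augmented views of the same image are pushed toward the same learned cluster prototypes; the real algorithm balances assignments with Sinkhorn-Knopp, omitted here), followed by supervised fine-tuning on labelled data. The tiny encoder, the cluster count, and the random stand-in data are illustrative assumptions, not Facebook's SEER/VISSL code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stage 1: self-supervised pretraining on unlabelled images (SwAV-style, simplified).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU(), nn.Linear(128, 64))
prototypes = nn.Linear(64, 10, bias=False)      # learned cluster "prototypes" (assumption: 10 clusters)
opt = torch.optim.SGD(list(encoder.parameters()) + list(prototypes.parameters()), lr=1e-2)

def augment(x):                                 # stand-in for real image augmentation
    return x + 0.1 * torch.randn_like(x)

for _ in range(10):
    x = torch.randn(32, 3, 32, 32)              # unlabelled batch (random stand-in data)
    z1, z2 = encoder(augment(x)), encoder(augment(x))
    p1, p2 = prototypes(F.normalize(z1, dim=1)), prototypes(F.normalize(z2, dim=1))
    q1, q2 = p1.softmax(dim=1).detach(), p2.softmax(dim=1).detach()   # each view's cluster assignment
    # Each view must predict the other's assignment (real SwAV balances these with Sinkhorn-Knopp).
    loss = -(q2 * p1.log_softmax(dim=1)).sum(1).mean() - (q1 * p2.log_softmax(dim=1)).sum(1).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: supervised fine-tuning on a small labelled set (e.g. ImageNet-style labels).
head = nn.Linear(64, 1000)
ft_opt = torch.optim.SGD(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
for _ in range(10):
    x, y = torch.randn(32, 3, 32, 32), torch.randint(0, 1000, (32,))
    loss = F.cross_entropy(head(encoder(x)), y)
    ft_opt.zero_grad(); loss.backward(); ft_opt.step()
```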

[…]

“SwAV uses online clustering to rapidly group images with similar visual concepts and leverage their similarities. With SwAV, we were able to improve over the previous state of the art in self-supervised learning — and did so with 6x less training time.”

SEER thus learned to associate an image of, say, a red apple with the description “red apple.” Once trained, the model’s object-recognition skills were tested using 50,000 pictures from ImageNet it had not seen before: in each test it had to produce a set of predictions of what was pictured, ranked in confidence from high to low. Its top prediction in each test was accurate 84.2 per cent of the time, we’re told.

The model doesn’t score as highly as its peers in ImageNet benchmarking. The downside of models like SEER is that they’re less accurate than their supervised cousins. Yet there are advantages to training in a semi-supervised way, Goyal, first author of the project’s paper on SEER, told The Register.

“Using self-supervision pretraining, we can learn on a more diverse set of images as we don’t require labels, data curation or any other metadata,” she said. “This means that the model can learn about more visual concepts in the world in contrast to the supervised training where we can only train on limited or small datasets that are highly curated and don’t allow us to capture visual diversity of the world.”

[…]

SEER was trained over eight days using 512 GPUs. The code for the model isn’t publicly available, although VISSL, the PyTorch library that was used to build SEER, is now up on GitHub.

[…]

Source: Facebook uses one billion Instagram photos to build massive object-recognition AI that partly trained itself • The Register

Furious AI Researcher Creates Site Shaming Non-Reproducible Machine Learning Papers

The Next Web tells the story of an AI researcher who discovered the results of a machine learning research paper couldn’t be reproduced. But then they’d heard similar stories from Reddit’s Machine Learning forum: “Easier to compile a list of reproducible ones…,” one user responded.

“Probably 50%-75% of all papers are unreproducible. It’s sad, but it’s true,” another user wrote. “Think about it, most papers are ‘optimized’ to get into a conference. More often than not the authors know that a paper they’re trying to get into a conference isn’t very good! So they don’t have to worry about reproducibility because nobody will try to reproduce them.” A few other users posted links to machine learning papers they had failed to implement and voiced their frustration with code implementation not being a requirement in ML conferences.

The next day, that researcher, who posts on Reddit as ContributionSecure14, created “Papers Without Code,” a website that aims to create a centralized list of machine learning papers that are not implementable…

Papers Without Code includes a submission page, where researchers can submit unreproducible machine learning papers along with the details of their efforts, such as how much time they spent trying to reproduce the results… The site then reaches out to the paper’s original authors for a response; if the authors do not reply in a timely fashion, the paper will be added to the list of unreproducible machine learning papers.

Source: Furious AI Researcher Creates Site Shaming Non-Reproducible Machine Learning Papers – Slashdot

Waymo simulated (not very many) real-world (if the world was limited to 100 sq miles) crashes to prove its self-driving cars can prevent deaths

In a bid to prove that its robot drivers are safer than humans, Waymo simulated dozens of real-world fatal crashes that took place in Arizona over nearly a decade. The Google spinoff discovered that replacing either vehicle in a two-car crash with its robot-guided minivans would nearly eliminate all deaths, according to data it publicized today.

The results are meant to bolster Waymo’s case that autonomous vehicles operate more safely than human-driven ones. With millions of people dying in auto crashes globally every year, AV operators are increasingly leaning on this safety case to spur regulators to pass legislation allowing more fully autonomous vehicles on the road.

But that case has been difficult to prove out, thanks to the very limited number of autonomous vehicles operating on public roads today. To provide more statistical support for its argument, Waymo has turned to counterfactuals, or “what if?” scenarios, meant to showcase how its robot vehicles would react in real-world situations.

Last year, the company published data covering 6.1 million miles of driving in 2019 and 2020, including 18 crashes and 29 near-miss collisions. For the incidents in which its safety operators took control of the vehicle to avoid a crash, Waymo’s engineers generated counterfactuals by simulating what would have happened had the driver not disengaged the vehicle’s self-driving system. The company has also made some of its data available to academic researchers.

That work in counterfactuals continues in this most recent data release. Through a third party, Waymo collected information on every fatal crash that took place in Chandler, Arizona, a suburban community outside Phoenix, between 2008 and 2017. Focusing just on the crashes that took place within its operational design domain, or the approximately 100-square-mile area in which the company permits its cars to drive, Waymo identified 72 crashes to reconstruct in simulation in order to determine how its autonomous system would respond in similar situations.

[…]

The results show that Waymo’s autonomous vehicles would have “avoided or mitigated” 88 out of 91 total simulations, said Trent Victor, director of safety research and best practices at Waymo. Moreover, for the crashes that were mitigated, Waymo’s vehicles would have reduced the likelihood of serious injury by a factor of 1.3 to 15 times, Victor said.

[…]

Source: Waymo simulated real-world crashes to prove its self-driving cars can prevent deaths – The Verge

OK, it’s a good idea, but surely they could have modelled Waymo’s response on hundreds of thousands of crash scenarios instead of this very tightly controlled tiny subset?

FortressIQ just comes out and says it: To really understand business processes, feed your staff’s screen activity to an AI

In a sign that interest in process mining is heating up, vendor FortressIQ is launching an analytics platform with a novel approach to understanding how users really work – it “videos” their on-screen activity for later analysis.

According to the San Francisco-based biz, its Process Intelligence platform will allow organisations to be better prepared for business transformation, the rollout of new applications, and digital projects by helping customers understand how people actually do their jobs, as opposed to how the business thinks they work.

The goal of process mining itself is not new. German vendor Celonis has already marked out the territory and raised approximately $290m in a funding round in November 2019, when it was valued at $2.5bn.

Celonis works by recording users’ application logs and, by applying machine learning to data across a number of applications, purports to figure out how processes work in real life. FortressIQ, which raised $30m in May 2020, uses a different approach – recording all the user’s screen activity and using AI and computer vision to try to understand all their behaviour.

Pankaj Chowdhry, CEO at FortressIQ, told The Register that the company had built a “virtual process analyst”, a software agent which taps into a user’s video card on the desktop or laptop. It streams a low-bandwidth version of what is occurring on the screen to provide the raw data for the machine-learning models.

“We built machine learning and computer vision AI that will, in essence, watch that movie, and convert it into a structured activity,” he said.

In an effort to reassure those who could be forgiven for being a little freaked out by the recording of users’ every on-screen move, the company said it anonymises the data it analyses to show which processes are better than others, rather than which user is better. Similarly, it said it guarantees the privacy of on-screen data.

Nonetheless, users should be aware of potential kickbacks when deploying the technology, said Tom Seal, senior research director with IDC.

“Businesses will be somewhat wary about provoking that negative reaction, particularly with the remote working that’s been triggered by COVID,” he said.

At the same time, remote working may be where the approach to process mining can show its worth, helping to understand how people adapt their working patterns in the current conditions.

FortressIQ may have an advantage over rivals in that it captures all data from the users’ screen, rather than the applications the organisation thinks should be involved in a process, said Seal. “It’s seeing activity that the application logs won’t pick up, so there is an advantage there.”

Of course, there is still the possibility that users get around prescribed processes using Post-It notes, whiteboards and phone apps, which nobody should put beyond them.

Celonis and FortressIQ come from very different places. The German firm has a background in engineering and manufacturing, with an early use case at Siemens led by Lars Reinkemeyer who has since joined the software vendor as veep for customer transformation. He literally wrote the book on process mining while at the University of California, Santa Barbara. FortressIQ, on the other hand, was founded by Chowdhry who worked as AI leader at global business process outsourcer Genpact before going it alone.

And it’s not just these two players. Software giant SAP has bought Signavio, a specialist in business process analysis and management, in a deal said to be worth $1.2bn to help understand users’ processes as it readies them for the cloud and application upgrades. ®

Source: FortressIQ just comes out and says it: To really understand business processes, feed your staff’s screen activity to an AI • The Register

This site posted every face from Parler’s Capitol Hill insurrection videos

Late last week, a website called Faces of the Riot appeared online, showing nothing but a vast grid of more than 6,000 images of faces, each one tagged only with a string of characters associated with the Parler video in which it appeared. The site’s creator tells WIRED that he used simple, open source machine-learning and facial recognition software to detect, extract, and deduplicate every face from the 827 videos that were posted to Parler from inside and outside the Capitol building on January 6, the day when radicalized Trump supporters stormed the building in a riot that resulted in five people’s deaths. The creator of Faces of the Riot says his goal is to allow anyone to easily sort through the faces pulled from those videos to identify someone they may know, or recognize who took part in the mob, or even to reference the collected faces against FBI wanted posters and send a tip to law enforcement if they spot someone.

[…]

Aside from the clear privacy concerns it raises, Faces of the Riot’s indiscriminate posting of faces doesn’t distinguish between lawbreakers—who trampled barriers, broke into the Capitol building, and trespassed in legislative chambers—and people who merely attended the protests outside. A recent upgrade to the site adds hyperlinks from faces to the video source, so that visitors can click on any face and see what the person was filmed doing on Parler. The Faces of the Riot creator, who says he’s a college student in the “greater DC area,” intends that added feature to help contextualize every face’s inclusion on the site and differentiate between bystanders, peaceful protesters, and violent insurrectionists.

He concedes that he and a co-creator are still working to scrub “non-rioter” faces, including those of police and press who were present. A message at the top of the site also warns against vigilante investigations, instead suggesting users report those they recognize to the FBI, with a link to an FBI tip page.

[…]

Despite its disclaimers and limitations, Faces of the Riot represents the serious privacy dangers of pervasive facial recognition technology, says Evan Greer, the campaign director for digital civil liberties nonprofit Fight for the Future. “Whether it’s used by an individual or by the government, this technology has profound implications for human rights and freedom of expression,” says Greer, whose organization has fought for a legislative ban on facial recognition technologies.

[…]

The site’s developer counters that Faces of the Riot leans not on facial recognition but facial detection. While he did use the open source machine-learning tool TensorFlow and the facial recognition software Dlib to analyze the Parler videos, he says he used that software only to detect and “cluster” faces from the 11 hours of video of the Capitol riot; Dlib allowed him to deduplicate the 200,000 images of faces extracted from video frames to around 6,000 unique faces.
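The article names TensorFlow and Dlib; as a rough illustration of the detect-then-deduplicate step it describes, here is a short sketch using the face_recognition wrapper around dlib plus dlib's chinese-whispers clustering. The frame directory, the 0.5 distance threshold, and the choice of libraries are assumptions for illustration, not the site creator's actual pipeline.

```python
import glob
import dlib
import face_recognition

encodings, crops = [], []
for path in glob.glob("frames/*.jpg"):           # frames previously extracted from the videos (assumed layout)
    image = face_recognition.load_image_file(path)
    boxes = face_recognition.face_locations(image)                     # face *detection*
    for box, enc in zip(boxes, face_recognition.face_encodings(image, boxes)):
        encodings.append(dlib.vector(enc.tolist()))                    # 128-d descriptor per detected face
        crops.append((path, box))

# Cluster similar descriptors so each person ends up in one group ("deduplication").
labels = dlib.chinese_whispers_clustering(encodings, 0.5)              # 0.5 is dlib's suggested threshold
unique_faces = {label: crops[i] for i, label in enumerate(labels)}     # one representative crop per cluster
print(f"{len(crops)} detected faces -> {len(unique_faces)} unique people")
```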

[…]

The Faces of the Riot site’s creator initially saw the data as a chance to experiment with machine-learning tools but quickly saw the potential for a more public project. “After about 10 minutes I thought, ‘This is actually a workable idea and I can do something that will help people,'” he says. Faces of the Riot is the first website he’s ever created.

[…]

But McDonald also points out that Faces of the Riot demonstrates just how accessible facial recognition technologies have become. “It shows how this tool that has been restricted only to people who have the most education, the most power, the most privilege is now in this more democratized state,” McDonald says.

The Faces of the Riot site’s creator sees it as more than an art project or demonstration

[…]

Source: This site posted every face from Parler’s Capitol Hill insurrection videos | Ars Technica

Prostate Cancer can be precisely diagnosed using a urine test with artificial intelligence

Prostate cancer is one of the most common cancers among men. Whether a patient has prostate cancer is determined primarily based on PSA, a cancer factor in the blood. However, as diagnostic accuracy is as low as 30%, a considerable number of patients undergo additional invasive biopsy and thus suffer from resultant side effects, such as bleeding and pain.

The Korea Institute of Science and Technology (KIST) announced that the collaborative research team led by Dr. Kwan Hyi Lee from the Biomaterials Research Center and Professor In Gab Jeong from Asan Medical Center developed a technique for diagnosing prostate cancer from urine within only 20 minutes with almost 100% accuracy. The research team developed this technique by introducing a smart AI analysis method to an electrical-signal-based ultrasensitive biosensor.

As a noninvasive method, a diagnostic test using urine is convenient for patients and does not require an invasive biopsy, thereby diagnosing cancer without side effects. However, as the concentration of cancer factors in urine is low, urine-based biosensors have thus far been used only for classifying risk groups rather than for precise diagnosis.

Source: Cancer can be precisely diagnosed using a urine test with artificial intelligence

AI upstart stealing facial data told to delete data and algorithms

Everalbum, a consumer photo app maker that shut down on August 31, 2020, and has since relaunched as a facial recognition provider under the name Paravision, on Monday reached a settlement with the FTC over the 2017 introduction of a feature called “Friends” in its discontinued Ever app. The watchdog agency claims the app deployed facial recognition code to organize users’ photos by default, without permission.

According to the FTC, between July 2018 and April 2019, Everalbum told people that it would not employ facial recognition on users’ content without consent. The company allegedly let users in certain regions – Illinois, Texas, Washington, and the EU – make that choice, but automatically activated the feature for those located elsewhere.

The agency further claims that Everalbum’s use of facial recognition went beyond supporting the Friends feature. The company is alleged to have combined users’ faces with facial images from other sources to create four datasets that informed its facial recognition technology, which became the basis of a face detection service for enterprise customers.

The company also is said to have told consumers using its app that it would delete their data if they deactivated their accounts, but didn’t do so until at least October 2019.

The FTC, in announcing the case and its settlement, said Everalbum/Paravision will be required to delete: photos and videos belonging to Ever app users who deactivated their accounts; all face embeddings – vector representations of facial features – from users who did not grant consent; and “any facial recognition models or algorithms developed with Ever users’ photos or videos.”

The FTC has not done this in past privacy cases with technology companies. According to FTC Commissioner Rohit Chopra, when Google and YouTube agreed to pay $170m over allegations the companies had collected data from children without parental consent, the FTC settlement “allowed Google and YouTube to profit from its conduct, even after paying a civil penalty.”

Likewise, when the FTC voted to approve a settlement with Facebook over claims it had violated its 2012 privacy settlement agreement, he said, Facebook did not have to give up any of its facial recognition technology or data.

“Commissioners have previously voted to allow data protection law violators to retain algorithms and technologies that derive much of their value from ill-gotten data,” said Chopra in a statement [PDF]. “This is an important course correction.”

[…]

Source: Privacy pilfering project punished by FTC purge penalty: AI upstart told to delete data and algorithms • The Register

‘DALL-E’ AI generates an image out of anything you describe

OpenAI has introduced DALL-E (a portmanteau of “WALL-E” and “Dali”), an AI app that can create an image out of nearly any description. For example, if you ask for “a cat made of sushi” or a “high quality illustration of a giraffe turtle chimera,” it will deliver those things, often with startlingly good quality (and sometimes not).

DALL-E can create images based on a description of its attributes, like “a pentagonal green clock,” or “a collection of glasses is sitting on a table.” In the latter example, it places both drinking and eye glasses on a table with varying degrees of success.

It can also draw and combine multiple objects and provide different points of view, including cutaways and object interiors. Unlike past text-to-image programs, it even infers details that aren’t mentioned in the description but would be required for a realistic image. For instance, with the description “a painting of a fox sitting in a field during winter,” the agent was able to determine that a shadow was needed.

“Unlike a 3D rendering engine, whose inputs must be specified unambiguously and in complete detail, DALL·E is often able to ‘fill in the blanks’ when the caption implies that the image must contain a certain detail that is not explicitly stated,” according to the OpenAI team.


OpenAI also exploits a capability called “zero-shot reasoning.” This allows an agent to generate an answer from a description and cue without any additional training, and has been used for translation and other chores. This time, the researchers applied it to the visual domain to perform both image-to-image and text-to-image translation. In one example, it was able to generate an image of a cat from a sketch, with the cue “the exact same cat on the top as the sketch on the bottom.”

The system has numerous other talents, like understanding how telephones and other objects change over time, grasping geographic facts and landmarks and creating images in photographic, illustration and even clip-art styles.

For now, DALL-E is pretty limited. Sometimes, it delivers what you expect from the description and other times you just get some weird or crappy images. As with other AI systems, even the researchers themselves don’t understand exactly how it produces certain images due to the black box nature of the system.

Still, if developed further, DALL-E has vast potential to disrupt fields like stock photography and illustration, with all the good and bad that entails. “In the future, we plan to analyze how models like DALL·E relate to societal issues like economic impact on certain work processes and professions, the potential for bias in the model outputs, and the longer term ethical challenges implied by this technology,” the team wrote. To play with DALL-E yourself, check out OpenAI’s blog.

Source: ‘DALL-E’ AI generates an image out of anything you describe | Engadget

Artificial intelligence classifies supernova explosions with unprecedented accuracy

Artificial intelligence is classifying real supernova explosions without the traditional use of spectra, thanks to a team of astronomers at the Center for Astrophysics | Harvard & Smithsonian. The complete data sets and resulting classifications are publicly available for open use.

By training a machine learning algorithm to categorize supernovae based on their visible characteristics, the astronomers were able to classify real data from the Pan-STARRS1 Medium Deep Survey for 2,315 supernovae with an accuracy rate of 82-percent without the use of spectra.

The astronomers developed a software program that classifies different types of supernovae based on their light curves, or how their brightness changes over time. “We have approximately 2,500 supernovae with light curves from the Pan-STARRS1 Medium Deep Survey, and of those, 500 supernovae with spectra that can be used for classification,” said Griffin Hosseinzadeh, a postdoctoral researcher at the CfA and lead author on the first of two papers published in The Astrophysical Journal. “We trained the classifier using those 500 supernovae to classify the remaining supernovae where we were not able to observe the spectrum.”

Edo Berger, an astronomer at the CfA, explained that by asking the artificial intelligence to answer specific questions, the results become increasingly more accurate. “The machine learning looks for a correlation with the original 500 spectroscopic labels. We ask it to compare the supernovae in different categories: color, rate of evolution, or brightness. By feeding it real existing knowledge, it leads to the highest accuracy, between 80- and 90-percent.”
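As a hedged sketch of the general recipe described above: extract simple features from each light curve, train a classifier on the roughly 500 spectroscopically labelled events, then label the rest. The feature choices, class count, and random-forest model here are illustrative assumptions, not the CfA team's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n_total, n_labelled = 2500, 500

# Stand-in features per supernova: e.g. peak brightness, rise time, decline rate, colour.
features = rng.normal(size=(n_total, 4))
labels = rng.integers(0, 5, size=n_labelled)      # stand-in spectroscopic classes for the labelled subset

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(features[:n_labelled], labels)            # train on the spectroscopically labelled supernovae

predicted = clf.predict(features[n_labelled:])    # photometric classifications for the remaining ~2,000
confidence = clf.predict_proba(features[n_labelled:]).max(axis=1)
print(predicted[:10], confidence[:10])
```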

Although this is not the first machine learning project for supernovae classification, it is the first time that astronomers have had access to a real data set large enough to train an artificial intelligence-based supernovae classifier, making it possible to create machine learning algorithms without the use of simulations.

[…]

The project has implications not only for archival data, but also for data that will be collected by future telescopes. The Vera C. Rubin Observatory is expected to go online in 2023, and will lead to the discovery of millions of new supernovae each year. This presents both opportunities and challenges for astrophysicists, where limited telescope time leads to limited spectral classifications.

“When the Rubin Observatory goes online it will increase our discovery rate of supernovae by 100-fold, but our spectroscopic resources will not increase,” said Ashley Villar, a Simons Junior Fellow at Columbia University and lead author on the second of the two papers, adding that while roughly 10,000 supernovae are currently discovered each year, scientists only take spectra of about 10-percent of those objects. “If this holds true, it means that only 0.1-percent of supernovae discovered by the Rubin Observatory each year will get a spectroscopic label. The remaining 99.9-percent of data will be unusable without methods like ours.”

Unlike past efforts, where data sets and classifications have been available to only a limited number of astronomers, the classifications from the new algorithm will be made publicly available. The astronomers have created easy-to-use, accessible software, and also released all of the data from Pan-STARRS1 Medium Deep Survey along with the new classifications for use in other projects. Hosseinzadeh said, “It was really important to us that these projects be useful for the entire supernova community, not just for our group. There are so many projects that can be done with these data that we could never do them all ourselves.” Berger added, “These projects are open data for open science.”

Source: Artificial intelligence classifies supernova explosions with unprecedented accuracy

Air Force Flies AI Copilot on U-2 Spy Plane in a first. Very Star Wars referenced

For Star Wars fans, an X-Wing fighter isn’t complete without R2-D2. Whether you need to fire up converters, increase power, or fix a broken stabilizer, that trusty droid, full of lively beeps and squeaks, is the ultimate copilot.

Teaming artificial intelligence (AI) with pilots is no longer just a matter for science fiction or blockbuster movies. On Tuesday, December 15, the Air Force successfully flew an AI copilot on a U-2 spy plane in California: the first time AI has controlled a U.S. military system.

[…]

With call sign ARTUµ, we trained µZero—a world-leading computer program that dominates chess, Go, and even video games without prior knowledge of their rules—to operate a U-2 spy plane. Though lacking those lively beeps and squeaks, ARTUµ surpassed its motion picture namesake in one distinctive feature: it was the mission commander, the final decision authority on the human-machine team

[…]

Our demo flew a reconnaissance mission during a simulated missile strike at Beale Air Force Base on Tuesday. ARTUµ searched for enemy launchers while our pilot searched for threatening aircraft, both sharing the U-2’s radar. With no pilot override, ARTUµ made final calls on devoting the radar to missile hunting versus self-protection. Luke Skywalker certainly never took such orders from his X-Wing sidekick!

[…]

to trust AI, software design is key. Like a breaker box for code, the U-2 gave ARTUµ complete radar control while “switching off” access to other subsystems.

[…]

Like a digital Yoda, our small-but-mighty U-2 FedLab trained µZero’s gaming algorithms to operate a radar—reconstructing them to learn the good side of reconnaissance (enemies found) from the dark side (U-2s lost)—all while interacting with a pilot. Running over a million training simulations at their “digital Dagobah,” they had ARTUµ mission-ready in just over a month.

[…]

That autonomous future will happen eventually. But today’s AI can be easily fooled by adversary tactics, precisely what future warfare will throw at it.

U.S. Air Force Maj. “Vudu”, U-2 Dragon Lady pilot for the 9th Reconnaissance Wing, prepares to taxi after returning from a training sortie at Beale Air Force Base, California, Dec. 15, 2020. (Photo: A1C Luis A. Ruiz-Vazquez)

Like board or video games, human pilots could only try outperforming DARPA’s AI while obeying the rules of the dogfighting simulation, rules the AI had algorithmically learned and mastered. The loss is a wake-up call for new digital trickery to outfox machine learning principles themselves. Even R2-D2 confused computer terminals with harmful power sockets!

[…]

Source: Air Force Flies AI Copilot on U-2 Spy Plane: Exclusive Details

Alphabet’s internet Loon balloon kept on station in the sky using AI that beat human-developed control code

Loon, known for its giant billowing broadband-beaming balloons, says it has figured out how to use machine-learning algorithms to keep its lofty vehicles hovering in place autonomously in the stratosphere.

The 15-metre-wide balloons relay internet connections between people’s homes and ground stations that could be thousands of kilometres apart. To form a steady network that can route data over long distances reliably, the balloons have to stay in place, and do so all by themselves.

Loon’s AI-based solution to this station-keeping problem has been described in a research paper published in Nature on Wednesday, and basically it works by adjusting the balloons’ altitude to catch the right wind currents to ensure they are where they need to be.

The machine-learning software, we’re told, managed to successfully keep the Loon gas bags bobbing up and down in the skies above the Pacific Ocean in an experiment that lasted 39 days. Previously, the Loon team relied on a non-AI controller built around a handcrafted algorithm known as StationSeeker, but decided to experiment to see whether machine learning could offer a more efficient method.

“As far as we know, this is the world’s first deployment of reinforcement learning in a production aerospace system,” said Loon CTO Salvatore Candido.

The AI is built out of a feed-forward neural network that learns to decide whether a balloon should fly up or go down by taking into account variables, such as wind speed, solar elevation, and how much power the equipment has left. The decision is then fed to a controller system to move the balloon in place.
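As a minimal illustration of that decision step (not Loon's production controller), the sketch below maps a handful of balloon state features to one of three altitude commands with a small feed-forward network; a separate controller would then execute the chosen command. The feature set, network size, and action names are assumptions.

```python
import torch
import torch.nn as nn

ACTIONS = ["ascend", "descend", "stay"]

policy = nn.Sequential(             # stand-in for the trained feed-forward policy network
    nn.Linear(3, 32), nn.ReLU(),
    nn.Linear(32, len(ACTIONS)),
)

# Illustrative state: wind speed (m/s), solar elevation (degrees), remaining power (0-1).
state = torch.tensor([[12.0, 35.0, 0.8]])
with torch.no_grad():
    command = ACTIONS[policy(state).argmax(dim=1).item()]
print(command)   # the controller system would translate this into an actual altitude change
```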

By training the model in simulation, the neural network steadily improved over time using reinforcement learning as it repeated the same task over and over again under different scenarios. Loon tested the performance of StationSeeker against the reinforcement learning model in simulation.

“A trial consists of two simulated days of station-keeping at a fixed location, during which controllers receive inputs and emit commands at 3-min intervals,” according to the paper. The performance was then judged by how long the balloons could stay within a 50km radius of a hypothetical ground station.

The AI algorithm scored 55.1 per cent efficiency, compared to 40.5 per cent for StationSeeker. The researchers reckon that the autonomous algorithm is near optimum performance, considering that the best theoretical models reach somewhere between 56.8 and 68.7 per cent.

When Loon and Google ran the controller in the real experiment, which involved a balloon hovering above the Pacific Ocean, they found: “Overall, the [reinforcement learning] system kept balloons in range of the desired location more often while using less power… Using less power to steer the balloon means more power is available to connect people to the internet, information, and other people.”

[…]

Source: Alphabet’s internet Loon balloon kept on station in the sky using AI that beat human-developed control code • The Register

DeepMind’s A.I. can now predict protein shapes from their DNA sequences | Fortune

Researchers have made a major breakthrough using artificial intelligence that could revolutionize the hunt for new medicines.

The scientists have created A.I. software that uses a protein’s DNA sequence to predict its three-dimensional structure to within an atom’s width of accuracy.

The achievement, which solves a 50-year-old challenge in molecular biology, was accomplished by a team from DeepMind, the London-based artificial intelligence company that is part of Google parent Alphabet.

[…]

Across more than 100 proteins, DeepMind’s A.I. software, which it called AlphaFold 2, was able to predict the structure to within about an atom’s width of accuracy in two-thirds of cases and was highly accurate in most of the remaining one-third of cases, according to John Moult, a molecular biologist at the University of Maryland who is director of the competition, called the Critical Assessment of Structure Prediction, or CASP. It was far better than any other method in the competition, he said.

[…]

DeepMind had not yet determined how it would provide academic researchers with access to the protein structure prediction software or whether it would seek commercial collaborations with pharmaceutical and biotechnology firms. The company said it would announce “further details on how we’re going to be able to give access to the system in a scalable way” sometime next year.

“This computational work represents a stunning advance on the protein-folding problem,” Venki Ramakrishnan, a Nobel Prize–winning structural biologist who is also the outgoing president of the Royal Society, Britain’s most prestigious scientific body, said of AlphaFold 2.

Janet Thornton, an expert in protein structure and former director of the European Molecular Biology Laboratory’s European Bioinformatics Institute, said that DeepMind’s breakthrough opened up the way to mapping the entire “human proteome”—the set of all proteins found within the human body. Currently, only about a quarter of human proteins have been used as targets for medicines, she said. Now, many more proteins could be targeted, creating a huge opportunity to invent new medicines.

[…]

As part of CASP’s efforts to verify the capabilities of DeepMind’s system, Lupas used the predictions from AlphaFold 2 to see if it could solve the final portion of a protein’s structure that he had been unable to complete using X-ray crystallography for more than a decade. With the predictions generated by AlphaFold 2, Lupas said he was able to determine the shape of the final protein segment in just half an hour.

AlphaFold 2 has also already been used to accurately predict the structure of a protein called ORF3a that is found in SARS-CoV-2, the virus that causes COVID-19, which scientists might be able to use as a target for future treatments.

Lupas said he thought the A.I. software would “change the game entirely” for those who work on proteins. Currently, DNA sequences are known for about 200 million proteins, and tens of millions more are being discovered every year. But 3D structures have been mapped for less than 200,000 of them.

AlphaFold 2 was only trained to predict the structure of single proteins. But in nature, proteins are often present in complex arrangements with other proteins. Jumper said the next step was to develop an A.I. system that could predict complicated dynamics between proteins—such as how two proteins will bind to one another or the way that proteins in close proximity morph one another’s shapes.

[…]

Source: DeepMind’s A.I. can now predict protein shapes from their DNA sequences | Fortune

Split-Second ‘Phantom’ Images Can Fool Tesla’s Autopilot

one group of researchers has been focused on what autonomous driving systems might see that a human driver doesn’t—including “phantom” objects and signs that aren’t really there, which could wreak havoc on the road.

Researchers at Israel’s Ben Gurion University of the Negev have spent the last two years experimenting with those “phantom” images to trick semi-autonomous driving systems. They previously revealed that they could use split-second light projections on roads to successfully trick Tesla’s driver-assistance systems into automatically stopping without warning when its camera sees spoofed images of road signs or pedestrians. In new research, they’ve found they can pull off the same trick with just a few frames of a road sign injected on a billboard’s video

[…]

“The driver won’t even notice at all. So somebody’s car will just react, and they won’t understand why.”

In their first round of research, published earlier this year, the team projected images of human figures onto a road, as well as road signs onto trees and other surfaces. They found that at night, when the projections were visible, they could fool both a Tesla Model X running the HW2.5 Autopilot driver-assistance system—the most recent version available at the time, now the second-most-recent—and a Mobileye 630 device. They managed to make a Tesla stop for a phantom pedestrian that appeared for a fraction of a second, and tricked the Mobileye device into communicating the incorrect speed limit to the driver with a projected road sign.

In this latest set of experiments, the researchers injected frames of a phantom stop sign on digital billboards, simulating what they describe as a scenario in which someone hacked into a roadside billboard to alter its video. They also upgraded to Tesla’s most recent version of Autopilot known as HW3.

[…]

an image that appeared for 0.42 seconds would reliably trick the Tesla, while one that appeared for just an eighth of a second would fool the Mobileye device. They also experimented with finding spots in a video frame that would attract the least notice from a human eye, going so far as to develop their own algorithm for identifying key blocks of pixels in an image so that a half-second phantom road sign could be slipped into the “uninteresting” portions.

[…]

Source: Split-Second ‘Phantom’ Images Can Fool Tesla’s Autopilot | WIRED

Google’s SoundFilter AI separates any sound or voice from mixed-audio recordings

Researchers at Google claim to have developed a machine learning model that can separate a sound source from noisy, single-channel audio based on only a short sample of the target source. In a paper, they say their SoundFilter system can be tuned to filter arbitrary sound sources, even those it hasn’t seen during training.

The researchers believe a noise-eliminating system like SoundFilter could be used to create a range of useful technologies. For instance, Google drew on audio from thousands of its own meetings and YouTube videos to train the noise-canceling algorithm in Google Meet. Meanwhile, a team of Carnegie Mellon researchers created a “sound-action-vision” corpus to anticipate where objects will move when subjected to physical force.

SoundFilter treats the task of sound separation as a one-shot learning problem. The model receives as input the audio mixture to be filtered and a single short example of the kind of sound to be filtered out. Once trained, SoundFilter is expected to extract this kind of sound from the mixture if present.
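As a sketch of the interface described above (not Google's SoundFilter architecture), the toy module below takes a noisy mixture plus a short example of the target sound and returns an estimate of just that sound; the tiny convolutional layers and dimensions are placeholders standing in for a real conditioning encoder and separator network.

```python
import torch
import torch.nn as nn

class OneShotFilter(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.cond_encoder = nn.Conv1d(1, channels, kernel_size=16, stride=8)   # embeds the short example
        self.separator = nn.Conv1d(1 + channels, 1, kernel_size=9, padding=4)  # filters the mixture

    def forward(self, mixture, example):
        # Summarise the conditioning example into one vector, broadcast it over time,
        # and let the separator use it to decide what to keep in the mixture.
        cond = self.cond_encoder(example).mean(dim=-1, keepdim=True)            # (batch, channels, 1)
        cond = cond.expand(-1, -1, mixture.shape[-1])
        return self.separator(torch.cat([mixture, cond], dim=1))                # filtered waveform

model = OneShotFilter()
mixture = torch.randn(1, 1, 16000)     # one second of noisy audio at 16 kHz (stand-in data)
example = torch.randn(1, 1, 8000)      # a short clip of the kind of sound we want to keep
print(model(mixture, example).shape)   # torch.Size([1, 1, 16000])
```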

[…]

Source: Google’s SoundFilter AI separates any sound or voice from mixed-audio recordings | VentureBeat

Carbon footprint for ‘training GPT-3’ AI same as driving to the moon and back

Training OpenAI’s giant GPT-3 text-generating model is akin to driving a car to the Moon and back, computer scientists reckon.

More specifically, they estimated that teaching the neural super-network in a Microsoft data center using Nvidia GPUs required roughly 190,000 kWh. Using the average carbon intensity of America, that would have produced 85,000 kg of CO2 equivalents, the same amount produced by a new car in Europe driving 700,000 km, or 435,000 miles, which is about twice the distance between Earth and the Moon, some 480,000 miles. Phew.

This assumes the data-center used to train GPT-3 was fully reliant on fossil fuels, which may not be true. The point, from what we can tell, is not that GPT-3 and its Azure cloud in particular have this exact scale of carbon footprint, it’s to draw attention to the large amount of energy required to train state-of-the-art neural networks.
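A quick back-of-the-envelope check shows how the headline numbers hang together. The grid-intensity and per-kilometre car-emission factors below are assumptions chosen to be consistent with the article's figures (roughly the US-average grid and the EU new-car fleet average), not values the researchers state.

```python
energy_kwh = 190_000
us_grid_kg_per_kwh = 0.447                   # assumed average US carbon intensity (kg CO2e per kWh)
co2_kg = energy_kwh * us_grid_kg_per_kwh
print(round(co2_kg))                         # ~85,000 kg CO2e

eu_car_kg_per_km = 0.121                     # assumed new-car fleet average (~121 g CO2 per km)
print(round(co2_kg / eu_car_kg_per_km))      # ~700,000 km of driving

km_to_miles = 0.621371
print(round(700_000 * km_to_miles))          # ~435,000 miles, roughly Earth to the Moon and back
```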

The eggheads who produced this guesstimate are based at the University of Copenhagen in Denmark, and are also behind an open-source tool called Carbontracker, which aims to predict the carbon footprint of AI algorithms. Lasse Wolff Anthony, one of Carbontracker’s creators and co-author of a study of the subject of AI power usage, believes this drain on resources is something the community should start thinking about now, as the energy costs of AI have risen 300,000-fold between 2012 and 2018, it is claimed.

[…]

Source: AI me to the Moon… Carbon footprint for ‘training GPT-3’ same as driving to our natural satellite and back • The Register

AI has cracked a key mathematical puzzle for understanding our world – Partial Differential Equations

Unless you’re a physicist or an engineer, there really isn’t much reason for you to know about partial differential equations. I know. After years of poring over them in undergrad while studying mechanical engineering, I’ve never used them since in the real world.

But partial differential equations, or PDEs, are also kind of magical. They’re a category of math equations that are really good at describing change over space and time, and thus very handy for describing the physical phenomena in our universe. They can be used to model everything from planetary orbits to plate tectonics to the air turbulence that disturbs a flight, which in turn allows us to do practical things like predict seismic activity and design safe planes.

The catch is PDEs are notoriously hard to solve. And here, the meaning of “solve” is perhaps best illustrated by an example. Say you are trying to simulate air turbulence to test a new plane design. There is a known PDE called Navier-Stokes that is used to describe the motion of any fluid. “Solving” Navier-Stokes allows you to take a snapshot of the air’s motion (a.k.a. wind conditions) at any point in time and model how it will continue to move, or how it was moving before.

These calculations are highly complex and computationally intensive, which is why disciplines that use a lot of PDEs often rely on supercomputers to do the math. It’s also why the AI field has taken a special interest in these equations. If we could use deep learning to speed up the process of solving them, it could do a whole lot of good for scientific inquiry and engineering.

Now researchers at Caltech have introduced a new deep-learning technique for solving PDEs that is dramatically more accurate than deep-learning methods developed previously. It’s also much more generalizable, capable of solving entire families of PDEs—such as the Navier-Stokes equation for any type of fluid—without needing retraining. Finally, it is 1,000 times faster than traditional mathematical formulas, which would ease our reliance on supercomputers and increase our computational capacity to model even bigger problems. That’s right. Bring it on.

Hammer time

Before we dive into how the researchers did this, let’s first appreciate the results. In the gif below, you can see an impressive demonstration. The first column shows two snapshots of a fluid’s motion; the second shows how the fluid continued to move in real life; and the third shows how the neural network predicted the fluid would move. It basically looks identical to the second.

The paper has gotten a lot of buzz on Twitter, and even a shout-out from rapper MC Hammer. Yes, really.

[…]

Neural networks are usually trained to approximate functions between inputs and outputs defined in Euclidean space, your classic graph with x, y, and z axes. But this time, the researchers decided to define the inputs and outputs in Fourier space, which is a special type of graph for plotting wave frequencies. The intuition that they drew upon from work in other fields is that something like the motion of air can actually be described as a combination of wave frequencies, says Anima Anandkumar, a Caltech professor who oversaw the research alongside her colleagues, professors Andrew Stuart and Kaushik Bhattacharya. The general direction of the wind at a macro level is like a low frequency with very long, lethargic waves, while the little eddies that form at the micro level are like high frequencies with very short and rapid ones.

Why does this matter? Because it’s far easier to approximate a Fourier function in Fourier space than to wrangle with PDEs in Euclidean space, which greatly simplifies the neural network’s job. Cue major accuracy and efficiency gains: in addition to its huge speed advantage over traditional methods, their technique achieves a 30% lower error rate when solving Navier-Stokes than previous deep-learning methods.
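To make the Fourier-space idea concrete, here is a minimal NumPy sketch of a single "Fourier layer": transform the signal into Fourier space, apply learned weights to a truncated set of low-frequency modes, and transform back. The 1-D signal, mode count, random stand-in weights, and the pointwise nonlinearity are illustrative assumptions, not the Caltech team's implementation.

```python
import numpy as np

def fourier_layer(u, weights, n_modes):
    """One illustrative 'Fourier layer': FFT -> weight the lowest n_modes frequencies -> inverse FFT."""
    u_hat = np.fft.rfft(u)                          # to Fourier space
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]   # learned complex weights on the low modes only
    return np.fft.irfft(out_hat, n=u.shape[0])      # back to physical space

rng = np.random.default_rng(0)
u = rng.standard_normal(256)                                  # e.g. a sampled 1-D velocity field
w = rng.standard_normal(16) + 1j * rng.standard_normal(16)    # stand-in for trained weights
v = np.maximum(fourier_layer(u, w, n_modes=16), 0.0)          # nonlinearity between stacked layers
print(v.shape)                                                # (256,)
```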

[…]

Source: AI has cracked a key mathematical puzzle for understanding our world | MIT Technology Review

Announcing: Graph-Native Machine Learning in Neo4j!

We’re delighted to announce you can now take advantage of graph-native machine learning (ML) inside of Neo4j! We’ve just released a preview of Neo4j’s Graph Data Science™ Library version 1.4, which includes graph embeddings and an ML model catalog.

Together, these enable you to create representations of your graph and make graph predictions – all within Neo4j.

[…]

Graph Embeddings

The graph embedding algorithms are the star of the show in this release.

These algorithms are used to transform the topology and features of your graph into fixed-length vectors (or embeddings) that uniquely represent each node.

Graph embeddings are powerful, because they preserve the key features of the graph while reducing dimensionality in a way that can be decoded. This means you can capture the complexity and structure of your graph and transform it for use in various ML predictions.

 

Graph embeddings capture the nuances of graphs in a way that can be used to make predictions or lower dimensional visualizations.

In this release, we are offering three embedding options that learn the graph topology and, in some cases, node properties to calculate more accurate representations:

Node2Vec:

    • This is probably the most well-known graph embedding algorithm. It uses random walks to sample a graph, and a neural network to learn the best representation of each node.

FastRP:

    • A more recent graph embedding algorithm that uses linear algebra to project a graph into lower dimensional space. In GDS 1.4, we’ve extended the original implementation to support node features and directionality as well.
    • FastRP is up to 75,000 times faster than Node2Vec, while providing equivalent accuracy!

GraphSAGE:

    • This is an embedding technique using inductive representation learning on graphs, via graph convolutional neural networks, where the graph is sampled to learn a function that can predict embeddings (rather than learning embeddings directly). This means you can learn on a subset of your graph and use that representative function for new data and make continuous predictions as your graph updates. (Wow!)
    • If you’d like a deeper dive into how it works, check out the GraphSAGE session from the NODES event.

 

 

Graph embeddings available in the Neo4j Graph Data Science Library v1.4 . The caution marks indicate that, while directions are supported, our internal benchmarks don’t show performance improvements.

Graph ML Model Catalog

GraphSAGE trains a model to predict node embeddings for unseen parts of the graph, or new data as mentioned above.

To really capitalize on what GraphSAGE can do, we needed to add a catalog to be able to store and reference these predictive models. This model catalog lives in the Neo4j analytics workspace and contains versioning information (what data was this trained on?), time stamps and, of course, the model names.

When you want to use a model, you can provide the name of the model to GraphSAGE, along with the named graph you want to apply it to.
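A hedged sketch of that workflow, written against the official Neo4j Python driver, is shown below: train a GraphSAGE model on a named in-memory graph, register it in the model catalog under a name, then reference that name to stream embeddings. The gds.beta.graphSage.* procedure names, parameters, graph name, and feature properties are assumptions based on the beta-tier naming of that era; check the GDS 1.4 documentation for the exact syntax.

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Train an embedding model on the named graph and store it in the model catalog.
    session.run(
        "CALL gds.beta.graphSage.train('my-graph', "
        "{modelName: 'my-sage-model', featureProperties: ['age', 'score'], embeddingDimension: 64})"
    )
    # Later, refer to the stored model by name to produce embeddings, including for new data.
    result = session.run(
        "CALL gds.beta.graphSage.stream('my-graph', {modelName: 'my-sage-model'}) "
        "YIELD nodeId, embedding RETURN nodeId, embedding LIMIT 5"
    )
    for record in result:
        print(record["nodeId"], record["embedding"][:3])

driver.close()
```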

 

GraphSAGE ML Models are stored in the Neo4j analytics workspace.

[…]

Source: Announcing: Graph-Native Machine Learning in Neo4j!

Google’s breast cancer-predicting AI research is useless without transparency, critics say

Back in January, Google Health, the branch of Google focused on health-related research, clinical tools, and partnerships for health care services, released an AI model trained on over 90,000 mammogram X-rays that the company said achieved better results than human radiologists. Google claimed that the algorithm could recognize more false negatives — the kind of images that look normal but contain breast cancer — than previous work, but some clinicians, data scientists, and engineers take issue with that statement. In a rebuttal published today in the journal Nature, over 19 coauthors affiliated with McGill University, the City University of New York (CUNY), Harvard University, and Stanford University said that the lack of detailed methods and code in Google’s research “undermines its scientific value.”

Science in general has a reproducibility problem — a 2016 poll of 1,500 scientists reported that 70% of them had tried but failed to reproduce at least one other scientist’s experiment — but it’s particularly acute in the AI field. At ICML 2019, 30% of authors failed to submit their code with their papers by the start of the conference. Studies often provide benchmark results in lieu of source code, which becomes problematic when the thoroughness of the benchmarks comes into question. One recent report found that 60% to 70% of answers given by natural language processing models were embedded somewhere in the benchmark training sets, indicating that the models were often simply memorizing answers. Another study — a meta-analysis of over 3,000 AI papers — found that metrics used to benchmark AI and machine learning models tended to be inconsistent, irregularly tracked, and not particularly informative.

In their rebuttal, the coauthors of the Nature commentary point out that Google’s breast cancer model research lacks details, including a description of model development as well as the data processing and training pipelines used. Google omitted the definition of several hyperparameters for the model’s architecture (the variables used by the model to make diagnostic predictions), and it also didn’t disclose the variables used to augment the dataset on which the model was trained. This could “significantly” affect performance, the Nature coauthors claim; for instance, it’s possible that one of the data augmentations Google used resulted in multiple instances of the same patient, biasing the final results.

[…]

Source: Google’s breast cancer-predicting AI research is useless without transparency, critics say | VentureBeat

NVIDIA Uses AI to Slash Bandwidth on Video Calls

NVIDIA Research has invented a way to use AI to dramatically reduce video call bandwidth while simultaneously improving quality.

What the researchers have achieved has remarkable results: by replacing the traditional h.264 video codec with a neural network, they have managed to reduce the required bandwidth for a video call by an order of magnitude. In one example, the required data rate fell from 97.28 KB/frame to a measly 0.1165 KB/frame – a reduction to 0.1% of required bandwidth.

The mechanism behind AI-assisted video conferencing is breathtakingly simple. The technology works by replacing traditional full video frames with neural data. Typically, video calls work by sending h.264 encoded frames to the recipient, and those frames are extremely data-heavy. With AI-assisted video calls, first, the sender sends a reference image of the caller. Then, instead of sending a stream of pixel-packed images, it sends specific reference points on the image around the eyes, nose, and mouth.

A generative adversarial network (or GAN, a type of neural network) on the receiver side then uses the reference image combined with the keypoints to reconstruct subsequent images. Because the keypoints are so much smaller than full pixel images, much less data is sent and therefore an internet connection can be much slower but still provide a clear and functional video chat.
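The data-flow below is a toy sketch of that scheme, not NVIDIA's Maxine implementation: the sender ships one heavy reference frame, then only a small array of keypoints per frame, and the receiver's generator (stubbed out here) reconstructs each frame from the two. The frame size, keypoint count, and helper names are hypothetical, chosen only to show why the per-frame payload shrinks so dramatically.

```python
import numpy as np

FRAME_SHAPE = (720, 1280, 3)   # a 720p RGB frame (assumption)
N_KEYPOINTS = 68               # a typical facial-landmark count (assumption)

def sender(frame, reference_sent):
    """Send the full reference frame once, then only keypoints for every later frame."""
    payload = {}
    if not reference_sent:
        payload["reference"] = frame                        # heavy, one-off
    payload["keypoints"] = np.random.rand(N_KEYPOINTS, 2).astype(np.float32)  # stand-in detector output
    return payload

def receiver(payload, reference, generator):
    """A GAN generator (stubbed here) warps the reference image to match the keypoints."""
    ref = payload.get("reference", reference)
    return generator(ref, payload["keypoints"]), ref

generator = lambda ref, kp: ref                  # stub: a real system runs a trained GAN here
full_frame = np.zeros(FRAME_SHAPE, dtype=np.uint8)

payload = sender(full_frame, reference_sent=False)           # first frame: reference + keypoints
frame_out, reference = receiver(payload, None, generator)
payload = sender(full_frame, reference_sent=True)            # later frames: keypoints only
frame_out, reference = receiver(payload, reference, generator)

per_frame_pixels = full_frame.nbytes
per_frame_keypoints = payload["keypoints"].nbytes
print(per_frame_pixels, per_frame_keypoints)                 # keypoints are orders of magnitude smaller
```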

In the researchers’ initial example, they show that a fast internet connection results in pretty much the same quality of stream using both the traditional method and the new neural network method. But what’s most impressive is their subsequent examples, where internet speeds show a considerable degradation of quality using the traditional method, while the neural network is able to produce extremely clear and artifact-free video feeds.

The neural network can work even when the subject is wearing a mask, glasses, headphones, or a hat.

With this technology, more people can enjoy a greater number of features all while using monumentally less data.

But the technology use cases don’t stop there: because the neural network is using reference data instead of the full stream, the technology will allow someone to even change the camera angle to appear like they are looking directly at the screen even if they are not. Called “Free View,” this would allow someone who has a separate camera off-screen to seemingly keep eye contact with those on a video call.

NVIDIA can also use this same method for character animations. Using different keypoints from the original feed, they can add clothing, hair, or even animate video game characters.

This kind of neural network will have huge implications for the modern workforce: it will not only relieve strain on networks but also give users more freedom when working remotely. However, because of the way the technology works, there will almost certainly be questions about how it should be deployed, and it may make “deepfakes” more believable and harder to detect.

(Via NVIDIA via DP Review)

Source: NVIDIA Uses AI to Slash Bandwidth on Video Calls

Nvidia unveils $59 Nvidia Jetson Nano 2GB mini AI board

New Jetson Nano mini AI computer

The Jetson Nano 2GB Developer Kit, announced this week, is a single-board computer – like the Raspberry Pi – though geared towards machine learning rather than general computing. If you like the idea of simple AI projects running on a dedicated board, such as building your own mini self-driving car or an object-recognition system for your home, this one might be for you.

It runs Nvidia CUDA code and provides a Linux-based environment. At only $59 a pop, it’s pretty cheap and a nifty bit of hardware if you’re just dipping your toes into deep learning. As its name suggests, it has 2GB of RAM, plus four Arm Cortex-A57 CPU cores clocked at 1.43GHz and a 128-core Nvidia Maxwell GPU. There are other bits and pieces like gigabit Ethernet, HDMI output, a microSD slot for storage, USB interfaces, GPIO and UART pins, Wi-Fi depending on your region, and more.
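For a sense of what dipping your toes in might look like on the board, here is a minimal sketch assuming a Jetson-compatible PyTorch and torchvision build is installed (NVIDIA publishes wheels for JetPack); the model choice and dummy input are illustrative only.

# Minimal check that the Maxwell GPU is visible, plus a tiny classification pass.
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"   # the Jetson GPU shows up as CUDA
print(f"running on: {device}")

# A small classifier that fits comfortably in 2 GB of shared RAM.
model = models.mobilenet_v2(pretrained=True).to(device).eval()

dummy = torch.rand(1, 3, 224, 224, device=device)         # stand-in for a camera frame
with torch.no_grad():
    scores = model(dummy)
print("top class index:", int(scores.argmax()))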

“While today’s students and engineers are programming computers, in the near future they’ll be interacting with, and imparting AI to, robots,” said Deepu Talla, vice president and general manager of Edge Computing at Nvidia. “The new Jetson Nano is the ultimate starter AI computer that allows hands-on learning and experimentation at an incredibly affordable price.”

Source: Nvidia unveils $59 Nvidia Jetson Nano 2GB mini AI board, machine learning that slashes vid-chat data by 90%, and new super for Britain • The Register

FakeCatcher Deepfake Tool looks for a heartbeat

In the endlessly escalating war between those striving to create flawless deepfake videos and those developing automated tools that make them easy to spot, the latter camp has found a very clever way to expose videos that have been digitally modified by looking for literal signs of life: a person’s heartbeat.

If you’ve ever had a doctor attach a pulse oximeter to the tip of your finger, then you’ve already experienced a technique known as photoplethysmography, in which subtle color shifts in your skin, as blood is pumped through in waves, allow your pulse to be measured. It’s the same technique that the Apple Watch and wearable fitness trackers use to measure your heartbeat during exercise, and it’s not limited to your fingertips and wrists.

Though not apparent to the naked eye, your face exhibits the same phenomenon, subtly shifting in color as your heart pumps blood through the arteries and veins under your skin, and even a basic webcam can spot the effect and measure your pulse. The technique has enabled contactless monitors for infants, requiring only a non-obtrusive camera pointed at them while they sleep, but it is now being leveraged to root out fake news.

Researchers from Binghamton University in Binghamton, New York, worked with Intel to develop a tool called FakeCatcher, and their findings were recently published in a paper titled “FakeCatcher: Detection of Synthetic Portrait Videos using Biological Signals.” Deepfakes are typically created by matching individual frames of a video against a library of headshots, often containing thousands of images of a particular person, and then subtly adjusting the swapped-in face to match the existing one. Invisible to the naked eye, those source images still carry the telltale biological signs of a pulse, but the machine-learning tools used to create deepfakes don’t account for the fact that, when the final video is played back, the moving face should still exhibit a consistent, measurable pulse. The haphazard, frame-by-frame way a deepfake video is assembled results in an unstable pulse measurement when photoplethysmography detection techniques are applied, making the fakes easier to spot.
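As a rough illustration of the idea (not the paper’s actual method, which uses richer biological signals and a trained classifier), the hypothetical sketch below averages the green channel over a face crop frame by frame, isolates the heart-rate band, and scores how much of the spectral power sits at a single dominant frequency; a genuine face tends to score higher than a frame-by-frame synthetic one.

# Crude photoplethysmography-style check; thresholds and scoring are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, periodogram

FPS = 30.0

def ppg_signal(face_frames: np.ndarray) -> np.ndarray:
    """face_frames: (T, H, W, 3) uint8 crops of the face. Returns a filtered 1-D signal."""
    green = face_frames[..., 1].mean(axis=(1, 2))                        # mean green intensity per frame
    b, a = butter(3, [0.7 / (FPS / 2), 4.0 / (FPS / 2)], btype="band")   # roughly 42-240 bpm
    return filtfilt(b, a, green - green.mean())

def pulse_consistency(signal: np.ndarray) -> float:
    """Fraction of spectral power concentrated at the dominant frequency."""
    freqs, power = periodogram(signal, fs=FPS)
    return float(power.max() / (power.sum() + 1e-9))

# Stand-in clip of random noise; a real face crop would show a stable dominant
# pulse frequency, while frame-by-frame synthesis tends to scramble it.
frames = np.random.randint(0, 255, size=(300, 64, 64, 3), dtype=np.uint8)
print(f"pulse consistency: {pulse_consistency(ppg_signal(frames)):.3f}")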

In their testing, the researchers found that FakeCatcher was able to spot deepfake videos more than 90 percent of the time, and with comparable accuracy it could also determine which of four deepfake tools—Face2Face, NeuralTextures, DeepFakes, or FaceSwap—was used to create the deceptive video. Of course, now that the research and the existence of the FakeCatcher tool have been revealed, those developing deepfake creation tools have the opportunity to improve their own software and ensure that, as deepfake videos are created, those subtle shifts in skin color are included to fool photoplethysmography tools as well. But it’s good while it lasts.

Source: Intel and Binghamton Researchers Unveil FakeCatcher Deepfake Tool

OpenAI Sells out to Microsoft: exclusive license for mega-brain GPT-3 for anything and everything

Microsoft has bagged exclusive rights to use OpenAI’s GPT-3 technology, allowing the Windows giant to embed the powerful text-generating machine-learning model into its own products.

“Today, I’m very excited to announce that Microsoft is teaming up with OpenAI to exclusively license GPT-3, allowing us to leverage its technical innovations to develop and deliver advanced AI solutions for our customers, as well as create new solutions that harness the amazing power of advanced natural language generation,” Microsoft CTO Kevin Scott said on Tuesday.

Right now, GPT-3 is only available to a few teams hand-picked by OpenAI. The general-purpose text-in, text-out tool is accessible via an Azure-hosted API and is being used by, for instance, Reddit to develop automated content-moderation algorithms, and by academics investigating how the language model could be used to spread spam and misinformation at a scale so large it would be difficult to filter out.

GPT-3 won’t be available on Google Cloud, Amazon Web Services etc

Microsoft has been cosying up to OpenAI for a while; it last year pledged to invest $1bn in the San Francisco-based startup. As part of that deal, OpenAI got access to Microsoft’s cloud empire to run its experiments, and Microsoft was named its “preferred partner” for commercial products. Due to the exclusive license now brokered, GPT-3 won’t be available on rival cloud services, such as Google Cloud and Amazon Web Services.

[…]

GPT-3 is a massive model containing 175 billion parameters, and was trained on all manner of text scraped from the internet. It’s able to perform all sorts of tasks, including answering questions, translating languages, writing prose, performing simple arithmetic, and even attempting code generation. Although impressive, it remains to be seen if it can be utilized in products for the masses rather than being an object of curiosity.
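For context, this is roughly what calling GPT-3 via the API mentioned above looked like, assuming you had been granted access and an API key; the engine name, prompt, and parameters follow the 2020-era openai Python client and are purely illustrative.

# Minimal completion request against the hosted GPT-3 API (2020-era client).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="davinci",                       # the full 175-billion-parameter model tier
    prompt="Translate to French: 'The cat sits on the mat.'\nFrench:",
    max_tokens=32,
    temperature=0.3,                        # lower temperature for a more literal answer
)
print(response.choices[0].text.strip())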

Source: Get ready for Clippy 9000: Microsoft exclusively licenses OpenAI’s mega-brain GPT-3 for anything and everything • The Register

Because everybody loves a monopolist

Official launch of ELLIS Units – 15th of September 2020! | European Lab for Learning & Intelligent Systems

The European Laboratory for Learning and Intelligent Systems (ELLIS) is officially launching its 30 ELLIS research units on Tuesday, September 15. Since the first 17 units were announced in December 2019, the ELLIS initiative has gained significant momentum, adding another 13 units at top research institutions across Europe. To highlight this rapid progress toward securing the future of European AI research, each unit will present its research focus. While an in-person launch was initially planned for spring at the Royal Society in London, the event was postponed because of the global COVID-19 pandemic and will now take place online. The event will be open to the general public via livestream. A detailed agenda and the YouTube link will be posted shortly.

Source: Official launch of ELLIS Units – 15th of September 2020! | European Lab for Learning & Intelligent Systems