Now, Bengio says deep learning needs to be fixed. He believes it won’t realize its full potential, and won’t deliver a true AI revolution, until it can go beyond pattern recognition and learn more about cause and effect. In other words, he says, deep learning needs to start asking why things happen.
[…]
Machine learning systems, deep learning included, are highly task-specific: each is trained for one particular job, like recognizing cats in images or spoken commands in audio. Since bursting onto the scene around 2012, deep learning has demonstrated a particularly impressive ability to recognize patterns in data, and it has been put to many practical uses, from spotting signs of cancer in medical scans to uncovering fraud in financial data.
But deep learning is fundamentally blind to cause and effect. Unlike a real doctor, a deep learning algorithm cannot explain why a particular image may suggest disease. This means deep learning must be used cautiously in critical situations.
[…]
At his research lab, Bengio is working on a version of deep learning capable of recognizing simple cause-and-effect relationships. He and colleagues recently posted a research paper outlining the approach. They used a dataset that maps causal relationships between real-world phenomena, such as smoking and lung cancer, in terms of probabilities. They also generated synthetic datasets of causal relationships.
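The distinction this line of work targets can be sketched with a toy model. All numbers and variable names below are hypothetical, invented for illustration, not taken from the paper's dataset: when a hidden factor influences both smoking and cancer, the observed correlation P(cancer | smoking) differs from the causal effect P(cancer | do(smoking)), which a back-door adjustment over the hidden factor recovers.

```python
# Toy two-variable causal model (hypothetical numbers, NOT the paper's
# actual dataset): a hidden factor G influences both smoking S and
# cancer C, so the observed correlation differs from the causal effect.
P_G = {0: 0.7, 1: 0.3}                       # P(G = g)
P_S_given_G = {0: 0.2, 1: 0.8}               # P(S = 1 | G = g)
P_C_given_SG = {(0, 0): 0.05, (0, 1): 0.10,  # P(C = 1 | S = s, G = g)
                (1, 0): 0.15, (1, 1): 0.30}

def p_s_given_g(s, g):
    return P_S_given_G[g] if s == 1 else 1 - P_S_given_G[g]

def p_c_given_s(s):
    """Observational P(C = 1 | S = s): what a pure pattern recognizer sees."""
    num = sum(P_C_given_SG[(s, g)] * p_s_given_g(s, g) * P_G[g] for g in (0, 1))
    den = sum(p_s_given_g(s, g) * P_G[g] for g in (0, 1))
    return num / den

def p_c_do_s(s):
    """Interventional P(C = 1 | do(S = s)) via back-door adjustment over G."""
    return sum(P_C_given_SG[(s, g)] * P_G[g] for g in (0, 1))

print(p_c_given_s(1))  # ~0.245: correlation, inflated by the confounder
print(p_c_do_s(1))     # 0.195: the causal effect of smoking itself
```

The gap between the two numbers is exactly what a correlation-only learner cannot see, and what a causal model is supposed to expose.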
[…]
Others believe the focus on deep learning may be part of the problem. Gary Marcus, a professor emeritus at NYU and author of Rebooting AI: Building Artificial Intelligence We Can Trust, a recent book highlighting the limits of deep learning, says Bengio’s interest in causal reasoning signals a welcome shift in thinking.
“Too much of deep learning has focused on correlation without causation, and that often leaves deep learning systems at a loss when they are tested on conditions that aren’t quite the same as the ones they were trained on,” he says.
Marcus adds that the lesson from human experience is obvious. “When children ask ‘why?’ they are asking about causality,” he says. “When machines start asking why, they will be a lot smarter.”
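Marcus’s point about test conditions differing from training conditions can be illustrated with a small, purely synthetic sketch (every name and number here is made up for illustration): a model that latches onto a spurious correlate looks fine on data like its training set but collapses when the correlation shifts, while a model that learned the causal feature is unaffected.

```python
import random
random.seed(0)

def make_data(n, spurious_agree):
    """Each example: (causal_feature, spurious_feature, label). The label
    is fully determined by the causal feature; the spurious feature merely
    agrees with the label with probability `spurious_agree`."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        causal = label
        spurious = label if random.random() < spurious_agree else 1 - label
        data.append((causal, spurious, label))
    return data

# During training the spurious cue almost always agrees with the label;
# at test time that correlation is flipped.
train = make_data(1000, spurious_agree=0.95)
test = make_data(1000, spurious_agree=0.05)

# A "model" that latched onto the spurious correlate vs. one that
# learned the causal feature, evaluated on the shifted test set:
spurious_acc = sum(s == y for _, s, y in test) / len(test)
causal_acc = sum(c == y for c, _, y in test) / len(test)
print(spurious_acc, causal_acc)  # roughly 0.05 vs. exactly 1.0
```

Nothing in the correlations of the training set alone distinguishes the two features; that is the gap causal reasoning is meant to close.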
This is a hugely important – and old – question in this field. Without the ‘why’, humans must simply trust answers from an AI that seem intuitively strange. When you’re talking about health care, or human-related matters such as liability, ‘just accept what I’m telling you’ isn’t good enough.
Citing sources familiar with the program, Bloomberg reported Thursday that “dozens” of workers for the e-commerce giant who are based in Romania and India are tasked with reviewing footage collected by Cloud Cams—Amazon’s app-controlled, Alexa-compatible indoor security devices—to help improve AI functionality and better determine potential threats. Bloomberg reported that at one point, these human workers were each responsible for reviewing and annotating roughly 150 security snippets, each up to 30 seconds long, per working day.
Two sources who spoke with Bloomberg told the outlet that some clips depicted private imagery, such as what Bloomberg described as “rare instances of people having sex.” An Amazon spokesperson told Gizmodo that reviewed clips are submitted either through employee trials or customer feedback submissions for improving the service.
[…]
So to be clear, customers are sharing clips for troubleshooting purposes, but they aren’t necessarily aware of what happens with that clip after doing so.
More troubling, however, is an accusation from one source who spoke with Bloomberg that some of the human workers tasked with annotating the clips may be sharing them with people outside their restricted teams, even though reviews happen in a restricted area that prohibits phones. When asked about this, a spokesperson told Gizmodo by email that Amazon’s rules “strictly prohibit employee access to or use of video clips submitted for troubleshooting, and have a zero tolerance policy for abuse of our systems.”
[…]
To be clear, it’s not just Amazon that’s been accused of allowing human workers to listen in on whatever is going on in your home. Motherboard has reported that both Xbox recordings and Skype calls are reviewed by human contractors, and Apple, too, was accused of capturing sensitive recordings that contractors had access to. The fact is these systems just aren’t ready for primetime and need human intervention to function and improve, a fact that tech companies have successfully downplayed in favor of appearing to be magical wizards of innovation.
System76, the Denver-based Linux PC manufacturer and developer of Pop!_OS, has some stellar news for those of us who prefer our laptops a little more open. Later this month the company will begin shipping two of its laptop models with its Coreboot-powered open source firmware.
(Image: The Darter Pro laptop. Credit: System76)
System76 begins taking pre-orders today for both the Galago Pro and Darter Pro laptops. The systems will ship later in October and include the company’s Coreboot-based open source firmware, which was previously teased at the 2019 Open Source Firmware Conference.
(Coreboot, formerly known as LinuxBIOS, is a software project aimed at replacing proprietary firmware found in most computers with a lightweight firmware designed to perform only the minimum number of tasks necessary to load and run a modern 32-bit or 64-bit operating system.)
What’s so great about ripping out the proprietary firmware included in machines like this and replacing it with an open alternative? To begin with, it’s leaner. System76 claims that users can boot from power off to the desktop 29% faster with its Coreboot-based firmware.
The U.S. is slowly being gripped by a flooding crisis as seas rise and waterways overflow with ever more alarming frequency. One idea at the forefront of helping Americans cope is so-called managed retreat: moving away from affected areas and letting former neighborhoods return to nature. The idea is increasingly in vogue as it becomes clearer that barriers alone won’t keep floodwaters at bay.
But new research shows a startling finding: Americans are already retreating. More than 40,000 households have been bought out by the federal government over the past three decades. The research, published in Science Advances on Wednesday, also reveals disparities in which communities opt in to buyout programs and, even more granularly, which households take the offers and relocate. It answers questions that have been open for a while and raises a whole host of new ones that will only become more pressing in the coming decades as Earth continues to warm.
“People are using buyouts and doing managed retreat,” AR Siders, a climate governance researcher at Harvard and study author, said during a press call. “No matter how difficult managed retreat sounds, we know that there are a thousand communities in the United States, all over the country, who have made it work. I want to hear their stories, I want to know how they did it.”
“The anti-climate effort has been largely underwritten by conservative billionaires,” says the Guardian, “often working through secretive funding networks. They have displaced corporations as the prime supporters of 91 think tanks, advocacy groups and industry associations which have worked to block action on climate change.”