Wherever artificial intelligence is deployed, you will find it has failed in some amusing way. Take the strange errors made by translation algorithms that confuse having someone for dinner with, well, having someone for dinner.
But as AI is used in ever more critical situations, such as driving autonomous cars, making medical diagnoses, or drawing life-or-death conclusions from intelligence information, these failures will no longer be a laughing matter. That’s why DARPA, the research arm of the US military, is addressing AI’s most basic flaw: it has zero common sense.
“Common sense is the dark matter of artificial intelligence,” says Oren Etzioni, CEO of the Allen Institute for AI, a research nonprofit based in Seattle that is exploring the limits of the technology. “It’s a little bit ineffable, but you see its effects on everything.”
DARPA’s new Machine Common Sense (MCS) program will run a competition that asks AI algorithms to make sense of questions like this one:
A student puts two identical plants in the same type and amount of soil. She gives them the same amount of water. She puts one of these plants near a window and the other in a dark room. The plant near the window will produce more:

(A) oxygen
(B) carbon dioxide
(C) water
A computer program needs some understanding of the way photosynthesis works in order to tackle the question. Simply feeding a machine lots of previous questions won’t solve the problem reliably.
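To make the pitfall concrete, here is a toy baseline in Python (a hypothetical sketch, not DARPA's evaluation code; the scoring rule is invented for illustration). It answers the multiple-choice question by picking the option whose words overlap most with the question, the kind of surface pattern a statistics-driven system can latch onto:

```python
import re

# Hypothetical sketch: answer the plant question by surface word overlap,
# with no model of photosynthesis at all.

QUESTION = (
    "A student puts two identical plants in the same type and amount of "
    "soil. She gives them the same amount of water. She puts one of these "
    "plants near a window and the other in a dark room. The plant near "
    "the window will produce more"
)
OPTIONS = {"A": "oxygen", "B": "carbon dioxide", "C": "water"}

def words(text):
    """Lowercase the text and extract its alphabetic words as a set."""
    return set(re.findall(r"[a-z]+", text.lower()))

# Score each option by how many of its words also appear in the question.
scores = {key: len(words(text) & words(QUESTION)) for key, text in OPTIONS.items()}
print(scores)                        # {'A': 0, 'B': 0, 'C': 1}
print(max(scores, key=scores.get))   # picks 'C' -- wrong
```

The baseline confidently picks (C) because "water" happens to appear in the question; choosing (A) instead requires knowing that a plant in sunlight photosynthesizes and releases oxygen, knowledge that no amount of lexical pattern matching supplies.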
These benchmark questions will focus on language, both because it so easily trips machines up and because it makes testing relatively straightforward. Etzioni says the questions offer a way to measure progress toward common-sense understanding, which will be crucial.
Tech companies are busy commercializing machine-learning techniques that are powerful but fundamentally limited. Deep learning, for instance, makes it possible to recognize words in speech or objects in images, often with incredible accuracy. But the approach typically relies on feeding large quantities of labeled data into a big neural network: raw audio signals or image pixels, each paired with the correct answer. The system can learn to pick out important patterns, but it can easily make mistakes because it has no concept of the broader world.
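A miniature version of that supervised pipeline makes its limits visible. The sketch below (all data, dimensions, and hyperparameters invented for illustration; not any company's production system) fits a one-layer logistic model to fake labeled "images" by gradient descent. It reaches high accuracy by exploiting a brightness statistic while representing nothing about what the images depict:

```python
import numpy as np

# Minimal sketch of supervised training on labeled data: raw inputs
# (synthetic 8x8 "images", flattened to 64 features) paired with labels,
# fit by gradient descent. Everything here is made up for illustration.

rng = np.random.default_rng(0)
n, d = 200, 64
labels = rng.integers(0, 2, size=n)                  # class 0 or 1
# Class-1 images are simply brighter on average -- the only learnable pattern.
images = rng.normal(size=(n, d)) + labels[:, None] * 1.5

weights, bias, learning_rate = np.zeros(d), 0.0, 0.1
for step in range(500):
    # Forward pass: p = sigmoid(x @ w + b).
    probs = 1.0 / (1.0 + np.exp(-(images @ weights + bias)))
    # Cross-entropy gradient with respect to weights and bias.
    error = probs - labels
    weights -= learning_rate * (images.T @ error) / n
    bias -= learning_rate * error.mean()

probs = 1.0 / (1.0 + np.exp(-(images @ weights + bias)))
print(f"training accuracy: {((probs > 0.5) == labels).mean():.2f}")
```

The model finds a statistical regularity that predicts the label almost perfectly, yet it holds no concept of plants, windows, or anything else in the world its inputs supposedly came from; that gap is exactly what the common-sense benchmarks are meant to probe.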