When AI asks dumb questions, it gets smart fast

If someone showed you a photo of a crocodile and asked whether it was a bird, you might laugh—and then, if you were patient and kind, help them identify the animal. Such real-world, and sometimes dumb, interactions may be key to helping artificial intelligence learn, according to a new study in which the strategy dramatically improved an AI’s accuracy at interpreting novel images. The approach could help AI researchers more quickly design programs that do everything from diagnosing disease to directing robots or other devices around homes on their own.

[…]

To help AIs expand their understanding of the world, researchers are now trying to develop a way for computer programs to both locate gaps in their knowledge and figure out how to ask strangers to fill them—a bit like a child asking a parent why the sky is blue. The ultimate aim in the new study was an AI that could correctly answer a variety of questions about images it had not seen before.

[…]

In the new study, researchers at Stanford University led by Ranjay Krishna, now at the University of Washington, Seattle, trained a machine-learning system not only to spot gaps in its knowledge but also to compose (often dumb) questions about images that strangers would patiently answer. (Q: “What is the shape of the sink?” A: “It’s a square.”)

It’s important to think about how AI presents itself, says Kurt Gray, a social psychologist at the University of North Carolina, Chapel Hill, who has studied human-AI interaction but was not involved in the work. “In this case, you want it to be kind of like a kid, right?” he says. Otherwise, people might dismiss it as a troll for asking seemingly ridiculous questions.

The team “rewarded” its AI for writing intelligible questions: When people actually responded to a query, the system received feedback telling it to adjust its inner workings so as to behave similarly in the future. Over time, the AI implicitly picked up lessons in language and social norms, honing its ability to ask questions that were sensical and easily answerable.
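The training signal described above—reward a question only when a human actually answers it—can be caricatured in a few lines. Everything below (the question templates, the per-template weights, the update rule and clamp value) is invented for illustration; the paper’s system uses neural networks, not a lookup table.

```python
import random

# Toy sketch of "reward on response" training (NOT the paper's actual model):
# a generator keeps a preference weight per question template and reinforces
# templates whose questions people answered, so over time it favors questions
# that are intelligible and easy to respond to.

TEMPLATES = [
    "What is the shape of the sink?",
    "Is this photo taken at night?",
    "What type of dessert is that in the picture?",
]

class QuestionPolicy:
    def __init__(self, templates):
        # Higher weight -> that template is sampled more often.
        self.weights = {t: 1.0 for t in templates}

    def sample(self, rng):
        # Draw a template with probability proportional to its weight.
        total = sum(self.weights.values())
        r = rng.uniform(0, total)
        for t, w in self.weights.items():
            r -= w
            if r <= 0:
                return t
        return t  # numerical fallback

    def update(self, template, answered, lr=0.5):
        # Reward 1 if a human responded, 0 if ignored; nudge the weight.
        reward = 1.0 if answered else 0.0
        self.weights[template] += lr * (reward - 0.5)
        self.weights[template] = max(self.weights[template], 0.05)

rng = random.Random(0)
policy = QuestionPolicy(TEMPLATES)
# Simulate feedback: people answer the first question, ignore the second.
for _ in range(20):
    policy.update(TEMPLATES[0], answered=True)
    policy.update(TEMPLATES[1], answered=False)
next_question = policy.sample(rng)  # now biased toward answerable questions
```

The design choice the study highlights is exactly this reward definition: the system is optimized for getting a response at all, and language and social norms are picked up as a side effect.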

[Image: a piece of coconut cake. Q: “What type of dessert is that in the picture?” A: “hi dear it’s coconut cake, it tastes amazing 🙂” Credit: R. Krishna et al., PNAS, DOI: 10.1073/pnas.2115730119 (2022)]

The new AI has several components, some of them neural networks, complex mathematical functions inspired by the brain’s architecture. “There are many moving pieces … that all need to play together,” Krishna says. One component selected an image on Instagram—say a sunset—and a second asked a question about that image—for example, “Is this photo taken at night?” Additional components extracted facts from reader responses and learned about images from them.
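The division of labor among those “moving pieces” can be sketched as a simple pipeline. The function names and data structures below are hypothetical stand-ins; in the real system each stage is a trained neural component operating on actual Instagram posts.

```python
# Hypothetical sketch of the multi-component loop described above:
# pick an image, ask about it, then mine the reply for a fact to learn from.

def select_image(feed):
    # Component 1: choose an image to ask about (here, trivially the first).
    return feed[0]

def compose_question(image):
    # Component 2: compose a question about the image
    # (in the real system, a neural question generator).
    return f"Is this photo taken at {image['scene']}?"

def extract_fact(image, answer):
    # Component 3: turn a free-form human reply into a stored
    # (image, attribute, value) fact the system can later learn from.
    return (image["id"], "scene", answer.strip().rstrip(".").lower())

knowledge = []                                  # facts learned so far
feed = [{"id": "post_1", "scene": "night"}]     # toy stand-in for Instagram
img = select_image(feed)
question = compose_question(img)                # "Is this photo taken at night?"
reply = "Yes, it's night."                      # simulated human response
fact = extract_fact(img, reply)
knowledge.append(fact)                          # components feed each other
```

The point of the sketch is only the hand-off: each component’s output is the next one’s input, which is why, as Krishna says, they all need to play together.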

Across 8 months and more than 200,000 questions on Instagram, the system’s accuracy at answering questions similar to those it had posed increased by 118%, the team reports today in the Proceedings of the National Academy of Sciences. A comparison system that posted questions on Instagram but was not explicitly trained to maximize response rates improved its accuracy by only 72%, in part because people more frequently ignored it.

The main innovation, Jaques says, was rewarding the system for getting humans to respond, “which is not that crazy from a technical perspective, but very important from a research-direction perspective.” She’s also impressed by the large-scale, real-world deployment on Instagram. (Humans checked all AI-generated questions for offensive material before posting them.)

[…]


Source: When AI asks dumb questions, it gets smart fast | Science | AAAS