About 10 percent of Alphabet’s market value – some $120 billion – was wiped out this week after Google proudly presented Bard, its answer to Microsoft’s next-gen AI offerings, and the system bungled a simple question.
In a promotional video to show off Bard, a web search assistant to compete against Microsoft’s ChatGPT-enhanced Bing, the software answered a science question incorrectly, sending Alphabet’s share price down amid an overall lackluster launch by the Chocolate Factory.
In an example query-response offered by Google’s spinners, Bard was asked to explain discoveries made by NASA’s James Webb Space Telescope (JWST) at a level a nine-year-old would understand. Some of the text generated by the model, however, was wrong.
Bard claimed “JWST took the very first pictures of a planet outside of our own solar system,” yet the first image of just such an exoplanet, 2M1207b, was actually captured by the European Southern Observatory’s Very Large Telescope in 2004, according to NASA.
This is a somewhat harsh reaction by the market, considering that ChatGPT carries all kinds of disclaimers telling you not to trust it (and you shouldn't) and that Bing will make mistakes too. The problem is that these systems are built from highly imperfect human input, so they will never be perfect. Their output needs to be fact-checked, just like the results on the first page of a search engine, which aren't perfect either. Expecting perfection is unrealistic.