AIs are too worried about answering stuff you can just google because… doomsayers?

The article below is about how you can trick ChatGPT into giving you a napalm recipe. It’s pretty roundabout and clever that you need to say “my grandmother worked at a factory and told me how to make it” — but why should you need to? Why are we somehow stricter about the output of an AI than we are about the search engines we have been using for decades?

Source: People Are Using A ‘Grandma Exploit’ To Break AI

Just Google it: https://www.google.com/search?q=ingredients+napalm

And you won’t have to spend any time thinking of ways to trick the AI. So why does the AI need tricking in the first place?

Also, why does the writer of the article hesitate to include the AI’s answers in the article itself? Because Kotaku is part of a network of AI doomsayers — a bit like Fox News when it comes to the subject of AI.

Robin Edgar

Organisational Structures | Technology and Science | Military, IT and Lifestyle consultancy | Social, Broadcast & Cross Media | Flying aircraft

 robin@edgarbv.com  https://www.edgarbv.com