How we fooled Google’s AI into thinking a 3D-printed turtle was a gun

Students at MIT in the US claim they have developed an algorithm for creating 3D objects and pictures that trick image-recognition systems into severely misidentifying them. Think toy turtles classified as rifles, and baseballs as cups of coffee.
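
To make the idea concrete, here is a minimal sketch of the general adversarial-perturbation trick the article is describing: a small, targeted pixel change that flips a classifier's label. This is the basic fast-gradient-sign (FGSM) approach, not the MIT group's actual method (which optimised perturbations to survive 3D printing and viewing-angle changes); the model choice, file name, and epsilon value below are illustrative assumptions.

```python
# Illustrative sketch: nudge an image's pixels so a pretrained ImageNet
# classifier changes its prediction. Requires torch/torchvision; "turtle.jpg"
# is a placeholder input.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(pretrained=True).eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
x = preprocess(Image.open("turtle.jpg")).unsqueeze(0)
x.requires_grad_(True)

# torchvision models expect ImageNet-normalised inputs.
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def predict(img):
    return model((img - mean) / std)

# Original prediction.
logits = predict(x)
orig_class = logits.argmax(dim=1)

# One gradient step that increases the loss for the predicted class,
# i.e. pushes the image away from its current label.
loss = torch.nn.functional.cross_entropy(logits, orig_class)
loss.backward()

epsilon = 0.01  # perturbation budget: small enough to be hard to notice
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

adv_class = predict(x_adv).argmax(dim=1)
print(f"original class {orig_class.item()} -> adversarial class {adv_class.item()}")
```

The point is that the perturbation is tiny in pixel terms yet reliably changes the output; the MIT work extends this so the misclassification persists when the perturbed texture is printed onto a physical object and photographed from many angles.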

It’s well known that machine-learning software can be easily hoodwinked: Google’s AI-in-the-cloud can be misled by noise; protesters and activists can wear scarves or glasses to fool facial-recognition systems; intelligent antivirus can be outsmarted; and so on. It’s a crucial topic of study because, as surveillance equipment and similar technology relies more and more on neural networks to quickly identify things and people, there is less and less room for error.
