On Wednesday, I phoned my bank’s automated service line. To start, the bank asked me to say in my own words why I was calling. Rather than speak out loud, I clicked a file on my nearby laptop to play a sound clip: “check my balance,” my voice said. But this wasn’t actually my voice. It was a synthetic clone I had made using readily available artificial intelligence technology.
“Okay,” the bank replied. It then asked me to enter or say my date of birth as the first piece of authentication. After typing that in, the bank said “please say, ‘my voice is my password.’”
Again, I played a sound file from my computer. “My voice is my password,” the voice said. The bank’s security system spent a few seconds authenticating the voice.
“Thank you,” the bank said. I was in.
I couldn’t believe it—it had worked. I had used an AI-powered replica of a voice to break into a bank account. After that, I had access to the account information, including balances and a list of recent transactions and transfers.
Banks across the U.S. and Europe use this sort of voice verification to let customers log into their accounts over the phone. Some banks tout voice identification as equivalent to a fingerprint, a secure and convenient way for customers to interact with their bank. But this experiment shatters the idea that voice-based biometric security provides foolproof protection in a world where anyone can now generate synthetic voices cheaply, or sometimes for free. I used a free voice creation service from ElevenLabs, an AI voice company.
Abuse of AI voices now extends to fraud and hacking. Some experts I spoke to after running this experiment are calling for banks to ditch voice authentication altogether, even if real-world abuse remains rare for now.
Source: How I Broke Into a Bank Account With an AI-Generated Voice