I couldn’t believe it: it had worked. I had used an AI-powered replica of a voice to break into a bank account. Once in, I had access to the account information, including the balance and a list of recent transactions and transfers.

Banks across the U.S. and Europe use this sort of voice verification to let customers log into their accounts over the phone. Some banks tout voice identification as equivalent to a fingerprint, a secure and convenient way for customers to interact with their bank. But this experiment shatters the idea that voice-based biometric security provides foolproof protection in a world where anyone can now generate synthetic voices cheaply, or in some cases for free. I used a free voice-creation service from ElevenLabs, an AI-voice company, which means abuse of AI voices can now extend to fraud and hacking. Some experts I spoke to after the experiment are calling for banks to ditch voice authentication altogether, even though real-world abuse may still be rare at this time.

A Lloyds Bank spokesperson said in a statement: “Voice ID is an optional security measure, however we are confident that it provides higher levels of security than traditional knowledge-based authentication methods, and that our layered approach to security and fraud prevention continues to provide the right level of protection for customers’ accounts, while still making them easy to access when needed.”
The Consumer Financial Protection Bureau, one of the U.S. agencies that regulates the financial industry, said: “The CFPB is concerned with data security, and companies are on notice that they’ll be held accountable for shoddy practices. We expect that any firm follow the law, regardless of technology used.”
Read more of this story at Slashdot.