Authentication matters to an ever-growing list of industries, but banks and financial institutions perhaps face the most severe consequences of getting it wrong.
Increasingly, they’re looking to voice biometrics as a secure and convenient way of providing access to their services. Customers simply have to speak to an authentication system that can recognise unique markers and almost instantly confirm who they are. But is this really the end of bank fraud or simply another challenge for would-be fraudsters to rise to?
Banks face a tricky balance when authenticating customers: the process needs to provide enough security to prevent fraudulent access without being so cumbersome that customers struggle with the services or avoid them altogether.
“Which card reader do I need to use for this account?” “Which aunt’s birthday is my memorable date for this bank?” Biometric authentication bypasses a lot of these issues by allowing users to present themselves, or at least measurable aspects of themselves, as proof of identity, most commonly their fingerprints, face and voice.
And voice recognition has the advantage of simplicity as the user doesn’t need any technology more sophisticated than a landline phone.
Banks and the companies that provide their voice biometrics make bold claims about the technology's ability to distinguish individuals' voices. Hundreds of speech characteristics are analysed, from accent and speaking speed to the physical characteristics of the vocal cords.
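To give a flavour of what that means in practice, here is a deliberately simplified Python sketch of the underlying idea: a recording is reduced to a numeric "voiceprint" and compared against the one captured when the customer enrolled. The spectral averaging and the 0.95 threshold below are crude stand-ins for the learned models and tuned thresholds production systems rely on, not a description of any bank's actual implementation.

```python
# Toy sketch of a voiceprint check: reduce a speech sample to a numeric
# fingerprint and compare it with the one captured at enrolment. Real systems
# use learned speaker embeddings covering hundreds of characteristics; the
# crude spectral average here is illustrative only.
import numpy as np

def voiceprint(samples: np.ndarray, frame_len: int = 512) -> np.ndarray:
    """Average log-magnitude spectrum across frames, a stand-in embedding."""
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1))).mean(axis=0)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two voiceprints (1.0 means identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In a real deployment these would be actual recordings at the same sample
# rate; random noise just keeps the sketch runnable.
enrolled_audio = np.random.default_rng(0).standard_normal(3 * 16000)
caller_audio = np.random.default_rng(1).standard_normal(3 * 16000)
# Threshold chosen arbitrarily for illustration.
accepted = similarity(voiceprint(enrolled_audio), voiceprint(caller_audio)) > 0.95
```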
But in practice the technology hasn’t always been perfect. In 2017 a BBC reporter and his non-identical twin brother managed to bypass HSBC’s system, albeit only after eight attempts. HSBC subsequently claimed to have increased the sensitivity of their system.
War of the neural networks?
However, it’s not just a sibling with a similar voice you need to worry about. What if someone tries to access your financial services with your voice or a synthesised version of it?
“Deepfake voice algorithms have already been reported that can perfectly imitate someone’s voice using just a five-second snippet,” says Ray Walsh of privacy education and review site ProPrivacy.
And the idea that this could be used for financial fraud isn’t theoretical. Walsh points to a 2019 incident in which an energy company was tricked into handing over nearly a quarter of a million pounds after phone conversations with what turned out to be deepfaked recordings of their parent company’s chief executive.
The deepfake in that incident fooled human beings, not a voice biometrics system. Voice recognition algorithms can’t be tricked in the same way that a human on the end of the phone could be. But this doesn’t mean they can’t be fooled by sufficiently advanced deepfakes, perhaps resulting in an artificial intelligence (AI) arms race, with criminals improving their AI deepfakes and financial institutions tweaking the neural networks that power their own voiceprint detection to keep up.
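One way to picture the defensive side of that arms race is as a layered check: the caller has to sound like the right person, and the audio has to pass a separate synthetic-speech detector. The sketch below is hypothetical, with made-up scores and thresholds, but it shows why a deepfake that nails speaker similarity can still, in principle, be caught.

```python
# Hypothetical layering of checks: speaker verification ("is this the right
# person?") plus spoof detection ("is this live speech rather than synthetic
# audio?"). Scores and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class AuthDecision:
    speaker_score: float   # similarity to the enrolled voiceprint, 0..1
    spoof_score: float     # estimated likelihood the audio is synthetic, 0..1
    accepted: bool

def authenticate(speaker_score: float, spoof_score: float,
                 speaker_threshold: float = 0.9,
                 spoof_threshold: float = 0.2) -> AuthDecision:
    accepted = speaker_score >= speaker_threshold and spoof_score <= spoof_threshold
    return AuthDecision(speaker_score, spoof_score, accepted)

# A convincing deepfake may score highly on speaker similarity but should,
# in principle, be flagged by the spoof detector -- hence the arms race.
print(authenticate(speaker_score=0.97, spoof_score=0.65))  # deepfake: rejected
print(authenticate(speaker_score=0.96, spoof_score=0.05))  # genuine caller: accepted
```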
Can voice recognition keep up as we age?
Deepfakes aren't the only threat to voice recognition systems: like the rest of us, the voices they rely on fall victim to age. Voices change over time, and a 2017 study by voice authentication company Pindrop found that over two years the failure rate of authentication more than doubled.
If someone uses a voice biometric service frequently, it is theoretically possible to recalibrate their stored voiceprint with fresh samples each time they sign in, compensating for some of this drift. But that introduces its own risks, potentially making the mechanism easier to compromise.
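A minimal sketch of that recalibration idea, and of the risk it carries, might look like the following: the stored voiceprint is nudged towards each new high-confidence sample, which tracks gradual vocal change but would also let an attacker who repeatedly clears the bar slowly drag the template towards their own voice. The update rule, thresholds and learning rate are all illustrative assumptions, not a real vendor's scheme.

```python
import numpy as np

def update_voiceprint(stored: np.ndarray, new_print: np.ndarray,
                      match_score: float, min_score: float = 0.97,
                      learning_rate: float = 0.05) -> np.ndarray:
    """Nudge the enrolled voiceprint towards a fresh sample after a
    high-confidence sign-in (an exponential moving average). Adapting on
    marginal matches is what could let an impostor poison the template."""
    if match_score < min_score:
        return stored  # only adapt when the match was very confident
    return (1 - learning_rate) * stored + learning_rate * new_print

# e.g. stored = update_voiceprint(stored, latest_print, match_score=0.985)
```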
And banking customers don't necessarily call that often. A survey by Pindrop found that almost half of customers called only once in an eight-month period, long enough for vocal changes to stop the system verifying them and force a fall-back to alternative verification methods, which could be more easily compromised.
This can, in theory, be compensated for: neural networks can be trained to allow for the typical effects of ageing. Banking voicetech provider Nuance say their system should let a customer create a voiceprint at 20 and not need to update it for 60 years, though it will obviously take a while to find out whether that holds in practice.
The convenience of voice biometrics as an authentication method makes its appeal obvious, but it remains to be seen exactly how viable the technology will be in the long term. And for now, even if financial institutions are confident in their systems’ ability to sniff out deepfakes and predict how voices will change with age, their customers might not share that optimism.
After all, a 2019 survey by Paysafe Group found that 56 per cent of consumers in North America and Europe have concerns about biometrics and 81 per cent prefer the traditional password-based approach.
As much as banks are pushing them to, customers may not be quite ready to say that, figuratively or literally, “my voice is my password”.