How safe is your voice as an identity marker? As biometric technology continues to make strides, opinion is split on whether voice tech is a blessing or a curse when it comes to fighting fraud.
Biometrics are based on physical or behavioural measurements, such as facial features or an individual’s hand movements. Voice scans authenticate a person’s identity using vocal characteristics such as pitch and intensity, which are compared against an existing database of voice samples.
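In outline, such a check reduces a caller’s audio to a compact “voiceprint” and measures how close it sits to the sample on file. The sketch below illustrates the idea in Python; the feature extraction is a deliberately crude stand-in for the learned speaker-verification models real systems use, and the similarity threshold is an arbitrary assumption.

```python
import numpy as np

def extract_voiceprint(audio: np.ndarray) -> np.ndarray:
    # Toy stand-in for a real speaker-verification model: summarise the
    # signal with a few crude statistics. Production systems use learned
    # embeddings trained on many speakers, not hand-picked features.
    energy = float(np.mean(audio ** 2))                        # intensity proxy
    zcr = float(np.mean(np.abs(np.diff(np.sign(audio))))) / 2  # rough pitch proxy
    peak = float(np.max(np.abs(audio)))
    return np.array([energy, zcr, peak])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def matches_enrolled(call_audio: np.ndarray,
                     enrolled_voiceprint: np.ndarray,
                     threshold: float = 0.95) -> bool:
    # Accept the caller only if their voiceprint is close enough to the
    # sample on file; the threshold trades false accepts for false rejects.
    return cosine_similarity(extract_voiceprint(call_audio),
                             enrolled_voiceprint) >= threshold
```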
HSBC UK’s voice ID technology prevented £249 million worth of fraud in the last year, according to the bank. Since its launch in 2016, the technology has prevented £981 million of customers’ money from falling into the hands of fraudsters, with attempted fraud down 50% year-on-year as of May 2021.
“Telephone fraudsters may attempt to impersonate customers by stealing or guessing personal information to pass security checks, but replicating someone’s voice is much more difficult,” says David Callington, head of fraud at HSBC UK.
Voice ID detects whether the voice matches the one on file for the customer “and therefore whether the caller is genuine”, Callington explains. The bank can adjust the system’s security settings: for example, limiting the number of attempts a caller can make before manual authorisation is required. It regularly reviews and updates these settings to enhance security, he adds.
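An attempt limiter of the kind Callington describes might be wired up along these lines. This is a minimal sketch; the three-attempt limit is an assumed, configurable value, not HSBC’s actual setting.

```python
class VoiceIdGate:
    """Escalates to manual authorisation after repeated failed voice checks."""
    MAX_ATTEMPTS = 3  # assumed configurable limit, not HSBC's actual setting

    def __init__(self):
        self.failed: dict[str, int] = {}  # caller_id -> consecutive failures

    def check(self, caller_id: str, voice_matched: bool) -> str:
        if voice_matched:
            self.failed.pop(caller_id, None)  # reset the counter on success
            return "authenticated"
        self.failed[caller_id] = self.failed.get(caller_id, 0) + 1
        if self.failed[caller_id] >= self.MAX_ATTEMPTS:
            return "manual_authorisation_required"
        return "retry"
```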
NatWest also uses voice biometrics as an alternative to security mechanisms based on passwords or other static identifiers, which can be stolen or forgotten. The bank deploys a voice biometric solution from AI-based speech recognition firm Nuance, which screens incoming calls and compares voice characteristics – including pitch, cadence, and accent – to a digital library of voices associated with fraud against the bank. The software quickly flags suspicious calls and alerts the call centre agent to potential fraud attempts.
As well as a library of “bad” voices, NatWest agents now have a “whitelist” of genuine customer voices that can be used for rapid authentication, without the need for customers to remember passwords and other identifying information.
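Put together, the screening step can be pictured as two lookups over the same voiceprint: one against a blocklist of known-fraud voices, one against the whitelist of enrolled customers. The sketch below is an illustration of that flow, not Nuance’s actual API, and reuses the toy cosine_similarity helper from the earlier example.

```python
def screen_call(voiceprint, fraud_library, genuine_customers, threshold=0.9):
    # Illustrative flow only, not Nuance's actual API.
    # 1) Blocklist: does the caller sound like a voice already linked to fraud?
    for fraud_vp in fraud_library:
        if cosine_similarity(voiceprint, fraud_vp) >= threshold:
            return "alert_agent_possible_fraud"
    # 2) Whitelist: does the caller match an enrolled genuine customer?
    for customer_id, enrolled_vp in genuine_customers.items():
        if cosine_similarity(voiceprint, enrolled_vp) >= threshold:
            return f"fast_track_authenticated:{customer_id}"
    # Neither list matched: fall back to standard security questions.
    return "standard_checks"
```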
Jason Costain, head of fraud prevention at NatWest, says the bank “can detect when we get a fraudulent voice coming in across our network as soon as it happens”. Voice biometric technology is giving it a clear picture of what its customers sound like – and what criminal voices sound like, too.
“Using a combination of biometric and behavioural data, we now have far greater confidence that we are speaking to our genuine customers and keeping them safe.”
War of attrition
However, the rise of “deepfakes” means that voices can be cloned and put to fraudulent ends. As the technology improves and becomes more widely available, fraudsters follow the money, says Susan Morrow, head of R&D at Avoco Secure, a digital identity specialist, and build systems that turn the same voice-cloning techniques against the defences.
While biometric technology is often viewed as the ultimate in authentication and verification, “this is a war of attrition, and voice biometrics – like any other tech – can only be seen as risk-reduction, not a cure,” says Morrow. “Just as deepfakes for video have arisen, deepfakes for audio will increasingly be used for crimes that involve impersonation.”
So how reliable is voice as a biometric marker, and should banks and public services rely on it? Security is not achieved by a single measure, especially when a system has multiple moving parts, as is the case with payments, says Morrow.
“Voice biometric is a useful measure but it is only part of an overall system, and it will be exploited. As with any system, security measures need to be part of an ecosystem of checks and measures.”
As customers part with their biometric data, there’s also an issue of trust.
Research by identity and authentication firm Callsign shows that just 38% of consumers feel comfortable using static biometrics, such as fingerprint ID or facial recognition, to confirm their identity when using a service or buying a product.
“The problem with static biometrics is that it’s intrusive and not privacy preserving,” says Chris Stephens, head of solution engineering – UK, Europe and South Africa at Callsign. “Static biometrics are also prone to inherent biases and once compromised, there is nothing anyone can do to stop attackers getting in.”
However, a recent survey by GetApp, a Gartner company, shows that younger generations seem more comfortable than older ones with the idea of using biometric technology such as voice scans. More than half of Generation Z (born roughly from the mid-1990s to the early 2010s) said they had voluntarily shared biometric data with a private company, compared with 29% of over-50s.
“These results should not come as a surprise, as a third of millennials and Generation Z have most probably had experience with this type of technology, for example with chatbots and voice-activated devices such as Siri and Amazon Alexa,” says Sonia Navarrete, senior content analyst at GetApp.
Layering verification
Organisations are clearly reaping the rewards of their investments in voice biometrics, particularly banks and financial services companies. However, it might be wise to view these systems as part of a broader, holistic approach to anti-fraud measures.
There are security limitations if businesses focus solely on voice technology, says Stephens. However, by layering in other verification signals – for example, behavioural biometrics such as a user’s location or the way they use a mouse – consumers can still access services such as online banking quickly, easily and securely.
“This also means that businesses only hold the information that is completely necessary, thereby preserving privacy and building trust with customers.”
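Conceptually, layering reduces to a risk score in which no single signal is decisive. The weights and thresholds in the sketch below are invented for illustration; they are not Callsign’s model.

```python
def layered_risk_decision(voice_score: float,
                          location_is_usual: bool,
                          mouse_behaviour_score: float) -> str:
    # Each signal contributes to the decision, but none decides it alone.
    # Weights and thresholds are illustrative assumptions.
    score = (0.5 * voice_score
             + 0.2 * (1.0 if location_is_usual else 0.0)
             + 0.3 * mouse_behaviour_score)
    if score >= 0.8:
        return "allow"
    if score >= 0.5:
        return "step_up"  # e.g. ask for a one-time passcode
    return "block_and_review"
```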