An intelligence agent stalks a corridor, stopping at an imposing security door. Leaning towards a panel beside its frame, they put their face into position and, with a satisfying series of computerised bleeps and boops, their identity is confirmed – the portal opens.
These body-powered gateways were not long ago firmly in the realm of science fiction. Yet today, most of our devices include fingerprint scanners and facial recognition software.
Our unique biological traits make biometrics a secure and convenient means to authenticate our identity. Given the alarming frequency with which plain-text passwords are leaked online, it’s little surprise that consumer technology companies and enterprises are turning to biometric authentication like voice, face ID and fingerprints for their users.
“Increasing the length and complexity of passwords increases their resilience to a so-called brute-force attack, which attempts to try all the possible combinations of characters,” says Steven Furnell, IEEE senior member and professor of cyber security at the University of Nottingham. “Advice from the National Cyber Security Centre is to build longer passwords by combining three random words – unfortunately, this isn’t always guaranteed to work, as some systems and services still insist on checking composition and demand a mix of character types.”
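The arithmetic behind Furnell’s point is simple exponentiation: each extra character multiplies the number of combinations an attacker must try. A rough sketch – the guess rate below is an illustrative assumption, not a measured benchmark:

```python
# Illustrative arithmetic only: how keyspace grows with password length
# and alphabet size. GUESSES_PER_SECOND is a hypothetical attacker rate.

GUESSES_PER_SECOND = 1e10  # assumed rate for a well-resourced offline attacker

def keyspace(alphabet_size: int, length: int) -> int:
    """Number of possible passwords of a given length over an alphabet."""
    return alphabet_size ** length

def worst_case_years(alphabet_size: int, length: int) -> float:
    """Time to exhaust the entire keyspace at the assumed guess rate."""
    seconds = keyspace(alphabet_size, length) / GUESSES_PER_SECOND
    return seconds / (60 * 60 * 24 * 365)

# 8 lowercase letters vs. 16 mixed-case letters and digits:
print(f"{worst_case_years(26, 8):.8f} years")   # a matter of seconds
print(f"{worst_case_years(62, 16):.2e} years")  # astronomically longer
```

This is why three random words beat a short jumble of symbols: length drives the exponent, and the exponent dominates.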
No matter how complex the passphrase, it can’t compete with our biological information. While a single password could leak onto the web and cause all kinds of chaos, flesh and blood are a lot trickier to copy. So biometrics would seem an appropriate alternative. Indeed, Furnell says they’re the “key to non-intrusive, frictionless security”.
But there may be hidden dangers to relinquishing our biological information to the digital sphere, and what feels frictionless today could come at a cost in future.
Take the US’s withdrawal from Afghanistan in 2021. It didn’t just leave citizens to the mercy of the Taliban: it left their biometric data up for grabs too. In 2007, the US trialled a tool there called Handheld Interagency Identity Detection Equipment (HIIDE), using these devices to record fingerprint, iris, and facial data. Initially developed to locate insurgents, US forces extended their use of HIIDE to those who cooperated. Ultimately, the personal data of more than 1.5 million Afghans was matched against a database of biometric data and stored in a centralised repository. When this fell into the wrong hands, it revealed indisputable information about the people the US had worked with and put them at risk.
These databases, whether created intentionally or as accidental by-products, are one of the chief issues of biometric security, says Britain’s Biometrics and Surveillance Camera Commissioner, Fraser Sampson.
“At a very simplistic level, biometrics is about measuring and matching. And for matching, a biometric needs a comparator,” Sampson says. “A collection of comparators is a database. Essentially, if you retain biometric material, you’ve created a database.”
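Sampson’s “measuring and matching” can be sketched in a few lines. This is a deliberately toy model – real systems extract far richer features than the bit strings and threshold assumed here – but it shows why a stored comparator, and therefore a database, is unavoidable:

```python
# Toy sketch of biometric matching: a stored template is the comparator,
# and a fresh sample is accepted if it is "close enough". The bit-string
# encoding and the 20% threshold are illustrative assumptions.

def hamming_distance(a: str, b: str) -> int:
    """Number of differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def matches(sample: str, template: str, threshold: float = 0.2) -> bool:
    """Accept if the fraction of differing bits is below the threshold."""
    return hamming_distance(sample, template) / len(template) < threshold

enrolled_template = "1011001110001011"  # stored at enrolment - the comparator
fresh_sample      = "1011001010001011"  # one noisy bit: same person
impostor_sample   = "0100110001110100"  # far from the template

print(matches(fresh_sample, enrolled_template))     # True
print(matches(impostor_sample, enrolled_template))  # False
```

Because a live reading is never bit-for-bit identical to the enrolled one, matching has to be fuzzy – and every retained template is another row in exactly the kind of database Sampson warns about.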
There are many issues with centralised databases; one is that they’re prone to leaking. When you throw biometric data into the mix, complications that are reminiscent of humanity’s darkest moments come to the fore. In the field of biometric surveillance, says Sampson, one person’s idea of protection may be another person’s oppression.
“While humanitarian uses of biometric identity can save lives, the same biometric data can be used for domination and exploitation,” says Sampson. “It can be used to marginalise and persecute people on grounds of their race, ethnicity and religious beliefs.”
The benefits of biometrics – their uniqueness, their incontestable ties to real humans – can be exploited as weaknesses, too.
The abilities of determined, capable hackers with resources should never be underestimated. While biometrics are generally difficult to spoof right now – especially as, for many hackers, lower-effort attacks are more fruitful – what is true today may not be the case tomorrow, as attackers leverage better computing and become more sophisticated.
“Nobody I’m aware of has yet been able to demonstrate an unhackable system,” Sampson says, “or an unreachable database. The stakes make it worth it, whether that’s hostile state activity or reconnaissance, or commercial hacking. If there’s a commercial value to crack something, you can sell that.”
If current trends continue, the use of biometrics will increase and converge, leaving fewer but bigger databases – reducing the likelihood of breaches and errors, but potentially increasing the impact when they do happen.
However, adds futurist and co-author of Six Principles for Self-Sovereign Biometrics, Heather Vescent, it’s “not impossible, but it is very hard for someone to spoof a biometric”.
It’s unlikely, then, that cyber researchers and attackers will reach an endpoint any time soon; more probably they’ll stay locked in an endless game of Whac-A-Mole, as is the Sisyphean case with traditional perimeter defence.
But, Vescent adds, the best security on the planet can be useless if the underlying data is stored incorrectly.
“Any data is only as secure as the system in which it is stored,” Vescent says. “Sometimes these systems can be easily penetrated due to poor identity and access management protocols. This has nothing to do with the security of biometrics. It’s to do with the security of stored data.
“This means the real concern about using biometrics is about how data is stored, how secure the system is, and how much control the owner of the biometric has over it.”
In Six Principles, Vescent and her co-authors advise that to reduce these risks, biometrics should not be stored in centralised databases.
Crucially, users should own, and be able to control, their biometric data. This data should also be just one component in a wider security landscape – for example, as a supplementary measure, used to provide confidence ratings, or in tandem with other proven techniques such as passphrases.
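One hypothetical way to read that advice: rather than acting as a single gatekeeper, each factor feeds a combined confidence score. The weights and threshold below are illustrative assumptions, not anything prescribed in Six Principles:

```python
# Hedged sketch: biometrics as one signal among several, contributing to
# an overall confidence rating rather than being a single point of failure.
# Weights and the acceptance threshold are illustrative assumptions.

FACTOR_WEIGHTS = {
    "passphrase": 0.4,    # something you know
    "device_token": 0.3,  # something you have
    "biometric": 0.3,     # something you are - supplementary, not sole proof
}

def confidence(signals: dict) -> float:
    """Weighted confidence in [0, 1] from per-factor scores in [0, 1]."""
    return sum(FACTOR_WEIGHTS[name] * score for name, score in signals.items())

def authenticated(signals: dict, threshold: float = 0.75) -> bool:
    """Grant access only when combined confidence clears the threshold."""
    return confidence(signals) >= threshold

# A strong passphrase, a trusted device, and a slightly noisy face match
# together clear the bar; a face match alone would not.
print(authenticated({"passphrase": 1.0, "device_token": 1.0, "biometric": 0.8}))
```

Under a scheme like this, a spoofed or leaked biometric degrades one signal instead of unlocking everything.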
For Sampson, one of the main questions is avoiding the potential for state overreach. To prevent this, biometrics-based initiatives should be conducted in partnership with trusted private sector providers; these should be auditable, transparent, and conducted under agreed governance arrangements and standards.
But before we race towards using our faces, fingerprints, and voices as a salve for all our security woes, perhaps it’s worth properly considering the potential for undesirable, second-order consequences. After all, if we hand over all that makes us uniquely human in the name of security – what do we have left?