The coronavirus pandemic has had a profound impact on many areas of our lives, not least our relationship with the office. Organisations have had their hands forced when it comes to accelerating digital transformation, while the need to identify and onboard both customers and employees digitally has pulled the digital identity sector into sharp focus.
It’s good news for digital ID specialists, but the flipside is it’s also a huge opportunity for cybercriminals engaging in identity fraud.
According to identity management company GBG, identity fraud has hit a tipping point thanks to COVID-19: one in five consumers’ identities have been stolen and one in three are now more worried about fraud.
GBG research also worryingly reveals that more than a quarter (28 per cent) of businesses admit their organisation tolerates high levels of fraud, while half (51 per cent) of those in financial services have seen fraud attempts rise.
This imbalance between criminal opportunity and organisational apathy must be addressed as newer threats combine with old ones to hit businesses and consumers alike. They are threats many businesses are simply unprepared for.
To tip the balance back in the direction of the digital defenders, we must understand what the threats are and how investment in digital identity technology can reduce risk exposure.
1. Frankenstein fraud
Synthetic ID, or Frankenstein, fraud combines genuine and falsified information to create a new identity. According to former US Department of Defense director of security operations Keith Price, who is now cybersecurity director at Littlefish, it is “one of the fastest-growing methods of financial crime”.
In July 2020, two men were arrested in connection with fraudulent applications for pandemic “bounce back” loans, totalling £550,000, using such identities. GBG general manager Gus Tomlinson is concerned this type of fraud could be further complicated by the September 2020 database breach at Nitro PDF. Combined with credit data breaches, Tomlinson says, fraudsters will be able to “present corroboratory evidence of previous financial activity that could be deemed as valid proof”.
Price sees the solution in businesses being “able to pivot security and fraud detection capabilities”, which is where behavioural biometrics and pattern recognition come to the rescue. When 400 synthetic accounts were discovered at a European bank, banking fraud prevention specialist buguroo deployed such a system to look under the covers. It found that “all these accounts were linked by the fact that they were accessed by the same people, via the same device and same networks”, says buguroo vice president Tim Ayling.
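By way of illustration, the sketch below (Python, with hypothetical event fields and an arbitrary threshold; it is not buguroo's system) groups accounts by the device fingerprint and network address they were accessed from and flags any device or IP shared by multiple supposedly unrelated identities, the tell-tale signature of synthetic accounts operated by one fraudster.

```python
from collections import defaultdict

# Hypothetical login events: (account_id, device_fingerprint, source_ip).
# In practice these signals would come from a behavioural biometrics /
# device intelligence platform, not a hard-coded list.
events = [
    ("acct-001", "dev-A", "203.0.113.7"),
    ("acct-002", "dev-A", "203.0.113.7"),
    ("acct-003", "dev-A", "198.51.100.9"),
    ("acct-004", "dev-B", "198.51.100.9"),
    ("acct-099", "dev-Z", "192.0.2.44"),
]

# Group accounts by the device and network they were accessed from.
accounts_by_signal = defaultdict(set)
for account, device, ip in events:
    accounts_by_signal[("device", device)].add(account)
    accounts_by_signal[("ip", ip)].add(account)

# Flag any device or IP that touches an unusually large number of accounts.
THRESHOLD = 2
for signal, accounts in accounts_by_signal.items():
    if len(accounts) > THRESHOLD:
        print(f"Review cluster {signal}: {sorted(accounts)}")
```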
2. Account takeover
Why go to the effort of creating a new identity when you can hijack a real one? That’s the premise of account takeover, as employed in the Twitter hack of July 2020. Focusing a spear-phishing attack on a small number of employees, criminals gained access to credentials and visibility of internal processes that ultimately helped them take control of high-profile accounts, including those of US President Joe Biden and American rapper Kanye West. “Fake tweets were sent,” Greg Chapman, chief technology officer at CM Security, explains, “which caught out respondents who engaged, and the attackers moved to steal cryptocurrency.”
Matthew Gracey-McMinn, head of threat research at Netacea, warns that commonly used passwords can also be fed against known email logins using bots. “We have recently seen a streaming service hit by an attacker who tried 300,000 unique username and password combinations during a five-hour attack,” he says. The 0.5 per cent success rate, with 1,500 correct guesses, is a big win. Or it would have been, had the attack not been detected and blocked.
Bot management of this kind is the best way for businesses to detect and stop these threats. For individuals, Gracey-McMinn recommends using a password manager to enable unique passwords for every site and backing this up with a second authentication factor.
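A minimal sketch of that bot-management idea, assuming login attempts can be observed per source IP (the window and threshold here are illustrative, not values from Netacea), is to flag any address that tries an unusually high number of distinct usernames in a short period:

```python
import time
from collections import defaultdict, deque

# A simplified credential-stuffing signal: one source IP trying logins against
# many distinct usernames within a short window. Illustrative values only.
WINDOW_SECONDS = 300
MAX_DISTINCT_USERNAMES = 20

attempts = defaultdict(deque)  # source_ip -> deque of (timestamp, username)

def record_login_attempt(source_ip, username, now=None):
    """Record an attempt and return True if the source IP looks like a stuffing bot."""
    now = time.time() if now is None else now
    window = attempts[source_ip]
    window.append((now, username))
    # Drop attempts that have aged out of the observation window.
    while window and now - window[0][0] > WINDOW_SECONDS:
        window.popleft()
    distinct_users = {user for _, user in window}
    return len(distinct_users) > MAX_DISTINCT_USERNAMES

# Example: a bot cycling through a leaked credential list from one address.
for i in range(25):
    flagged = record_login_attempt("203.0.113.50", f"user{i}@example.com")
print("block this IP?", flagged)
```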
3. Deepfakes
Deepfake technology manipulates video and audio so convincingly that it presents what appears to be a real person. “The criminal underworld is not far off from making deepfake attacks look and sound truly authentic,” warns Ben King, chief security officer, Europe, Middle East and Africa, at Okta. “We must anticipate a surge of attacks as criminals learn to more successfully imitate speech mannerisms using artificial intelligence (AI) layered on top of numerous voice samples from the target.”
Because video and voice are more persuasive than an email or text message, deepfakes can “falsely trigger a person into an action, such as handing over data or transferring funds”, according to Daniel Cohen, chief product officer for anti-fraud at RSA. Indeed, in 2019 it was reported that the chief executive of a UK-based energy company was tricked by deepfake audio of his German parent company boss into transferring almost £200,000 in a sophisticated fraud.
While AI-based tools can help to mitigate the risk, user caution is the most effective countermeasure, says Paolo Passeri, cyberintelligence principal at Netskope. “My advice is to always double check every request,” he continues. “Call the person to whom the money must be sent to verify the request is legitimate for the strongest authentication possible.”
4. SIM swapping
In a smartphone-centric world, SIM swapping is becoming more of a problem. Using a variety of open-source intelligence methods, such as trawling social media posts or corporate site profiles, fraudsters gather enough information to convince your mobile network provider that they are the owner of the account. They then request a SIM swap to seize control of the phone number. This gives them visibility of two-factor codes sent via SMS and, from there, control of the accounts those codes protect.
Kaspersky principal security researcher David Emm points out that Action Fraud found a 400 per cent increase in reports of SIM-swap fraud last year. One couple had £25,000 stolen by an attacker while on holiday and a Californian man reportedly lost $1 million in SIM-swap fraud.
Mobile providers should alert customers by SMS when there is a SIM-swap request and follow Brazil’s lead by flagging the swap to banks and disabling financial transactions for the following 48 hours, Emm adds. John Gilbert, general manager UK and Ireland at Yubico, advises that account takeover attempts can be thwarted with “stronger two-factor authentication, boosting login security beyond just SMS text messages”.
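One widely used step beyond SMS codes is a time-based one-time password (TOTP) generated on the user's own device or authenticator app, so the code never travels over the mobile network and cannot be intercepted via a SIM swap. The sketch below implements the standard RFC 6238 algorithm in plain Python; the secret shown is an illustrative example rather than a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6, at=None):
    """Derive an RFC 6238 time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of whole periods since the Unix epoch.
    counter = int((time.time() if at is None else at) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

secret = "JBSWY3DPEHPK3PXP"  # illustrative base32 secret, not a real credential
print(totp(secret))          # matches an authenticator app seeded with the same secret
```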
5. Replay attacks
A replay attack happens when an attacker sits in the middle of a supposedly secure communication, intercepting the traffic and resending it later, often to conduct financial fraud. An attacker could, for example, fool the victim into completing a transaction to them rather than to the intended recipient.
“In such a case the attacker is able to capture even an encrypted message and send again with the extra payload, with the original encryption still in place,” says Steven Jupp, chief executive at High Impact Office. A protocol designed to protect devices against such an attack, the Replay Protected Memory Block (RPMB), was recently found to have a vulnerability that could allow it to be bypassed. Although there are few readily available mitigation technologies on the market, the consensus solution, Jupp says, is to “utilise time-stamping and random key pairs, which are used just once in a message transaction”.
He also suggests that incorporating unique verified packets into the messages would make it possible to confirm a message came from the correct device, something a resent copy could not do. Blockchain could be of help, but Jupp warns of “inherent risks with utilised public blockchains, plus delay and costs when sending large data sizes”.
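To make the time-stamping and single-use-key idea concrete, here is a minimal Python sketch (illustrative names and thresholds, not a production design): every message carries a fresh nonce, a timestamp and an HMAC, and the receiver rejects anything tampered with, stale or already seen. A real system would also need persistent nonce storage, clock-skew handling and proper key management.

```python
import hashlib
import hmac
import json
import os
import time

SHARED_KEY = os.urandom(32)   # illustrative shared secret between sender and receiver
MAX_AGE_SECONDS = 30          # anything older than this is treated as a replay
seen_nonces = set()           # nonces already accepted; persist this in a real system

def send(payload):
    """Wrap the payload with a single-use nonce, a timestamp and an HMAC."""
    message = {"payload": payload, "nonce": os.urandom(16).hex(), "ts": time.time()}
    body = json.dumps(message, sort_keys=True).encode()
    message["mac"] = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return message

def receive(message):
    """Verify the MAC, then reject stale, tampered or previously seen messages."""
    message = dict(message)  # work on a copy so verification has no side effects
    mac = message.pop("mac")
    body = json.dumps(message, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("message has been tampered with")
    if time.time() - message["ts"] > MAX_AGE_SECONDS:
        raise ValueError("stale timestamp: possible replay")
    if message["nonce"] in seen_nonces:
        raise ValueError("nonce already used: replay detected")
    seen_nonces.add(message["nonce"])
    return message["payload"]

legit = send({"transfer_to": "ACME Ltd", "amount": 950})
print(receive(legit))   # accepted the first time
try:
    receive(legit)      # the identical message captured and replayed
except ValueError as err:
    print("rejected:", err)
```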