In a world of constantly evolving cyber threats, the rise of artificial intelligence is not just the latest in a long line of potentially dangerous escalations. It could prove a massively damaging development in its own right, with AI tools theoretically enabling far more threats to be generated, at greater speed and with greater complexity.
However, the flip side to this is that AI also enables more effective real-time defences to be rolled out. It’s something a number of commercial cybersecurity providers are already making the most of.
So, who benefits most from the rise of AI: the good guys or the bad guys? Here, two cybersecurity experts – City, University of London’s Professor Muttukrishnan Rajarajan, and the Chartered Institute of Information Security’s Amanda Finch – have their say.
The rise of AI enables cyber crime at speed
We’re in the early stages of assessing the impact of AI on cybersecurity, but one thing is clear: AI poses massive problems because it allows attacks to be launched against systems in real time and, once set in motion, continuously and with minimum effort.
Responding to that is something I worry the good guys haven’t grasped yet. The fact is that whatever line of defence might be put in place, AI malware is finding a way around it. Cracking strong passwords, for example, is nothing new in itself; it’s just that what might once have taken months or years may now take days or even minutes for an AI system.
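To put that speed-up in perspective, here is a minimal back-of-the-envelope sketch. It is not taken from the interview: the password length, character set and guess rates are all assumed figures, chosen only to show how the same search space collapses from years to minutes as attack throughput grows.

```python
# Illustrative only: every figure below is an assumption, not data from the article.

def average_crack_time(keyspace: int, guesses_per_second: float) -> float:
    """Average brute-force time in seconds: half the keyspace searched, on average."""
    return (keyspace / 2) / guesses_per_second

# Assume an 8-character password drawn from 62 alphanumeric characters.
keyspace = 62 ** 8

# Assumed guess rates: a modest single machine versus a well-resourced,
# AI-assisted operation that prioritises likely candidates and scales out hardware.
scenarios = {
    "modest single machine (1 million guesses/s)": 1e6,
    "AI-assisted, scaled-out operation (100 billion guesses/s)": 1e11,
}

for label, rate in scenarios.items():
    seconds = average_crack_time(keyspace, rate)
    print(f"{label}: ~{seconds / 86400:.2f} days (~{seconds / 60:.0f} minutes)")
```

The exact numbers are beside the point; what matters is that crack time scales inversely with guessing throughput, so every order-of-magnitude gain on the attacker’s side strips an order of magnitude off the time a password survives.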
AI phishing attacks will reach a new level of sophistication too, not least because AI can create customised phishing emails that people will find hard to distinguish from genuine messages.
And part of the bigger problem is that there are going to be more and more means by which AI malware can find an entry point. As a result of the Internet of Things, for instance, we have ever more smart devices that connect to one another seamlessly. They talk to each other without much input from us. That brings huge conveniences, but such connectivity also opens up huge vulnerabilities.
AI also makes resources a massively important issue. AI is not cheap, so employing it to defend against cyber attacks is going to prove costly. That is something big business may be able to cover, but it likely leaves smaller firms open to attack. And that’s a problem for everyone, because those compromised smaller businesses still leave bigger businesses exposed by the back door, through their supply chains.
In the longer run, quantum computing is going to help with defending against AI-based attacks. We are already seeing some larger organisations and governments using quantum systems. But the widespread commercial use of quantum is some way off. That brings me to this conclusion: if I had to bet right now on whether the good guys or the bad guys are going to win the early stages of this AI ‘war’, I’d have to put my money on the bad guys.
AI will help the good guys to stay good
There are, of course, valid reasons for concern about the advent of artificial intelligence in the cybersecurity world. In some regards, our problems will get bigger. But I think there are many reasons to regard this as a boon too.
For one, AI will usher in a whole new set of technologies which, by enabling increased automation, will do away with a lot of the repetitive tasks that security teams currently have to perform by hand.
That automation will also bring a much greater level of monitoring – continuous and global, but also deeper – giving us the ability to spot suspicious patterns that are currently much trickier to identify. Our defences are simply going to be that much more sophisticated, and vulnerabilities are going to come to light that much faster. At the moment, for instance, we have to deal with a lot of false positives, but eventually deep learning tools will help to reduce them.
I think the arrival of AI will encourage security professionals to think differently, too. It’s one thing to introduce new technology, but ultimately it’s about finding innovations in how that technology is harnessed. AI will lead to a flurry of start-ups, founded by those who see unrealised potential in applying it to cybersecurity, or who see the need to protect smaller businesses and organisations that lack the resources to defend themselves against AI-based attacks. That will, in turn, be good for the economy. We are already seeing some amazing firms emerging in the UK.
Can AI bring enhanced levels of compliance to the cybersecurity profession? That’s a tricky question. Obviously, the bad guys don’t follow the rules, but AI will help the good guys to stay good. It will encourage more widespread adoption of best practice guidelines, and that for me is more important than implementing further laws and regulations.
Of course, we can’t pretend that the implementation of AI in cybersecurity isn’t another big escalation in the arms race between the defenders and attackers of cyberspace. But there never really was any end to that arms race in sight. Cyber changes every year, and AI is just the latest thing.
That may sound casual, but perhaps there’s even a positive in that: it has certainly put cybersecurity on the map. When I started out, people would look blankly at you if you used the word ‘firewall’. Now, in part because of this conversation around AI, most people have at least a basic understanding of the need for cybersecurity. The problem may be bigger, yes, but we’re all that much more savvy about it, too. Now we just have to get on with things and deal with it.
As told to Josh Sims