Artificial intelligence has opened up a new frontline in cybersecurity, with the technology being deployed to both attack and defend corporate operations.
But while much discussion has focused on the ability of AI to fuel attacks and bolster defences, there is the potential for AI systems themselves to become a chink in the cyber armour of UK organisations.
One in six businesses in the UK has deployed at least one AI application in their operations, according to research commissioned by the Department for Culture, Media and Sport. Such applications have unique security requirements. And, neglecting these could open the gates to bad actors intent on causing damage.
How hackers are attacking your AI tools
Organisations most commonly use AI in customer-service chatbots, which are underpinned by large language models (LLM) that generate humanised responses when prompted. According to Kevin Breen, director of cyber threat research at Immersive Labs, these models are particularly vulnerable to cyber attacks.
“Prompt injection is currently the most common form of attack observed against LLMs,” he explains. “The focus is on tricking the model into revealing its underlying instructions or to trick the model into generating content it should not be allowed to create.”
Another potential weakness stems from AI’s inability to access data and information that is more current than the system’s most recent training update. To counter this limitation, LLMs have an added capability to incorporate functions into the AI context through a process known as function calling.
Breen explains that accessing up-to-date weather information is a common example of such an operation. “Asking an application what the weather is like in London, for instance, will prompt the AI to tell the application what function to use and what data to send. As these functions are sent to the AI, they become part of the context.”
Malicious users can modify the context with a prompt injection and force the AI to list all of its functions, signatures and parameters, warns Breen. “If developers aren’t properly sanitising these results, this can lead to attacks like SQL injection or even code execution, if some functions are able to run code.”
And, since LLMs are used to pass data to third-party applications and services, the UK’s National Cyber Security Centre has warned that malicious prompt injection will become a growing source of risk in the near term.
For this reason, any business training LLMs on sensitive data such as customer records or financial information must be especially vigilant, explains Dr Peter Garraghan, a professor of computer science at Lancaster University, as well as chief executive and chief technology officer at Mindgard AI. He adds that the risks of improperly secured AI extend beyond data leakage.
“Malicious actors can potentially exploit vulnerabilities to manipulate model outputs, leading to incorrect decisions or biased results. This could have severe consequences in high-stakes applications like credit scoring, medical diagnosis or content moderation.”
The evolving threat landscape
Understanding the potential attack surface is essential. Generative AI has unique characteristics that exacerbate security challenges, according to Herain Oberoi, general manager, data and AI security, governance, compliance and privacy at Microsoft.
“Its high connectivity to data makes data security and governance more challenging than ever and its use of natural language means that the technical barrier for bad actors is lower, as a simple sentence can be used to attack AI applications. Plus, its non-deterministic nature makes it susceptible to manipulation.”
Security teams must therefore ensure that their existing cybersecurity frameworks and risk management processes are extended to cover AI systems.
“Firms should include AI assets in asset inventories, data flow diagrams, threat models, red teaming, pen testing, incident response playbooks and so on,” explains Garraghan. “In one sense, AI is just another software tool and incorporates a lot of standard IT thinking. But it also has very significant differences and requires specialist skills and tools to secure properly.”
Decision-makers and security-operations teams must also treat AI security as a continuous task. Garraghan says this starts with an organisational culture that emphasises the importance of responsible and secure AI development.
“This means establishing clear policies around data handling, model testing and deployment approvals. It also means training everyone interacting with AI, from data scientists to business users, on the risks and best practices. AI security is a highly dynamic field, so continuous education is essential.”
How protected is your AI?
While ensuring effective security measures may seem like a daunting task, the good news is that some AI vulnerabilities might already be covered, according to Liam Mayron, staff product manager for security products at Fastly.
“Teams should realise that they’re probably not starting from zero. Some of the security tools that are already in place can help to monitor newly deployed LLMs and AI tools from an application-security perspective, even if they’re not built for it,” Mayron adds. “The key is to ensure and verify that these existing security tools have visibility into your AI applications.”
As these applications continue to proliferate, proactively reviewing the security frameworks that protect them will go a long way towards safeguarding business operations in the future.