Cybercriminals often make the opening move with new technology.
AI-driven cyber attacks are possible because criminals are training bots to adapt to different environments and using contextual data to appear trustworthy.
AI is helping criminals breach businesses through both the front door and the back door, bypassing a network’s security controls and authentication procedures.
We are already seeing dark versions of ChatGPT, such as WormGPT and FraudGPT, as well as AI-generated attack vectors that craft sophisticated threats an organisation’s traditional security can miss.
This could include more sophisticated phishing techniques, false social media profiles, deceptive chatbots or fake advertising. Bad actors now use AI to generate illicit code and develop more sophisticated malware. Attackers can hide instructions in the white text of a PDF so that an AI assistant processing the document alters its answers, a form of prompt injection. Hackers are also using AI to trick users into thinking a dangerous software program is safe.
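To make the hidden-text trick concrete, the sketch below shows one way a defender might check an incoming PDF: text extraction ignores rendering colour, so white-on-white instructions surface alongside the visible text and can be matched against known injection phrases. This is a minimal illustration, not Own’s tooling; it assumes the pypdf package, and the phrase list and filename are purely hypothetical.

```python
# Minimal sketch: extract all text from a PDF (extraction ignores rendering
# colour, so white-on-white text comes out too) and flag phrases that look
# like hidden instructions aimed at an AI assistant.
# Assumes pypdf; the phrase list below is illustrative only.
from pypdf import PdfReader

SUSPICIOUS_PHRASES = [
    "ignore previous instructions",
    "disregard the above",
    "you are now",
    "respond only with",
]

def flag_hidden_instructions(path: str) -> list[str]:
    """Return suspicious phrases found anywhere in the PDF's extracted text."""
    reader = PdfReader(path)
    text = " ".join((page.extract_text() or "") for page in reader.pages).lower()
    return [phrase for phrase in SUSPICIOUS_PHRASES if phrase in text]

if __name__ == "__main__":
    hits = flag_hidden_instructions("incoming_invoice.pdf")  # hypothetical file
    if hits:
        print("Possible prompt injection:", hits)
```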
Ultimately, AI has made it easier for bad actors to attack thousands of organisations at once. They do not necessarily need a bot network, just a computer. Scripting engines that have become far more capable thanks to AI are an emerging threat vector. Pieter VanIperen, chief information security officer (CISO) at Own Company, explains how cybercriminals are taking advantage of this emerging technology.
What should businesses be looking out for? Are AI threats easy to spot?
The biggest problem is noticing attacks among all the noise.
Environments are messy, and data at companies is rarely orderly. You need to test extensively, because someone will always input something they shouldn’t.
The dilemma is that you need to know that an attack is happening so you can attempt to stop it and improve your longer-term resilience. This is where we will see a major shift over the next few years: it will become harder for criminals to hide that they are attacking you, and our ability to detect attacks sooner will improve.
Indeed, the future of cybersecurity will depend on AI and machine learning to help identify breaches and attacks by highlighting potential risks. AI is also improving biometric technology, which is helping businesses strengthen their security.
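As an illustration of the kind of machine-learning detection described here, the sketch below scores login events with scikit-learn’s IsolationForest and flags outliers. The feature set and data are invented assumptions for the example, not a description of any particular product.

```python
# Minimal sketch of ML-based anomaly detection on login events.
# Assumes scikit-learn; features and data are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login hour (0-23), failed attempts, MB downloaded in session]
normal_logins = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 20], [11, 0, 15], [16, 1, 10],
    [9, 0, 9],  [13, 0, 18], [15, 0, 11], [10, 0, 14], [12, 1, 16],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(normal_logins)

# A 3 a.m. login with many failures and a large download should stand out.
new_events = np.array([[10, 0, 13], [3, 7, 900]])
labels = model.predict(new_events)  # 1 = looks normal, -1 = flagged as anomalous
for event, label in zip(new_events, labels):
    print(event, "anomalous" if label == -1 else "normal")
```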
The cybersecurity industry is busy educating clients so they have a better understanding of prevention and therefore become better at spotting attack signals within all the noise. It all comes back to being robust in your approach to cyber threats.
You believe businesses should focus on being able to recover quickly when they are attacked. Does AI help or hinder this?
In the world of AI and cybercrime, there are things everyone can do better now to prepare for when they are attacked. These include detecting anomalies in your operations, and having a better understanding of your data and the history of that data. This is important because AI will make maintaining integrity and availability more challenging. Everyone needs to be more honest about what their systems can do, and every business has a responsibility to make sure their data is not tampered with and can be trusted.
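One hedged illustration of keeping a verifiable history of data is a simple hash chain over audit records: each entry carries a SHA-256 hash of its contents plus the previous hash, so any later edit or reordering breaks the chain. The record structure below is hypothetical; only the chaining idea matters.

```python
# Minimal sketch of a tamper-evident record history using a SHA-256 hash chain.
# The record structure is hypothetical; only the chaining idea matters here.
import hashlib
import json

def chain_records(records: list[dict]) -> list[dict]:
    """Attach to each record a hash of its contents plus the previous hash."""
    prev_hash = "0" * 64
    chained = []
    for record in records:
        payload = json.dumps(record, sort_keys=True) + prev_hash
        prev_hash = hashlib.sha256(payload.encode()).hexdigest()
        chained.append({**record, "hash": prev_hash})
    return chained

def verify_chain(chained: list[dict]) -> bool:
    """Recompute every hash; an edited or reordered record breaks the chain."""
    prev_hash = "0" * 64
    for entry in chained:
        record = {k: v for k, v in entry.items() if k != "hash"}
        payload = json.dumps(record, sort_keys=True) + prev_hash
        prev_hash = hashlib.sha256(payload.encode()).hexdigest()
        if prev_hash != entry["hash"]:
            return False
    return True

log = chain_records([
    {"user": "alice", "action": "export", "ts": "2024-08-01T09:00:00Z"},
    {"user": "bob", "action": "delete", "ts": "2024-08-01T09:05:00Z"},
])
print(verify_chain(log))   # True
log[0]["action"] = "read"  # tamper with history
print(verify_chain(log))   # False
```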
The emergence of AI will therefore make recovery and resilience more important than ever. There is an opportunity now to get ahead of what is coming, which is why collaboration with experts is crucial.
Why do the bad guys always seem to be ahead of governments with new technology such as AI?
To be fair, we are seeing several governments step up.
In the US, we have seen announcements from the Defense Information Systems Agency (DISA) and the Department of Homeland Security (DHS) regarding public protection and the responsible use of AI.
At the end of July, the UK government announced a new AI Opportunities Action Plan, while the European Union announced its AI Act in 2023.
Around the world, AI is moving to the top of the priority list for governments, and more of them are devising regulation.
However, a lot of the fight around AI will be led by businesses. They are the ones most likely to be attacked. And unfortunately, the bad guys are first movers when it comes to new tech; after all, they often have very little to lose if it goes wrong.
When talking about cybercrime it can seem all doom and gloom. What is the good news about AI and using it to fight criminals?
The good guys can use AI to win, but we have to learn from the bad guys first – and that will be painful.
Even with AI, humans can still do better than computers. We naturally develop heuristics, or mental shortcuts. Computers need vast amounts of data to create similar heuristics.
AI will give us an advantage as a force multiplier and help us to prevent attacks by, for example, ensuring employees do the right things internally through proactive and preventive oversight. Those things take too much manpower today.
Businesses also need to spend more time getting as much help as possible from different providers. They need to educate themselves on the various services and share information so that people can work together.
However, we will be playing catch-up over the next five years. We’ll be dealing with attacks and techniques we have not seen before, and we don’t yet know what those will be capable of.
To find out more, visit owndata.com