4.1

Cybersecurity

Illo4 Risk Reg

Probably the most fundamental cybersecurity concern when it comes to AI is whether the technology will ultimately be a force for good or for evil; whether it will be more useful to cyber attackers or cyber defenders.

Digital defenders are, of course, deploying AI to defend against cyber attacks. But just as firms are using GenAI to enhance productivity, so too are hackers. Attacks once required considerable resources – perpetrators had to identify high-value targets, study patterns of communication and research company documents, for instance. But machines can now complete this prep work in a fraction of the time.

How to detect a deepfake

AI is both changing the scale at which traditional attacks can be launched and leading to the emergence of new threats.

For instance, the use of deepfakes – false, AI-generated images or videos of real people – is on the rise. A 2024 Ofcom report found that 60% of people in the UK have encountered at least one deepfake. By 2026, 30% of organisations will consider their current authentication or digital ID tooling inadequate to fight deepfakes, according to Gartner, a research consultancy.

This is bad news for businesses,  which are already being targeted in customised phishing attacks that use the technology. Examples of successful deepfake attacks have made headlines. An employee in Hong Kong, for instance, transferred about £20m to cyber attackers after being bamboozled by a deepfake posing as a senior executive.

Is AI a blessing or a curse for cybersecurity?

Many experts worry that AI could be a massively damaging development in cybersecurity. With AI tools, malicious actors can engineer attacks much faster than before

Expand Close

Deepfakes once came with tell-tale signs that users were speaking with a digital impostor – say, glitching speech or a nose floating uncannily out of place. But as the technology improves, deepfakes are becoming nearly impossible to spot. So says Dr Andrew Newell, chief scientific officer at iProov, a digital authentication firm.

Marco Pereira, global head of cybersecurity at Capgemini, agrees. “If you have someone on a video call that looks like the CEO, sounds like the CEO, has the right background – all it takes to fool you is them saying, ‘Oh, my camera is not working well’,” he explains.

Some cybersecurity experts note that there are still some tell-tale signs to look out for – although these might not exist for much longer.

Simon Newman is CEO of the Cyber Resilience Centre for London, a government-funded not-for-profit body helping businesses and charities to improve their defences. He advises looking for details on the face of the person that appear unnatural – perhaps unusual lip colours, facial expressions, strange shadows or blurring inside the mouth.

Ask yourself: Do the lips appear to be moving as they would with the matching words? Do the surrounding facial expressions look natural? 

Still, detecting the technical flaws in deepfakes will become increasingly difficult as the tools continue to develop. Maintaining high contextual alertness may therefore be the most effective way to counter the risk. 

Instead of focusing on people’s features, consider instead the stated purpose of the call and how the participants are interacting. Does anything seem out of context?

As deepfakes become more sophisticated, then, spotting them may become less about trusting your eyes, and more about trusting your gut.

New versions of old attack methods

Democratised AI has also led to the creation of new, ‘smarter’ viruses and worms. An example is the Morris II computer worm, which uses GenAI to clone itself. Researchers at Intuit, Cornell Tech and the Technion Israel Institute of Technology conducted an experiment in which they used Morris II to break the defences of GenAI-powered email assistants using so-called poison prompts. 

Emails stuffed with these prompts caused the assistants to comply with their commands, compelling the bots to send spam to other recipients and exfiltrate personal data from their targets. They then cloned themselves to other AI assistant clients, which mounted similar attacks.

The researchers hope this proof-of-concept worm will serve as a warning that might prevent the appearance of similar species in the wild. They have alerted the developers of the three GenAI models they’d successfully targeted, which are working to patch the flaws exposed by Morris II. 

Attackers are also using AI to supercharge traditional threats – using ChatGPT to create more bespoke, targeted and grammatically correct phishing emails, for instance.

Moreover, GenAI could help criminals to make sense of metadata – the data about data. The content of a text message is data. Metadata includes information such as when the message was sent, where it was sent from, who sent it and to whom.

One piece of metadata on its own is pretty much worthless. But when volumes of metadata are analysed by machines, patterns emerge that are sometimes more revealing than the contents of the messages alone.

Christine Gadsby, chief information security officer at BlackBerry, says that because metadata is the language of machines, computing tools are very good at gathering and making sense of it. “AI will enable attackers to link this data to individuals,” she warns. “What would have taken a human two years to analyse will take two minutes with AI.”

But it’s not only smarter threats that security leaders must worry about. AI systems themselves can also present significant risks.

Securing data inputs

Foundation models, the bedrock of GenAI, are data-hungry. If businesses want to differentiate themselves, they must feed these models with proprietary information, including customer and corporate data. But doing so can expose this sensitive material to the outside world – and the bad actors operating in it – potentially contravening the General Data Protection Regulation (GDPR) in the process.

Dr Sharon Richardson, technical director and AI lead at engineering firm Hoare Lea, sums up the situation: “From day one, these models were a very different beast from a security standpoint. It’s hard to bake security into the neural network itself because its strength comes from hoovering up millions of documents. This is not a problem we’ve solved.

”The Open Worldwide Application Security Project, a not-for-profit foundation working to improve cybersecurity, cites data leakage as one of the most significant threats to the LLMs on which most GenAI tech is based. This risk drew considerable public attention in 2023 when employees at Samsung accidentally released sensitive corporate information via ChatGPT.

What is shadow AI?

Employees’ use of unvetted GenAI tools in the workplace can be a nightmare for data security

Expand Close

The task of safeguarding the data being used takes on a new meaning with the latest GenAI tools, since it’s hard to control how the information is processed. Training data can get exposed as these systems work to organise unstructured material. It’s why some businesses are focusing their efforts on securing inputs. Swiss menswear company TBô, for instance, carefully labels and anonymises information on customers before feeding this into its model. 

Smart organisations are taking a multi-pronged approach to managing the risk. One measure is permissions-based access for specific GenAI tools, under which only certain people are authorised to view classified data outputs. Another control is differential privacy, a statistical technique that allows the sharing of aggregated data while protecting individual privacy. And then there is the feeding of pseudonymised, encrypted or synthetic data into models, with tools that can randomise data sets effectively.

Data minimisation is vital, stresses Pete Ansell, chief technology officer at IT consultancy Privacy Culture. 

“Never push more data into the large language model than you need to,” he advises. “If you don’t have really mature data-management processes, you won’t know what you’re sending to the model.”

Understanding the attack surface that an LLM might expose is also important, which is why retrieval-augmented generation (RAG) is growing in popularity. This is a process in which LLMs reference authoritative data that sits outside the training sources before generating a response.

RAG users don’t share vast amounts of raw data with the model itself. Access is via a secure vector database – a specialised storage system for multi-dimensional data. A RAG system will retrieve sensitive information only when it’s relevant to a query; it won’t hoover up countless data points.

“RAG is really good from the perspectives of both data security and intellectual property protection, since the business retains the data and the library of information the LLM is referencing,” Ansell says.

But he adds that “best practice around identifiable personal information and cybersecurity should also apply to business-level data”.

Such techniques don’t just protect sensitive material from cybercriminals. They also enable businesses to lift and shift learning from one LLM to another since, in practice, it’s not possible to trace the data back to its original source. 

Businesses can also improve their AI-related data security by creating a multi-disciplinary steering group, conducting impact assessments, providing AI awareness training and keeping humans in the loop on all aspects of model development. 

One of the biggest challenges facing the sector is that sensitive corporate data still has to leave localised servers and be processed in the cloud at data centres owned by one of the tech giants, which control most of the popular AI tools. 

“For a brief moment, data can be sitting on a server outside your control, which is a potential security breach,” Richardson says.

Open-source models are therefore becoming increasingly popular, as they enable IT teams to externally audit LLMs, spot security flaws and have them rectified by a developer community.

Bharat Mistry, field CTO at Trend Micro, an IT security company, says cyber attackers will soon begin targeting AI models themselves, if they are not already doing so. 

For example, cybercriminals could infiltrate an organisation and corrupt its AI systems with dodgy data. After a brief period of havoc, the criminals would inform the organisation that they were responsible for the attack and demand a ransom to restore operations.

An over-reliance on AI could exacerbate the impact of such an attack, Mistry says. Even with powerful ransomware attacks, businesses were able to make last-ditch, paper-based contingency plans to stay operational. But operating on analogue, even temporarily, will be almost impossible as organisations become dependent on AI.

Attackers could also add an ‘extra layer’ to GenAI tools, enabling them access to all of the data entered into the system. In this case, the model would appear to operate normally; users would have no reason to distrust the tool and might upload all sorts of confidential information. But if a malicious actor has added a ‘man-in-the-middle’ on the user’s device, all the data fed into it will pass into the hands of the attacker. Employees working remotely are especially vulnerable to this type of breach.

AI’s inherent vulnerabilities

But how would an AI system be corrupted in the first place? “Prompt injection is currently the most common form of attack observed against LLMs,” explains Kevin Breen, director of cyber threat research at Immersive Labs. “The focus is on tricking the model into revealing its underlying instructions or to trick the model into generating content it should not be allowed to create.”

Another potential weakness stems from AI’s inability to access data and information that is more current than the system’s most recent training update. To counter this limitation, LLMs have an added capability to incorporate functions into the AI context through a process known as function calling.

What’s a prompt injection?

These new attack models threaten to turn AI’s capabilities against itself

Expand Close

Breen explains that accessing up-to-date weather information is a common example of such an operation. “Asking an application what the weather is like in London, for instance, will prompt the AI to tell the application what function to use and what data to send. As these functions are sent to the AI, they become part of the context.”

Malicious users can modify the context with a prompt injection and force the AI to list all of its functions, signatures and parameters, warns Breen. “If developers aren’t properly sanitising these results, this can lead to attacks like SQL injection or even code execution, if some functions are able to run code.”

And, since LLMs are used to pass data to third-party applications and services, the UK’s National Cyber Security Centre has warned that malicious prompt injection will become a greater source of risk in the near term. 

For this reason, any business training LLMs on sensitive data such as customer records or financial information must be especially vigilant, explains Dr Peter Garraghan, a professor of computer science at Lancaster University. He adds that the risks of improperly secured AI extend beyond data leakage. 

“Malicious actors can potentially exploit vulnerabilities to manipulate model outputs, leading to incorrect decisions or biased results. This could have severe consequences in high-stakes applications like credit scoring or content moderation.”

4.2

Data transparency and accountability

To achieve success with GenAI, businesses must ensure that the data feeding these systems are reliable.

“There’s a kind of black-box thinking around AI at the moment,” says Rachel Aldighieri, managing director of the Data and Marketing Association (DMA). “It’s really important to unpack how AI works: it’s not algorithms that are necessarily causing issues around data privacy and ethics, it’s the data practises companies are using.”

There are many different sources of data that organisations can use to train their AI models. The simplest is a company’s own internal data, collected from surveys, data-capture forms or customer relationship management (CRM) systems. Firms might have a wealth of internal data but they need to be careful about what they use and how. 

Clarity upon collection

How the data will be used must be made clear at the point of collection. If a company plans to use a customer’s data to train its AI, for example, the customer must be made aware of this before handing over their data. The process begins with obtaining explicit consent, and the opt-in prompt should be easy to understand and completely transparent. 

To comply with data protection regulations, businesses should assess risk using legitimate interest tests and data-balancing assessments to ensure they’ve got those permissions.

What is AI washing?

Transparency in AI is about more than just data collection. Firms must also be clear about the extent to which they use the technology

Expand Close

According to Aldighieri, explaining how customers’ data will be used in a straightforward way is a challenge. Being transparent and accountable upfront, and having a clearly defined ethical framework, helps to create an auditable trail for data and its permissions. She advises organisations to check the DMA Code for guidance on creating a principles-led framework. “If you’re unsure where data has come from, don’t feed it into an algorithm,” she says. “That has to be the bottom line.”

If you don’t have access to data internally, you have three options: use open-source data, buy it from elsewhere or generate it synthetically. 

Thanks to the open-data movement, there is now a wealth of reliable data available for free, from census reporting to travel data. The data is aggregated and anonymised from the start, so no personally identifiable information is exposed. 

For greater granularity, organisations can buy data, but this has its drawbacks too. Data sellers must be thoroughly audited and buyers are responsible for ensuring they work with reputable brokers.

It’s also important to distinguish between what’s ethical and what’s legal. “Technically what Cambridge Analytica did was legal, but most would agree it wasn’t ethical,” says Chris Stephenson, CTO at Sagacity, a data consultancy. 

Organisations should audit brokers by classifying providers based on their reputation and specialism. Government organisations, academics and established commercial vendors are usually the more reputable sources. 

Synthetic data

Then there is synthetic data, which is generated by AI and intended to mimic real-world data. Because this data is not connected to real people, there is no risk of a privacy violation. It can be cheaper too, as businesses won’t need to embark on massive data-collection campaigns or purchase data licences from third parties.

Tens of thousands of data scientists are already using the synthetic data vault – an open-source library created by MIT spinout DataCebo to generate synthetic tabular data. The company claims that as many as 90% of enterprise data use cases could be achieved with synthetic data.

Regardless of how and where companies source their data, bias will always be an issue. This could be bias that is “introduced programmatically, or bias in the sample sourcing or just the inbuilt biases of the societies we live in,” Stephenson says. 

It is therefore up to whoever is training the model to understand what and where the bias might be – and to take the appropriate steps to address it. Aldighieri suggests monitoring datasets to ensure that only properly permissioned data flows into algorithms. 

An essential part of reducing bias is ensuring that the teams working on artificial intelligence are diverse and representative. They must also have an understanding of how to recognise, unpick and challenge bias in automated decision-making, says Aldighieri. 

Once data collection processes are in place, organisations must continuously monitor their systems to ensure compliance with legal standards as well as their own ethical frameworks. Finally, firms should establish clear processes for removing content when requested.

4.3

Algorithmic bias and accuracy

Any AI system, however sophisticated, is only as good as the data on which it is trained. Any bias in its outputs will result from distortions in the material that humans have gathered and fed into the algorithm. While such biases are unintentional, the AI sector has a predominantly white male workforce creating products that will inevitably reflect that demographic group’s particular prejudices. 

Facial recognition systems, for instance, could be inadvertently trained to recognise white people more easily than Black people because physical data on the former tends to be used more often in the training process. The results can put demographic groups that have traditionally faced discrimination at even more of a disadvantage, heightening barriers to diversity, equity and inclusivity in areas ranging from recruitment to healthcare provision.

The good news is that the problem has been widely acknowledged in business, academia and government, and efforts are being made to make AI more open, accessible and balanced. There is also a new ethical focus in the tech industry, with giants including Google and Microsoft establishing principles for system development and deployment that often feature commitments to improving inclusivity. 

Nonetheless, some observers argue that very little coordinated progress has been achieved on establishing AI ethical norms, particularly in relation to diversity. 

Known to discriminate

Recent breakthroughs in the field of generative AI have also done little to address concerns about discrimination. In fact, there’s a risk that the potential harms of generative systems have been forgotten in all the media hype surrounding the power of OpenAI’s ChatGPT chatbot and its ilk, argues Will Williams, vice-president of machine learning at Speechmatics. 

What is an AI hallucination?

Biases aside, there is a peculiar quirk in AI systems that has serious implications for its reliability. Sometimes, it appears, these systems simply make things up.

Expand Close

Williams says: “The truth is that the inherent bias in models such as ChatGPT, Google’s Bard and Anthropic’s Claude means that they cannot be deployed in any business where accuracy and trust matter. In reality, the commercial applications for these new technologies are few and far between.”

The race to produce a winner in the generative stakes has given new urgency to addressing bias and highlighting the importance of responsible AI.

Emer Dolan is president of enterprise internationalisation at RWS Group, a provider of technology-enabled language services. She says that, while the detection and removal of bias is “not a perfect science, many companies are tackling this challenge using an iterative process of sourcing targeted data to address specific biases over time. As an industry, it’s our duty to educate people about how their data is being used to train generative AI. The responsibility lies not only with the firms that build the models but also with those that supply the data on which they’re trained.”

Can biases be overcome?

Some organisations are experimenting with synthetic data to mitigate algorithmic biases. By using synthetic data, it is theoretically possible for AI developers to generate, for instance, an endless number of faces of people of different ethnicities to train its models, meaning that gaps in the AI’s understanding are less likely. 

Steve Harris is CEO of Mindtech Global, a UK-based synthetic data startup. He claims that some of his customers use the firm’s services to generate diverse data from scratch, while others use it to address the lack of diversity in their existing real-world datasets.

But there are limitations to using computer-generated images to train AI for real-world applications. “Synthetic data almost never gives the same results as a comparable amount of real data,” says Marek Rei, a professor in machine learning at Imperial College London. “We normally have to make some assumptions and simplifications to model the data-generation process. Unfortunately, this also means losing a lot of the nuances and intricacies present in the real data.”

Plus, synthetic data relies on the people generating the data to use such platforms responsibly. Rei adds: “Any biases that are present in the data-generation process – whether intentionally or unintentionally – will be picked up by models trained on it.”

A study by researchers at Arizona State University showed that when trained on predominantly white, male images of engineering professors, its generative model amplified the biases in the dataset, meaning that it produced images of minority modes less frequently. Even worse, the AI began “lightening the skin colour of non-white faces and transforming female facial features to be masculine” when generating new faces.

With synthetic data programmes giving developers access to unlimited amounts of data, this has the potential to drastically exacerbate the issue of bias if errors are made at any point in the generation process.

So although synthetic data can make the process of creating AI models quicker, cheaper and easier for programmers, it will not necessarily solve the problem of algorithmic bias.

4.4

Job displacement and workforce disruption

Sarah Franklin, CEO of Lattice, an HR tech company, was at the centre of a controversy in July 2024. In a post on LinkedIn, Franklin explained that the company would begin to “employ” AI workers. AI assistants, the post continued, would be assigned managers and onboarded, similar to human employees.

Franklin maintains that the controversy is really just a misunderstanding. But the implicit comparison of humans and AI hit a nerve with many people, who are understandably sensitive to suggestions that AI will lead to role replacements en masse.

‘What will we do about the economy, jobs and ethics?’

An interview with Professor Erik Brynjolfsson

Expand Close

It is widely acknowledged that AI presents an existential threat to the labour market. Mass job displacement is not inevitable – some maintain that careful deployment of the technology will help to mitigate large-scale workforce disruption. But the danger of role replacement is very real for workers of all kinds, irrespective of industry, seniority or function.

For instance, IBM’s chairman and CEO, Arvind Krishna, expects that about 7,800 jobs at the company could be replaced by generative AI in the medium term. Meanwhile, BT Group’s leadership team has been open about its plan to slash the firm’s headcount by 55,000, using AI to automate up to 10,000 jobs in seven years.

Still, most senior leaders claim that AI implementation will not necessarily lead to large-scale redundancies. Some even believe the technology can enable a more fulfilling work experience for human employees.

One of the keys to mitigating workforce disruption is to provide company-wide AI training. See section three of this guide for more information on employee AI training.

So if AI is used to automate significant chunks of work, what will be left for humans to do?

Fulfilling work? More or less

The standard answer is that AI will handle the repetitive, low-value tasks that humans don’t want to do anyway. People, therefore, will be able to devote more time to creative, value-adding work – the kind only humans can do, so it’s said.

But the reality may not be so simple. Milena Nikolova, a professor in the economics of wellbeing at the University of Groningen, has looked at data on thousands of workers across 20 European countries over two decades. Surprisingly, she found that automation, at least in industrial workplaces, actually increased repetitive and monotonous tasks for humans. Human work became more routine, not less.

Nikolova’s research found that robotisation made work more intense, focusing on a dwindling set of tasks that machines could not easily accomplish. These tasks were also less interesting, with fewer opportunities for cognitively challenging work and human contact. 

Workers also became more reliant on a machine’s pace of work and had a more limited understanding of the full production process. The overall result was a decline in meaningful work and autonomy.

Nikolova explains: “Will automation and AI create more meaningless jobs? This new wave of automation, including AI, is very different. It has the potential to affect highly skilled, highly educated and highly paid knowledge workers. This is something we’ve not seen before.”

Understanding the technology’s mission creep on the tasks and workplace values that humans hold dear is essential. For instance, AI can increase the intensity of work and put pressure on “humans in the loop” as they try to keep pace with algorithms. 

Moreover, AI-powered digital systems can impinge on worker autonomy, embed employee surveillance, erode employee competencies without adequate retraining and degrade employee socialisation and human interactions. 

To address these cultural risks, employers should clearly communicate their strategy for AI and explain how exactly the technology will be used to assist rather than replace human workers.

Lisa Thomson is an HR consultant to early-stage and high-growth companies. She says the secret to success with an AI project is for employees to feel involved. “You can’t over-communicate. You need to get people on board with you and get them to put forward suggestions,” she says.

Giving employees agency, Thomson says, will help companies to bridge the gap between the executive vision of what the technology offers and how its use affects the day-to-day lives of its workforce. Getting employees involved in testing and experimenting also creates a learning environment. 

Hayfa Mohdzaini is a senior research adviser in data, technology and AI at the Chartered Institute of Personnel and Development (CIPD). She agrees that employees must be involved in the planning and implementation of AI as much as possible. Any business leader sensing resistance in the workforce, she adds, should carefully explain how employees will benefit from the AI tools.

This must be done sensitively, Mohdzaini stresses. Remember that change will always make people feel uncomfortable. Don’t allow rumours about redundancy to swirl. Instead, have “honest two-way conversations with staff as early as possible and give them opportunities to shape how the change will affect them,” she suggests. “While you may not be able to implement all of their suggestions, you can acknowledge these and show which ones you’re planning to implement now or later. Keep your communication channels open.”

4.5

Regulatory uncertainty

Ensuring AI systems comply with current and future regulations is a serious challenge that can be easily overlooked. That’s hardly surprising considering regulations governing the use of AI are, at present, somewhat experimental. 

Although certain processes involved in the use of AI, such as data collection, are covered by existing regulations, most national legislatures have not yet introduced a unified set of AI regulations. Some, however, have made a start.

The EU AI Act

Perhaps unsurprisingly, the first legislative body to take this step was the European Parliament. In March 2023, EU legislators signed off on the EU Artificial Intelligence Act – a set of statutory standards governing the use of AI. 

Neil Thacker, CISO at Netskope, a cybersecurity firm, says one of the main objectives of the legislation is to “strike the right balance of enabling innovation while respecting ethical principles”.

The legislation will apply to any system that touches, or otherwise interacts with, consumers in the EU. That means it could have a broad extraterritorial impact. A British company using AI to analyse data that’s then sent to a European client, for instance, would be covered by the legislation. 

Spot Illo4 Risk Reg

“The act is wide-ranging, trying to provide guidance and protection across the multitude of areas that AI will affect in the coming years,” Thacker says.

Naturally, most business leaders will wonder how this legislation will impact their compliance burden. For many, the EU’s previous big statutory intervention – the General Data Protection Regulation – has cast a long shadow since taking effect in 2018. Remembering the paperwork this required and the many changes they had to make to ensure compliance, they’re understandably worried that the new legislation could impose similar bureaucratic burdens, which might prove costly.

Fear not, says Michael Veale, associate professor at University College London’s faculty of laws, who has been poring over the act’s small print. 

Many of its provisions are “quite straightforward and imaginable”, he says. These include “making sure that your system is secure and not biased in ways that are undesirable, and that any human overseeing it can do so appropriately and robustly”. 

In theory, such requirements shouldn’t be too taxing, Veale says. 

“They echo a variety of the very basic demands on AI systems in recent years,” he explains. “While it may be difficult to interpret them in every single context, they aren’t particularly onerous or revolutionary.”

Risky businesses

Companies specialising in areas that the legislation deems “high risk” will need to be particularly attentive to its terms. Most applications identified as high risk by the act are those that public sector organisations would use for purposes such as education, the management of critical infrastructure or the allocation of emergency services.

Any UK firm selling AI products for such purposes would need to register these in a centralised database and undergo the same certification process that applies to any EU counterpart. 

As regulations continue to develop across various jurisdictions, the best way for businesses to extract the most value from AI now while staying mindful of the future compliance risks is to focus their strategy on three qualities: flexibility, scalability and effective governance. So says Caroline Carruthers, co-founder and CEO of Carruthers and Jackson, a consultancy specialising in data management. 

The flexibility element, she explains, means having the means to both “take advantage of new opportunities afforded by advances in AI” and adjust quickly to any new regulatory requirement. 

When it comes to scalability, Carruthers notes that “some tech teams can build fantastic new tools but aren’t able to scale these up, meaning that only a small part of the business benefits from them. AI innovation will be no different. It’s important for any AI-based transformation to strike a balance between doing something fast and ensuring that it is scalable.” 

Underpinning both of these elements should be good governance, she stresses. “We don’t know what AI regulation will look like yet, so understanding the risks of this technology and building a framework that recognises them is critical to getting ahead of potential new laws.”

See section three for more information on developing a robust AI governance framework.

4.6

Conclusion

As stated at the start of this guide, AI is the defining technology of the early-21st century. As with any nascent technology the use cases, developments, opportunities and pitfalls are constantly evolving and impossible to fully predict.

This guide does not aim to provide the C-suite with a blueprint for success, but instead a solid foundation for business leaders to build on. By understanding the foundations of AI – by reviewing best practices and learning from the experiences of experts and peers – business leaders can establish the building blocks that will allow them to test, iterate and improve their use of AI systems, giving them the best chance for success in the future. 

Further reading

With AI developing at such a rapid pace, it is essential that business leaders remain up to date with the latest legislation, use cases and advice from experts. Raconteur’s dedicated ‘Mastering AI’ hub aims to provide the C-suite with the latest industry news, commentary and opinions on how AI is shaping the business world today, tomorrow and in the years to come