2.1 Identifying uses and choosing the right tools


Before implementing any AI tools, business leaders must determine what they want to achieve with the technology and identify specific use cases. The goals could include, for instance, better customer service, higher productivity levels or stronger brand recognition.

This is an essential starting point for any AI journey and not just a tick-box exercise, says Emma Brown, CFO at Medius, a spend-management solutions provider. She advises senior leaders to first identify a real business problem that AI might be able to solve and then work backwards.

Simply deploying the technology with the vague hope of “higher productivity levels” won’t bring the desired results. Dael Wilson, field CTO at Databricks, a provider of data and AI solutions, warns against adopting AI without a clear vision for its use, including specific use cases. Deploying an AI tool and then looking for a problem to solve likely won’t bring any significant benefits – and it could even cause unnecessary disruption.

But what does aligning use cases with business goals mean in practice?

Earlier this year, Dell Technologies launched the Dell AI Factory, a collaboration with AI chipmaker Nvidia to help businesses integrate GenAI applications into their operations.

“Aligning GenAI strategies with business goals means moving beyond a fascination with hype and toward a deeper understanding of how the technology can enhance the business,” says Steve Young, UK senior vice-president and managing director at Dell Technologies. 

How to assess your organisation’s digital maturity honestly

Before leaders even begin thinking about AI adoption, many would benefit from assessing their firms’ current level of digital maturity


Peter van der Putten, assistant professor of AI at Leiden University, echoes this opinion, adding that leaders must ask whether the technology will improve the experience of customers and employees, whether its output will reach quality standards and, ultimately, whether it will lead to better outcomes for the business.

If a company can answer all these questions positively, it should then devise appropriate use cases with which to experiment. It would need to set up pilot and control groups to measure the tool’s impact, just as it would with any new IT system, stresses Van der Putten.

Remember: no two businesses are the same. Every organisation has different pain points and therefore every leadership team must decide their specific goals in using AI and where it makes most sense for their business.

While it may be tempting to pick a low-value, low-stakes use case to test, Van der Putten recommends choosing an application that allows for a “quick measurement of success” and could also make a big impact once scaled up. 

Becky Owen, CMO of Billion Dollar Boy, an influencer-marketing agency, notes that such trials naturally involve a certain amount of error. With this in mind, her company has established a set of guiding principles designed to ensure that its AI systems benefit staff and clients. The agency has also created a task force to seek out new AI tools and uses for them. This group will brief the rest of the business on the latest advances in the field.

“There’s been a surge in AI-integrated tools, each promising efficiencies, but these can be clunky and add time to work processes,” Owen says. “The truth is that we might not all immediately land on the right solution. The key is to be open-minded.”

Data privacy and ownership

For all the enthusiasm about GenAI, there are several pitfalls that firms seeking to implement the technology must avoid. 

Prashanth Chandrasekar, chief executive at Stack Overflow, recounts a meeting he had with 15 CIOs in the banking sector, who had all been keen to realise the huge productivity gains promised by GenAI. Three months later, these IT chiefs hit a wall when concerns over data privacy and security arose following the pilot programmes. 

What is ChatGPT Enterprise?

The enterprise version may help to protect sensitive training data


The CIOs were worried that the data they had been putting into the tools would “make its way, literally, into their competitors’ banks”, Chandrasekar says. 

Given what’s at stake, ownership becomes a “hot potato”, he adds. “You’re betting your career that this is going to work when you’re fairly early in the hype cycle.”

AI tools need to address this credibility problem by adding context, such as citations, to reassure users that the output hasn’t been poisoned by hallucinations. That’s the view of Cassiano Surek, CTO at digital design agency Beyond. 

“Given the data-heavy nature of AI assistance, ensuring that relevant, high-quality information is available will be key to its effective use, as inaccuracies can quickly erode trust,” he says. “AI assistants must be able to cite their sources and have virtually zero hallucinations for such a business-critical usage.”

After identifying realistic use cases and ensuring they are truly aligned with the broader goals of the business, firms should consider several other factors before implementing their AI strategies. These include ethical responsibilities, workforce readiness and success metrics.

Finance leaders’ top tips for investing in AI

How do finance chiefs assess the cost, value and feasibility of potential AI projects in their organisations?

2.2 Setting the guardrails

AI enables its users to do useful things with large pools of data – for instance, fish out insights without tying up the time of data scientists. Data is therefore fundamental to AI. There is a direct relationship between the quality (and quantity) of what’s fed into an AI application and the accuracy of its output.

Data governance has traditionally been viewed in terms of complying with regulations that stipulate how data must be collected, stored and processed. But AI has introduced new challenges and risks to be managed. It’s not enough to obtain a vast amount of data; you also need to consider its characteristics. Where is the data coming from? What does it actually represent? Is there anything that you need to account for before feeding this material into your algorithm? Will it train the algorithm on the right thing?

Ensuring observability

One way of approaching data governance is to use a tool known as an observability pipeline. This ensures that every process is visible, collecting data that is then unified and cleaned up to create a more consumable final data set. 

An example is the conversion of raw website logs into data for an analytics platform. The original data and its point of consumption are ‘buffered’ by the pipeline: the raw data enters it and is processed before being sent out to where it needs to be consumed. The method of consumption can easily be altered because the underlying data is unaffected – that is, you can change how the data is presented without changing the collection process.

AI can both benefit from this process and become part of it. The pipeline itself may feed an algorithm, but ML can be used to detect anomalous data (based on past trends) before it gets too far. This can save people the time and effort they would otherwise need to spend on checking and cleaning data and, once it’s been processed, investigating any irregularities. But it can also ensure that business-critical algorithms aren’t being fed data that will lead them to draw the wrong conclusions and, potentially, nullify any benefits gained from introducing AI to the process in the first place.
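
To make the idea concrete, here is a minimal sketch of such a pipeline stage in Python. It is an illustration only, not any vendor’s product: the log format, field names and the three-sigma threshold are assumptions chosen for the example. Raw web-server log lines are parsed into structured records, and a simple trend check holds the batch back if the hour’s volume looks anomalous compared with recent history.

```python
# Minimal sketch of an observability-pipeline stage (illustrative only).
# Raw access-log lines come in, structured records go out, and a simple
# trend-based check flags anomalous volumes before the data reaches
# downstream analytics or model-training jobs.
import re
from statistics import mean, stdev

LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def parse_line(line: str) -> dict | None:
    """Turn one raw access-log line into a structured record, or None if malformed."""
    match = LOG_PATTERN.match(line)
    if not match:
        return None
    record = match.groupdict()
    record["status"] = int(record["status"])
    record["bytes"] = 0 if record["bytes"] == "-" else int(record["bytes"])
    return record

def is_anomalous(hourly_counts: list[int], current_count: int, z_threshold: float = 3.0) -> bool:
    """Flag the current hour if its request count sits far outside the recent trend."""
    if len(hourly_counts) < 3:
        return False  # not enough history to judge
    baseline, spread = mean(hourly_counts), stdev(hourly_counts)
    if spread == 0:
        return current_count != baseline
    return abs(current_count - baseline) / spread > z_threshold

def run_pipeline(raw_lines: list[str], history: list[int]) -> list[dict]:
    """Clean the raw lines and release them downstream only if the volume looks normal."""
    records = [r for r in (parse_line(line) for line in raw_lines) if r is not None]
    if is_anomalous(history, len(records)):
        print(f"Anomaly: {len(records)} requests this hour vs recent trend {history}")
        return []  # hold back suspect data for investigation instead of feeding the model
    return records

if __name__ == "__main__":
    sample = ['203.0.113.9 - - [01/Oct/2024:10:00:01 +0000] "GET / HTTP/1.1" 200 512']
    print(run_pipeline(sample, history=[1, 1, 2, 1]))
```

Because the raw lines are preserved upstream, the same parsed records can later be presented differently without touching the collection process, which is the point made above.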

Ensuring observability has plenty of benefits for data flows that don’t involve AI, but the sheer volume of material involved and the complexity of ML processes mean that it’s vital to know what’s happening to the data being processed. Checking that the number of visitors in your web analytics matches what your logs tell you is trivial compared with understanding the output of a complex algorithm that’s being trained and tuned over time.

This is because a system that might have started by providing the insights you were seeking could be drifting ever further away from generating anything useful. The better your view of what’s happening to the data, the easier it is for you to prevent this outcome.

Data primacy

Data governance is key to ensuring that AI produces useful results. It incorporates an understanding not only of the ethical and legal issues but also of their implications for what material must be collected and its potential limitations.

The organisations that will benefit the most from AI will be those that take the time to build a framework that ensures they’re targeting the right data, collecting enough of it, checking and cleaning it to ensure that it’s of a high standard, and then using it ethically. 

Should businesses hold back on AI adoption?

How firms should approach AI adoption is a matter of debate


With the right data governance in place, these enterprises can maximise the benefits and minimise the risks of using AI to provide insights that will streamline their processes, inform their decision-making and create powerful new products and services. There is a lot more than hype behind what AI can do for your business – as long as you lay the right foundations for it.

It’s easy to see the attraction of the pervasive, rapidly developing plug-and-play tools, powered by customer data and intellectual property. However, a gap has emerged between organisations’ speedy implementation of AI and their ability to address the special governance concerns that arise.

AI exposes businesses to unique risks, demanding new levels of scrutiny and due diligence. Not only can it expose private information and infringe IP rights; there are also challenges around bias and ethics, cybersecurity for AI and corporate governance practices at external AI vendors. 

It doesn’t help that there’s no universal blueprint for AI governance. The EU AI Act mainly tempers riskier forms of the tech, while the fragmentation of regulations globally means businesses face many questions when it comes to putting the right guardrails in place.

‘An ecosystem challenge’

Antonis Patrikios, privacy, cyber and AI partner at Dentons, a global law firm, describes the AI governance challenge as “GDPR on steroids”. However, GDPR – the EU’s General Data Protection Regulation – primarily concerns the chief data officer and IT departments. AI governance is an ecosystem challenge, involving teams including procurement, legal and information security. But many businesses aren’t taking a joined-up approach. 

Because the risk landscape is so varied, the chances are high that things will fall between the gaps, says Steve Wright, CEO of IT consultancy Privacy Culture.

“Many departments and teams still work in silos. IT teams often don’t work hand in glove with the person involved with AI governance,” he explains. 

Although there are some resource-rich organisations that have been particularly proactive about AI governance, most firms will have to settle for a wait-and-see approach, says Wright. “While GDPR, for instance, had an end date for compliance, this is not the case with AI so far.”

At this point in the evolution of AI, companies must take the initiative themselves, rather than rely on global governance structures for safeguards. Closing the AI governance gap for organisations means creating a framework around two elements: one external and one internal. 

The external element involves scrutinising third-party providers. It is essential for businesses to ask the right questions about AI accountability at the outset of any contract with vendors. 

Corporations increasingly want to utilise private instances of a large language model (LLM) in the cloud. They want to know where that cloud infrastructure is located and to use retrieval augmented generation (RAG) systems – where company data sits outside of the training sources – so businesses don’t share vast tranches of raw data with the LLM itself. Moreover, they want a human in the loop for quality assurance. 
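
To illustrate the shape of that set-up, here is a hypothetical RAG sketch. The toy keyword retrieval and the canned private_llm_complete() function are stand-ins for whatever embedding model, vector database and privately hosted LLM a business actually uses; the key points are that company documents are supplied only as context at query time, never as training data, and that a person reviews the draft before it is used.

```python
# Hypothetical RAG flow (illustrative stand-ins, not a specific vendor's API):
# company documents live in a private store, relevant passages are retrieved at
# query time and passed to the model as context, and a human reviews the output.

COMPANY_DOCS = [
    "Policy 14: complaints must be acknowledged within two working days.",
    "Policy 2: refunds over £500 require sign-off from a team leader.",
    "Policy 9: customer data may not be pasted into external tools.",
]

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Toy retrieval: rank documents by shared words (a real system would use embeddings)."""
    words = set(question.lower().split())
    ranked = sorted(COMPANY_DOCS, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return ranked[:top_k]

def private_llm_complete(prompt: str) -> str:
    """Placeholder for a privately hosted LLM endpoint."""
    return f"[draft answer grounded in supplied context]\n{prompt[:80]}..."

def human_review(draft: str) -> str:
    """Human-in-the-loop step: route the draft to a reviewer before it is used."""
    return draft

def answer_with_rag(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using only the context below. If it is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return human_review(private_llm_complete(prompt))

if __name__ == "__main__":
    print(answer_with_rag("When do refunds need a team leader's sign-off?"))
```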

As for the internal component, firms should focus on ensuring top-notch data management systems, since AI data input is one of the most significant factors that businesses can control. Strong internal AI governance also means considering privacy by design, mapping AI systems in use and implementing robust ethical guidelines.

A proactive approach to AI governance is vital. “The core challenge for businesses is that AI policies, protocols and contracts can quickly become outdated as technology, regulations and market standards rapidly evolve,” notes Alexander Amato-Cravero, a director of emerging technology at Herbert Smith Freehills, a law firm.

Standards and reporting

With so many moving parts in AI governance, how can organisations gauge success? There are a number of frameworks being developed around the world. 

Businesses can expect more from the European Commission, for instance. When the EU AI Act was formulated, EU technocrats had a future conformity assessment or CE mark in mind, similar to a BSI kitemark for AI. 

Wright thinks a European roadmap, which organisations can test against, is likely coming in the near term.

An AI kitemark would help businesses bridge the internal AI governance gap, because many organisations are struggling to work out the bare minimum of resources needed to account for responsible use of the technology. 

The UK’s Information Commissioner’s Office has an AI risk assessment tool to help in this process. Many businesses also use the US National Institute of Standards and Technology AI Risk Management Framework and/or the OECD’s responsible AI governance framework. 

Patrikios says one of the biggest issues with filling the AI governance gap is that there aren’t enough trained people for technical roles in data and security, among others.

“The talent shortage right now is similar to the lack of data protection officers that occurred around 2016, two years before the GDPR came into force,” he explains, adding that some organisations, such as the International Association of Privacy Professionals, are training people on AI governance.

Case study: How Lewis Silkin is using AI to automate legal work

2.3 Preparing an AI-ready workforce

Ask any business leader about their vision for AI integration and nearly all of them will acknowledge that AI must ultimately be a partner to human workers – a tool enabling people to do their jobs better; a co-pilot. 

To achieve this, humans must become comfortable working with machines. But many organisations are facing worrisome digital skills gaps, which threaten their ability to integrate the technology effectively. Fewer than half of companies in the US (38%) and the UK (44%) are taking steps to train workers to use AI tools, according to a LinkedIn survey of 3,000 senior executives in December 2023. 

What’s more, a 2023 KPMG report found that 61% of desk-based workers in the UK actually want training in GenAI. More than half of 18- to 24-year-olds are already using the technology to learn professional skills, but only a fifth of UK employees can find learning resources quickly at their job.

After the initial surge of enthusiasm accompanying the launch of ChatGPT in late 2022, a more sober assessment of the technology’s risks and limitations began. That’s according to Gina Smith, research director, IT skills for digital business at IDC, a technology research firm. Reflecting on the corporate policies and procedures needed to govern the use of AI, she says even getting the guardrails in place takes a lot of time. 

Upskilling for an AI future

Employers have started to define best practices for upskilling workers, which will help to address these problems. The first step is to ensure that employees receive adequate training on how to use AI systems.

Take JPMorgan Chase, for instance. At the company’s ‘investor day’ in May, Mary Callahan Erdoes, CEO of the bank’s asset and wealth management business line, explained that all employees would receive training on prompt engineering “to get them ready for the AI of the future”.

Instruction in such foundational skills is part of the AI literacy that companies seek to give employees as a baseline for AI training. It also typically covers issues in AI ethics and company policy governing the use of AI.

For instance, Ikea, which has provided “AI literacy training” to nearly a quarter of its workforce as of September 2024, offers modules on responsible AI and AI ethics. Likewise, MasterCard offers eight hours of video training on key responsible-AI principles such as fairness and transparency; this is delivered through a new intranet hub for company-wide AI learning launched in August 2024. 

Companies are also offering more specialised instruction beyond the basics. Ikea, for instance, runs an accelerator programme for new digital hires with AI-related degrees to help them get up to speed in their jobs. 

Meanwhile, financial services provider USAA, which has 37,000 employees, relies on hackathons to give technical and other staff the chance to get hands-on experience with AI software and try to find novel use cases for the technology. 

Advertising behemoth WPP, which has long championed the use of AI technology, has developed various strategies for training staff at all levels. These range from providing ‘future readiness academies’ – online courses in various tech disciplines including data and AI – to sponsoring a group of senior executives for a postgraduate diploma in AI at Oxford University’s Saïd Business School in 2023.

There are also a range of organisations assisting companies with their training efforts. These include consultancies such as Accenture, which has a proprietary skills learning platform; major tech providers including Microsoft, Adobe and Meta; LinkedIn, through its LinkedIn Learning service; and online learning specialists such as Pearson, which has introduced a certificate for GenAI to meet the rising demand for AI-related skills.

To address the significant skills gaps, roughly half of organisations say they’re relying on professional certifications from big tech firms; about the same proportion are providing internal upskilling training, according to a July IDC survey of 1,269 organisations globally. 

Is GenAI prompt training worth the investment?

GenAI tools are only as good as the prompts they’re provided


Whether they’re hiring outside experts or finding purely internal solutions, most firms will need to up their investments to provide AI training to staff. 

Just how much these efforts cost is difficult to pin down. WPP, for instance, says in its 2024 Strategic Report that it plans to spend £250m this year to support its AI strategy, but it’s not clear if that includes AI development skills. 

IDC projects worldwide IT training and educational services spending to increase 5% annually in the coming years, from just over £16bn this year to almost £19bn in 2027. The Americas will account for almost half that total, at £8.7bn.

What about the return on investment from the recent AI training push? Because most AI training programmes are still in the early stages, it’s difficult for companies to tout any measurable results yet. A LinkedIn report released in March found that just 4% of large-scale upskilling programmes had reached the measurement stage, based on a global survey of more than 1,600 learning and development and HR staff in September 2023.

But new skills are not learnt overnight. AI is likely to be central to future business models, so more firms may well bet that the returns on training staff now will show up in the organisation’s future success.

AI leadership

So who will lead organisations’ AI efforts? At many firms, especially those just beginning their AI journeys, natural choices for leadership include chief data officers, information and security chiefs, or other senior technical specialists.

But as organisations progress in their AI initiatives, some are establishing a bespoke C-suite position for AI strategy and management: the chief AI officer (CAIO). Should other enterprises follow suit? “It depends” is the answer that most experts will give. 

One of them is Michael Queenan, founder and CEO of Nephos Technologies, a consultancy specialising in data-service integrations. He notes that many of the S&P 250 are hiring, or talking about hiring, an AI chief of some description. But he compares this to an “emperor’s new clothes” scenario, suggesting that firms are “often not giving enough thought” to why they really need one. 

Their reasoning may be no more complex than “they don’t want to be seen as the company that doesn’t have one, lest they’re asked why not at the next shareholder meeting or on CNN and their share price falls”, Queenan explains.


The decision whether to hire an AI supremo or not should be based on how central the technology already is to the business. That’s the view of Brian Peterson, co-founder and CTO of Dialpad, the creator of an AI-based customer intelligence platform. 

“If AI is a big element of your business or you’re building it into your product set, having a CAIO would provide focus. But, if it just seems cool and could be part of your future but you’re not sure how yet, appointing one might not be right for you,” he says, suggesting that it would make more sense in the latter scenario to hire a consultant first to assess the technology’s potential value to the firm.

In any case, CAIOs are a scarce and costly commodity, reports Waseem Ali, CEO of Rockborne, a recruitment consultancy specialising in the data and AI sector.

He has observed “more chief data officers than anyone else absorbing the AI remit to become chief data and AI officers, while some organisations are simply turning their CDOs into CAIOs. You don’t see this conversion happen as much with CIOs or CTOs unless they have a data remit.”

The absorption of roles makes sense to Queenan, who says: “Companies should absolutely get across AI, but most large ones already have the data science people and processes in place to do that. AI is an app that sits on top of your data, which means it’s just another data product. So, if you already have a team creating such things, this is simply adding a string to their bow.”

He believes that having “a head of AI who reports to the CIO or CTO is more than sufficient in most cases. In five years’ time, there could be a real need for a powerful job title such as CAIO, but it’s too early for it now.”

Queenan’s view is that organisations generally need more time to work out how to “do AI better” and decide whether they will benefit most from developing their own tech or buying off-the-shelf products. Most firms already seeking to hire a CAIO are “putting the cart too far in front of the horse”, he argues. 

Peterson agrees that granting an AI specialist a seat at the top table now would be overkill in most organisations. 

“It depends on what expertise there already is on the board, whether you need it and what value a CAIO could bring,” he says. “But, if you’re not a tech company and AI isn’t core to your business, it probably isn’t necessary.”

However, if your company is set on hiring an AI chief, it is essential to “put your money where your mouth is” and equip the successful candidate adequately, Peterson argues.

“You can’t just hire a CAIO, give them that big title with lots of expectations and leave them to it,” he warns. “You need to support them by putting money, resources and prioritisation behind it. Otherwise, you’ll be setting them up to fail.”

Cultural considerations

There is also a significant cultural component to AI implementation. For firms to know exactly what they want from GenAI, full support and buy-in from all C-suite executives is required. Despite the buzz GenAI has created over the past couple of years, there’s still plenty of hesitation around its adoption, whether because leaders are stuck in old-school ways of thinking or because they’re concerned about the return on investment. 

The key to winning over reluctant C-level executives is to show them how GenAI can solve real business challenges, argues Kristof Symons, CEO International at Orange Business. In March 2024, the company launched two GenAI products for French enterprise customers. “When leaders back AI, it sends a strong signal: this is important and we’re all in this together,” Symons says.

‘Everybody, irrespective of their seniority, has to become AI-aware’

An interview with Rafee Tarafdar, CTO at Infosys


Paul Cardno, global digital automation and innovation senior manager at 3M, thinks GenAI must be “positioned as a strategic investment”. He recommends demonstrating its value by highlighting how competitors have used the technology to improve productivity and deliver efficiencies and cost reductions. 3M is “prioritising GenAI projects that are helping individuals to do their jobs, like content creation and process automation, as these directly support our core objectives”, adds Cardno. 

The C-suite must also tolerate failure. Young says some executives can be “paralysed by indecision” when it comes to investing in GenAI because of the potential for an idea to fail. “Investing in a project that fails could be damaging, but failing to act quickly enough could be more so,” he points out.

Of course, it’s not just senior leaders who must be won over. Employees may be even more resistant to AI than their leaders, thanks to the threat of job displacement.

Cardno stresses the need for all those involved in a GenAI project – from the engineering team to the legal affairs department – to pull in the same direction. This requires leaders to establish a culture of trust, not just in the GenAI solution that’s being built but in each other as well, he says. 

If a GenAI project is to be deployed successfully, all employees, not just engineers and data scientists, must believe in it. As Symons puts it, leaders need to “demystify GenAI and show it as a tool for everyone”.

This means ensuring the technology isn’t just for a select few, he says. “Democratise it. If only certain employees get access, others might feel left behind, which can create further resistance. There must be AI equity within the business, because without it you risk a disparity that may see some employees get ahead of those that don’t have access.”

Both Symons and Young emphasise the need for education and training to support those who aren’t confident in using GenAI. By empowering employees and arming them with knowledge, they’ll get a better understanding of the benefits the technology can bring to the workplace. This will likely help pilot projects be more successful, but there may still be some pushback.

“It’s important to acknowledge there may be some short-term trade-offs for long-term gains,” says Pickrell. “There are no quick wins with GenAI. For it to truly deliver on its potential, it requires large quantities of high-quality data and a highly skilled team. It must be seen as an essential part of the business infrastructure.”

2.4 Implementing your AI strategy

A road map for success with AI isn’t easy to find. Use cases vary wildly depending on an organisation’s resources, industry and position on its AI journey.

But new research from MIT’s Center for Information Systems Research suggests a process that could enable firms to implement AI into workflows quickly and safely. 

The research was inspired by questions from tech leaders on why they aren’t getting the same value from GenAI as they have from data and analytics technologies in the past. Based on a series of virtual roundtable discussions with data and technology executives, it identifies a need to separate the technology into two distinct parts – tools and solutions – before deploying them in a two-step strategy.

AI tools “are designed to be broadly applicable”, according to Dr Nick van der Meulen, who co-authored the research. They could include conversational systems, such as ChatGPT, Claude or Gemini, as well as digital assistants embedded in existing productivity software. 

“An employee will use a GenAI tool to summarise a document, brainstorm ideas, rewrite an email or analyse financial results,” says Van der Meulen. “As one executive in our study put it, they allow for ‘productivity shaves’.”

Crucially, the report reveals that AI tools also help employees get comfortable with using AI and are important mechanisms for building data democracy in an organisation. 

However, the report emphasises that leaders must understand some basic principles of use at this first step, most importantly putting certain guardrails in place and backing them up with workforce training (see previous sections).

“Unvetted GenAI tools, in the form of ‘bring your own AI’, can bring significant risks for an organisation, including data loss, intellectual property leakage, copyright violation and security breaches,” explains Van der Meulen. “The guardrails should outline which tools are acceptable and any conditions that may apply. For example, a company may permit GenAI tool use when prompts draw on publicly available information but disallow its use if prompts require company data.”
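
As a simple illustration of the kind of guardrail Van der Meulen describes, the hypothetical check below blocks prompts that appear to contain company data and lets through prompts that rely only on public information. The marker list is invented for the example; real controls would combine written policy, data-loss-prevention tooling and staff training.

```python
# Hypothetical pre-prompt guardrail (illustrative only): block prompts that look
# like they contain company data, allow prompts that draw on public information.
import re

INTERNAL_MARKERS = [
    r"\bconfidential\b",
    r"\binternal only\b",
    r"\bcustomer (id|account|record)\b",
    r"\bACME-\d{4,}\b",  # invented internal reference format for the example
]

def is_prompt_allowed(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a prompt under this example policy."""
    for marker in INTERNAL_MARKERS:
        if re.search(marker, prompt, flags=re.IGNORECASE):
            return False, f"blocked: prompt appears to contain company data ({marker})"
    return True, "allowed: prompt appears to use only public information"

if __name__ == "__main__":
    print(is_prompt_allowed("Summarise the main points of the EU AI Act."))
    print(is_prompt_allowed("Rewrite this confidential customer record for the board."))
```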

Allow experimentation within reason

Employees shouldn’t be left to explore tools independently, according to the MIT research. There must be company-wide training in place to teach them how to effectively and responsibly instruct and interrogate GenAI tools so they can get the most out of them. 

With these guidelines in place, senior leaders can be assured that tools are being used safely. This will also help foster a self-perpetuating understanding of AI best practices across the organisation. As more staff use the tools correctly, best practices will become the norm.

Once a sound knowledge base has been established, firms can further build AI architecture and expand its horizons with the introduction of GenAI solutions, which help groups of employees to transform workflows and create value. 

For example, Van der Meulen says the research team has “heard from a number of call centres that use LLMs to transcribe calls as they happen and process the content and tone of conversations. This is then used to coach agents in real time to either recommend empathetic responses to frustrated customers or propose upselling opportunities for satisfied ones.”
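
The sketch below shows the rough shape of that coaching loop. The keyword-based tone check is a stand-in for the LLM analysis the call centres describe, and the transcript chunks would in practice arrive from a speech-to-text service; the aim is to illustrate the loop, not a production system.

```python
# Rough sketch of a real-time coaching loop (illustrative stand-ins only):
# transcript chunks arrive as the call happens, a tone check classifies them
# and a coaching hint is pushed to the agent.

NEGATIVE_WORDS = {"frustrated", "angry", "cancel", "unacceptable", "complaint"}
POSITIVE_WORDS = {"great", "thanks", "happy", "perfect", "love"}

def classify_tone(utterance: str) -> str:
    """Toy tone check; in practice an LLM or sentiment model would do this."""
    words = set(utterance.lower().replace(",", "").split())
    if words & NEGATIVE_WORDS:
        return "frustrated"
    if words & POSITIVE_WORDS:
        return "satisfied"
    return "neutral"

def coach_agent(utterance: str) -> str | None:
    """Turn the customer's latest utterance into a real-time hint for the agent."""
    tone = classify_tone(utterance)
    if tone == "frustrated":
        return "Acknowledge the frustration and offer a concrete next step."
    if tone == "satisfied":
        return "Customer sounds happy: mention the relevant add-on or upgrade."
    return None  # no hint needed

if __name__ == "__main__":
    for chunk in [
        "I'm really frustrated, this is the third time I've called.",
        "Great, thanks, that fixed it.",
    ]:
        print(coach_agent(chunk))
```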

The key to success is to pursue both tools and solutions but use different strategies that dovetail to create a virtuous cycle. 

“GenAI tools can serve as a form of grassroots innovation,” says Van der Meulen. “Employees can discover promising use cases that can later evolve into more formalised, scalable and lucrative GenAI solutions.”

Organisations at different stages of the AI journey must adopt different strategies. The report recommends that the best starting point for a GenAI journey is the targeted adoption of a few tools from trusted vendors, accompanied by close oversight. 

Those further along in their journey should shift their focus to developing GenAI tools into solutions that contribute to strategic business objectives.

For instance, NN Group, an international financial services company, created a ChatGPT ‘playground’, where employees can use various GenAI tools to test their ideas on how to make their work more efficient. 

How so-called lean AI can help firms to overcome implementation challenges

For tech leaders, it’s a question of whether to build or buy their AI systems


“The playground is available to all employees. With a few ground rules in place and by making it easy to use, there is no need for employees to use unsupported tools outside of the playground,” explains Tjerrie Smit, NN Group’s chief analytics officer. “Launching the playground has been a game-changer for us. It provides a secure and compliant environment where our employees can safely experiment with GenAI. This proactive approach not only encourages innovation but also ensures that we can scale successful ideas into impactful AI applications across the organisation.”

One of the main takeaways from the research is that businesses can choose their approach: buying, boosting or building an AI solution. 

Buying means using vendor-provided solutions where the vendor manages the model and operations. Boosting enhances vendor-provided models by incorporating proprietary data through techniques like fine-tuning or retrieval augmented generation (RAG), which customise pre-existing GenAI models with more relevant information from company sources. Building is the most resource-intensive approach, where organisations take full ownership of developing, running and maintaining the model.

“Buy or boost GenAI solutions when you need to move fast and gain competitive parity,” advises Van der Meulen. “But build when you need a differentiated GenAI solution that is hard to imitate and provides a competitive advantage.”

CIOs must remain vigilant when it comes to business alignment, so that GenAI is never siloed and left in the hands of a few select technologists, as this will starve it of the oxygen of innovation. 

As the MIT research suggests, the surest way to accelerate GenAI’s value to an organisation and ensure it is safely embedded is to increase employee access to the technology.

How to think about the ROI of AI investments

It’s clear that AI implementation requires significant planning and resources. This will inevitably lead senior decision-makers to ask a surprisingly complex question: is it worth the money? 

Nearly nine in 10 (87%) organisations are actively developing GenAI initiatives, but only 35% have a clearly defined vision for how they will create business value from GenAI, according to Bain Research. 

And there are different views on just what constitutes success. Consensus on how to measure the success of AI investments is rare, according to a survey of nearly 600 CIOs and heads of IT by Gong, a sales platform.

So where are major firms focusing their AI investments – and how do they measure the impact? According to the survey, 55% focus on productivity, but a similar share look at efficiency and revenue (53% each), while 46% focus on employee satisfaction.

AI has helped to transform business processes at Axa, an insurer, bringing benefits everywhere from customer service to risk assessment and fraud detection. Since the company introduced a corporate version of GPT to its call centres, call-resolution time has been slashed from five minutes to five seconds, because agents can swiftly retrieve policy document references in response to customer questions, says Axa’s UK and Ireland CIO Natasha Davydova. 

AI-enabled pricing platforms have also made pricing more efficient, helping the company’s underwriters complete their customer risk assessments and pricing proposals in hours rather than weeks. AI-enabled IT observability tools, meanwhile, have been implemented to detect and prevent IT incidents, reducing the total number of incidents and pushing down the time to fix.

A trial of Microsoft Copilot, which helps teams summarise and draft documents, is being extended, with its value based on productivity increases, error reduction and employee satisfaction, all of which have changed for the better, according to Davydova.

This comprehensive tech stack doesn’t come cheap. Davydova maintains that the company’s AI budget is confidential. However, ChatGPT at enterprise level is quoted at $30 (around £24) per user per month. The price for Microsoft’s Copilot Pro, meanwhile, is currently published at £19 per user per month. In a 150,000-strong global business like Axa, this would amount to £43.2m and £34.2m a year respectively. 
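
Those annual figures follow directly from the per-user prices quoted above, assuming, as the comparison does, that every one of the 150,000 employees is licensed:

```python
# Back-of-the-envelope annual licence cost at the per-user prices quoted above.
EMPLOYEES = 150_000
MONTHS = 12
chatgpt_enterprise_gbp = 24  # ~£24 per user per month
copilot_pro_gbp = 19         # £19 per user per month

print(f"ChatGPT Enterprise: £{EMPLOYEES * chatgpt_enterprise_gbp * MONTHS / 1e6:.1f}m a year")
print(f"Copilot Pro:        £{EMPLOYEES * copilot_pro_gbp * MONTHS / 1e6:.1f}m a year")
# -> ChatGPT Enterprise: £43.2m a year / Copilot Pro: £34.2m a year
```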

Still, that’s not a huge outlay for a giant like Axa. The company’s total tech spend in 2023 on all platforms, not just AI, was reported by Global Data to be $2.2bn (£1.74bn).

The changes are making a difference to Axa’s bottom line, Davydova says. But how does the company measure the value of the technology? 

Davydova says: “By analysing KPIs from operational efficiency and enhanced customer experience to improved risk management, cost savings and employee productivity, the evidence indicates that AI contributes positively to the company’s goals and bottom line.”

Jean-Philippe Avelange is CIO at Expereo, a business connectivity company. He says all conversations around AI initiatives begin with a question: “What’s our starting point that we want AI to help with, if it can?” 

Avelange says Expereo decided in 2023 to upgrade its Salesforce platform to the AI-enhanced Agentforce offering, which includes features such as real-time AI-powered guidance in customer interactions. The company did not share financials, but a total package at the published price of $500 (around £394) per user per month would amount to $2.4m (about £1.9m) a year for the 400 Expereo employees using the platform.

Like Davydova, Avelange emphasises customer service among AI use cases and correlates AI platform rollouts with productivity gains and any resulting increases in employee satisfaction. 

Avelange outlines key focus areas for Expereo: “How many emails are we sending per customer service agent? How long does it take an agent to handle a case summary? How much time is spent on a customer update? We then assess the cost for that specific AI use case, start prototyping and commence frequent rollouts to gather quick feedback.”

New technology comes with a financial price, but there’s also an environmental cost that decision-makers shouldn’t overlook, notes Louise Bunting, CIO at Carbon Net Neutral Technology Solutions, a corporate carbon-measurement and management company. 

For example, it took 1,287MWh of electricity to train the large language model (LLM) GPT-3, according to the Association of Data Scientists; that’s roughly equivalent to the usage of an average American household over 120 years. Gartner has predicted that by 2030, AI could consume 3.5% of the world’s electricity, while each GPT query uses about half a litre of water to cool its servers. 

This all adds to an organisation’s carbon footprint, Bunting warns, which must be considered when assessing the ROI of AI. She recommends a particular line of questioning when considering whether AI will be worth the cost. “Is it actually adding value? Or could you do what you need to with tech that you’ve already got? Is it actually going to save us any money, when we could do the same thing through the automation and digitisation of processes, without AI?”

Yet another overlooked cost of AI, Bunting adds, is governance. An AI governance director can earn a salary of up to £74,000, according to Glassdoor. But firms, especially large ones, will likely need a team of specialist staff members to create frameworks and processes for how AI should be used and monitored in a business, as well as additional legal support to clean up the mess if AI gets it wrong, which is still a reality.

“If the value of AI is that I can respond to my customer in five seconds, does that warrant having a whole team governing its use?” asks Bunting.

But Avelange notes that quick wins aren’t everything. “The risk of any organisation making short-term considerations about ROI on AI initiatives is that they could then potentially miss out on any long-term gains and benefits,” he says.

Case study: How Priority Direct is trialling AI to balance supply and demand
