AI in business: translating ambitions into action

A panel of experts discussed how business leaders can ensure their AI goals don’t get lost in translation

Businesses and their employees may fear the impact of artificial intelligence, but they risk being left behind if they spend too long pondering their AI strategy.

This was the view of experts who attended the ‘Ensuring AI goals don’t get lost in translation’ roundtable, organised by Raconteur and sponsored by language AI platform DeepL.

They debated how the technology is already transforming organisations by saving time and money, especially in areas such as business communications where AI is supporting humans rather than replacing them.

Tim Hickman, partner at law firm White & Case, says employees must learn to trust AI gradually, much as people develop confidence in their colleagues over time.

“I urge businesses to test and learn,” he says. “We encourage our people to use appropriate AI tools to create executive summaries of their drafting, to help confirm that the core message they are seeking to convey is coming across clearly. We also encourage them to use such tools to suggest legal arguments for and against any position they are planning to adopt, as a way to test their thinking and to help identify possible gaps. AI will reshape the legal profession and make lawyers better at their jobs.”

Hickman is an expert in the new EU AI Act, which came into force in August 2024. He accepts that regulation is needed to help ensure accuracy and security but insists it must not stifle innovation. “The EU AI Act is designed to be adaptable to future technologies, but it leaves businesses facing a significant lack of certainty,” he says.

One of the first corporations to publish an ethical AI strategy to soothe any concerns among employees and customers was Vodafone. In October it also announced a 10-year, $1bn partnership with Google to power AI technology across Europe and Africa. 

Justin Shields, Vodafone Business’s chief technology officer, says AI is already delivering a better understanding of customer sentiment, generating useful feedback on Vodafone services and boosting the efficiency of internal departments.

“You have to be focused on using AI in the right way,” says Shields. “This includes training staff to handle customer data correctly, because it could end up somewhere that is not secure. We must also ensure AI does not create any biases when it comes to decision-making.”

AI is also improving operations and business communications in the public sector, including within the emergency services.

Angus Wallis, director of innovation and bid strategy at NEC Digital Studio, explains how AI will be used by the police to communicate summaries of incidents more quickly. 

He says adoption of the technology will increase once government organisations become more open with their employees about how AI can make their jobs more productive, freeing them to focus on the most important activities.

“We work within the youth justice system where caseworkers may be suspicious of AI, but, at the same time, they do not always have time to read all of a child’s complex case history,” says Wallis. “We will show them how to use AI to summarise all the information they have. This can highlight quickly, for example, which children might be at risk of self-harm.”

Indeed, AI is becoming an enabler for people who feel overwhelmed in their day-to-day work.

Alan Flower, executive vice-president and head of AI Labs at HCLTech, which has more than 218,000 employees, explains how AI can assist an organisation’s workforce. “We might work with an insurance company where a salesperson hasn’t had time to fully understand the product or a client’s policies. AI will generate product summaries to help that agent communicate the benefits credibly and respond to questions. AI can also significantly transform the delivery and accessibility of world-class healthcare, from enhancing the efficiency and effectiveness of delivery through to improving patient outcomes.”

Flower adds that once employers encourage their staff to experiment with AI, they soon notice productivity creeping up. This might start with someone transcribing a Teams call, using AI to begin a PowerPoint presentation or adopting a specialist tool to support research. 

He also agrees there must be robust regulation to ensure accuracy and security, but warns that poorly thought-out rules could mean some businesses unwittingly break the law.

“To try to avoid this, we are seeing businesses fine-tune AI capabilities to better suit their own needs, augmented by appropriate governance to ensure safety and accuracy.

“There is huge potential for AI to lead to an entrepreneurial boom due to the dramatic simplification of setting up a new business or transforming an existing organisation. However, AI regulation cannot be ignored. We are seeing the implementation of fine-tuned large language models (LLMs), with appropriate guardrails, to ensure a more accurate response to any user enquiry.”

Daniel Lloyd, partner at law firm TLT, says businesses need to sweat their AI assets to boost productivity, but they must take their employees with them as the workplace changes. 

He also urges regulators globally not to put too much of a burden on businesses, as this could dissuade them from investing in AI. 

“We want to protect human rights, and this is fundamental to the regulation, but we need a flexible approach to help business,” he says. “We must get the balance right between encouraging the use of AI to improve lives and not putting too many obstacles in place.”

With more organisations keen to encourage their employees to experiment with AI, it is important that staff are discouraged from using tools that have not been sanctioned by the business (so-called shadow AI), as these products can increase security and accuracy risks.

Joy Uzuegbu, product marketing lead at DeepL, says that, in business translations, data protection and accuracy are arguably more important than speed or cost savings.

“You have to track the tools employees are using because you are not sure what data they are putting in,” she says. “You need to educate employees and make sure everyone understands their responsibilities when it comes to AI, to ensure accuracy and security, as well as compliance with regulation around the world.”

DeepL’s technology focuses on strategic languages to help businesses communicate in different global territories and improve the customer experience in local markets. This can provide businesses with a competitive advantage. 

Uzuegbu explains that DeepL uses a panel of linguistic experts to develop and validate its suite of products, which includes a glossary of terms for different industry sectors.

It is certainly sensible to advise employees on which AI tools to use. Gerard Frith, entrepreneur in residence at law firm Taylor Wessing, says this is part of learning to trust staff to act ethically and responsibly, while still allowing them to experiment with AI. 

His firm uses the technology to manage and distribute knowledge, including around specific legal cases and regulatory updates.

“We have also used AI in communication to create voiced podcasts around complex topics,” he says. “You just need to have proper guidance in place. Some employees will never admit that AI could do a job as well as they can – that is human nature.”

In all organisations, especially those operating in highly regulated sectors, it can be advisable to devise internal policies around the use of AI.

Caro Ames, data science strategy leader at Arup, says her company followed this path to safeguard the input of sensitive data relating to the energy, water and transport sectors.

“We are working with lots of big infrastructure projects where there are huge health and safety implications,” she says. “We need to ensure employees still question AI when they use it while, at the same time, encouraging them to try it. This is happening as we get more buy-in from the C-suite.”

Part of her role is helping companies leverage data from AI to support sustainability, as well as drive efficiency. 

“You must take a human-centred approach to AI, which means supporting people so they can see the value. In our sectors, the sharing of knowledge is so valuable, but people from different teams need to know how and where to find the information, so major projects run more smoothly and sustainably.”

AI is an invaluable tool to support employees, but controls are needed. Organisations are already enjoying the benefits, particularly around business communication, but they need to be confident that AI outputs are accurate, secure and compliant.

For more information, visit deepl.com