Think first: why responsibility needs to be at the forefront when deploying AI

As the AI race heats up, no business wants to be left behind – and doing things properly will yield even bigger benefits
Photograph: Ryoji Iwata/Unsplash

The AI era is upon us, with new advances seemingly every week pushing the technology to new heights. Between Google, OpenAI, Microsoft and a raft of other companies, developments that can improve the way we live and work are more accessible than ever before. It’s little wonder, then, that businesses are considering how best to integrate AI into their processes to reap the benefits.

But thinking before acting is vital in such a fast-moving space. The first-mover advantage that businesses seek out can quickly be negated by the regulatory risks of irresponsible use of AI.

“Lots of companies talk about AI, but only a few of them can talk about responsible AI,” says Vikash Khatri, senior vice-president for artificial intelligence at Afiniti, which provides AI that pairs customers and contact-centre agents based on how well they are likely to interact. “Yet, it’s vital that responsibility be front of mind when considering any deployment of AI – the risks of not considering that are too great.”

Think fast, act slower

In part, the problem is that a fast-moving, competitive environment often makes the responsible use of AI secondary to gaining market share. The history of AI, says Khatri, is one of companies developing tools that harness big data sets without fully considering the impact those tools can have on society. Widely used AI tools are trained by trawling the internet, and the information gleaned online can replicate and amplify our societal biases. Another problem is that AI-generated content is often ill-suited to the specific needs businesses have when deploying AI.

“If I’m a broadband provider in the UK, as opposed to a health insurance company in the US, there’s a specific way that I communicate with my customer,” says Khatri. “With respect to the generative AI technology that’s receiving so much attention, it’s important that the AI models being used are trained on the company’s own data, rather than relying solely on generic, third-party data. That way, the organisation remains compliant with global data regulation and the AI models generate content that aligns with the company’s unique approach to its customers.”

Khatri points to how a customer service chatbot trained on the way users interact with one another on social media, for instance, could quickly turn poisonous rather than supportive, lobbing insults instead of offering advice.

“At Afiniti, we use responsible AI design to make those moments of human connection more valuable,” says Khatri. “That in turn produces better outcomes for customers, customer service agents and companies alike. One way we do this is by training our AI models only with the data we need, and we continuously monitor them so our customers and their customers get the results they want, while being protected from bias or other discriminatory outcomes.”

It’s not just the risk of alienating customers that should be at the forefront of a business leader’s mind when considering how to roll out AI within an organisation and to its clients. Regulation is on the horizon, and is likely to bring specific requirements for how data is fed into the models that give AI its ‘brain’, and for how AI is used to handle customer interactions.

Caution avoids consequences

“Before you even start to develop or deploy AI, you must be cognisant of the regulatory landscape,” says Kristin Johnston, associate general counsel for artificial intelligence, privacy and security at Afiniti. “This means examining your governance structure around data compliance to get your house in order first.”

AI regulation is complex and constantly changing, and a patchwork of laws across the globe can make it hard for businesses to comply. For example, businesses operating in Europe have different requirements from those with customers in the US, while the UK’s data protection regulation is likely to soon diverge from the European Union’s.

The magnitude of the task of responsibly deploying AI is something most businesses have yet to fully grasp, fears Johnston. “A lot of companies haven’t built out a governance process specifically around AI,” she says. To do so properly, she advises, it’s important first to settle on definitions of ‘AI’ and ‘machine learning’, then to identify how AI is being used within the organisation based on those definitions, and finally to construct a responsible AI programme accordingly, so that all employees are aligned.

AI is set to become so ubiquitous that the external services feeding into your company may use it too. Google, for instance, has introduced generative AI-powered aids for drafting documents and slide decks in its cloud-software suite, which your employees could soon find themselves using without realising it. And if people in your company aren’t sure what AI is – or whether they’re using it – you can’t be confident your approach to AI is responsible.

Root and branch reform

Johnston stresses that a clearly understood definition of AI within your company is the basis of any AI governance programme. She recommends considering the definition of ‘AI systems’ in the artificial intelligence risk management framework published by the National Institute of Standards and Technology (NIST) in the US as a working definition.

“Making sure everyone is aligned is critical, because you want to check for any use of AI throughout your organisation,” she says. “Any protocol worth its salt needs to be able to categorically define who is using AI tools, when they’re using them, what data they’re using and what the limitations of the tools are. It’s also important to ensure AI tools are being used in a way that respects privacy and intellectual property, given the mounting legal actions against some generative AI tools by those who believe their data was used to train the models that power such platforms.”
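
To make that inventory concrete, here is a minimal sketch in Python of what one entry in such an AI-use register might look like. The field names and the escalation rule are illustrative assumptions for this article, not terminology from Afiniti or NIST, and a real programme would tie each entry into a formal review workflow.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseRecord:
    """One entry in a hypothetical register of AI use across an organisation."""
    tool: str                    # which AI tool or service is in use
    owner: str                   # who is using it (team or named role)
    in_use_since: date           # when it came into use
    data_categories: list[str] = field(default_factory=list)    # what data it touches
    known_limitations: list[str] = field(default_factory=list)  # e.g. hallucination
    trained_on_third_party_data: bool = False  # possible privacy/IP exposure
    privacy_reviewed: bool = False

    def needs_review(self) -> bool:
        """Illustrative rule: escalate any tool that touches personal data or
        relies on third-party training data but has had no privacy review."""
        risky = ("personal data" in self.data_categories
                 or self.trained_on_third_party_data)
        return risky and not self.privacy_reviewed

# Example: a generative drafting aid quietly adopted by one team
record = AIUseRecord(
    tool="slide-deck drafting assistant",
    owner="marketing",
    in_use_since=date(2023, 6, 1),
    data_categories=["personal data", "customer correspondence"],
    known_limitations=["may hallucinate facts"],
    trained_on_third_party_data=True,
)
print(record.needs_review())  # True – flag it for the governance team
```

Even a structure this lightweight forces the questions Johnston raises – who is using a tool, since when, on what data and with what limitations – and gives a governance team something auditable to review.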

Doing the work to make sure responsibility is front and centre of any AI deployment is vital, because it will save headaches in the long run. Not only can the irresponsible use of AI lead to trouble, but generative AI’s tendency to ‘hallucinate’ content – in other words, to generate untrue responses – could bring even bigger trouble in the court of public opinion for spreading disinformation. Yet fewer than 20% of executives say their organisation’s actions around AI ethics live up to their stated principles on AI. By putting in place a robust responsible AI programme, companies can avoid the pitfalls that come with leaping headfirst into the promise of AI without considering its drawbacks. “We’re very mindful about ethical and responsible use of data,” says Johnston. “Responsible AI should be a priority for organisations globally.”

Responsibly transform your business with AI at afiniti.com