As firms rush to tap AI, a governance gap emerges

A gap has emerged between organisations’ speedy implementation of AI and their ability to address the special governance concerns posed by the technology


Many businesses have embraced AI, keen to exploit the potential of the powerful and affordable tools. But in the mad scramble for the technology, governance has become an afterthought. 

It’s easy to see the attraction of the pervasive, rapidly developing plug-and-play tools, powered by customer data and intellectual property. However, a gap has emerged between organisations’ speedy implementation of AI and their ability to address the special governance concerns that arise.

AI exposes businesses to unique risks, demanding new levels of scrutiny and due diligence. Not only can it expose private information and infringe IP rights; it also raises challenges around bias and ethics, cybersecurity for AI systems, and corporate governance practices at external AI vendors.

It doesn’t help that there’s no universal blueprint for AI governance. The EU AI Act mainly tempers the riskier forms of the technology, while a fragmented set of regulations globally leaves businesses facing many questions about putting the right guardrails in place. It’s no wonder that the AI governance gap is one of the top risks threatening business growth in 2024, according to KPMG.


Antonis Patrikios, privacy, cyber and AI partner at global law firm Dentons, describes the AI governance challenge as “GDPR on steroids”. But while GDPR – the EU’s General Data Protection Regulation – primarily concerns the chief data officer and IT departments, AI governance is an ecosystem challenge, involving teams across procurement, legal and information security. Many businesses, though, aren’t taking a joined-up approach.

Because the risk landscape is so varied, the chances are high that things will fall through the cracks, says Steve Wright, CEO of IT consultancy Privacy Culture.

“Many departments and teams still work in silos. IT teams often don’t work hand in glove with the person involved with AI governance,” he explains. 

Although there are some resource-rich organisations that have been particularly proactive about AI governance, most firms will have to settle for a wait-and-see approach, says Wright. “While GDPR, for instance, had an end date for compliance, this is not the case with AI so far.”

A proactive approach

At this point in the evolution of AI, companies must take the initiative themselves, rather than rely on global governance structures for safeguards. Closing the AI governance gap means creating a framework around two elements: one external and one internal.

The external element involves scrutinising third-party providers. It is essential for businesses to ask the right questions about AI accountability at the outset of any contract with vendors. 

Corporations increasingly want to utilise private instances of a large language model (LLM) in the cloud. They want to know where that cloud infrastructure is located and to use retrieval-augmented generation (RAG) systems – where company data sits outside the training sources – so businesses don’t share vast tranches of raw data with the LLM itself. Moreover, they want a human in the loop for quality assurance.
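To make that pattern concrete, the following is a minimal illustrative sketch in Python rather than any particular vendor's API: internal documents stay in a company-controlled store, a toy keyword retriever selects a few relevant snippets, and only those snippets are sent to a privately hosted model, with a human reviewing the output. The call_private_llm function and the keyword retriever are hypothetical placeholders for illustration only.

from typing import List

# Company documents remain in an internal, company-controlled store.
INTERNAL_DOCUMENTS = [
    "Refund policy: customers may return goods within 30 days of purchase.",
    "Data retention: customer records are deleted after seven years.",
    "Support hours: the helpdesk operates 09:00-17:00 GMT on weekdays.",
]

def retrieve(query: str, documents: List[str], top_k: int = 2) -> List[str]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    query_terms = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )[:top_k]

def call_private_llm(prompt: str) -> str:
    """Hypothetical placeholder for a request to a privately hosted LLM endpoint."""
    raise NotImplementedError("Point this at your organisation's model endpoint.")

def answer(query: str) -> str:
    # Only the few retrieved snippets leave the internal store, never the full
    # corpus, and nothing here is fed back into the model's training data.
    context = "\n".join(retrieve(query, INTERNAL_DOCUMENTS))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    draft = call_private_llm(prompt)
    # In practice a human reviewer would sign off on the draft before it is used
    # (the "human in the loop" for quality assurance described above).
    return draft

The point of the design is that the externally hosted model only ever sees the handful of snippets needed to answer a given question, never the underlying corpus of customer data or intellectual property.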

As for the internal component, firms should focus on ensuring top-notch data management systems, since AI data input is one of the most significant factors that businesses can control. Strong internal AI governance also means considering privacy by design, mapping AI systems in use and implementing robust ethical guidelines.

A proactive approach to AI governance is vital. “The core challenge for businesses is that AI policies, protocols and contracts can quickly become outdated as technology, regulations and market standards rapidly evolve,” notes Alexander Amato-Cravero, a director of emerging technology at law firm Herbert Smith Freehills.

A governance gold standard 

With so many moving parts in AI governance, how can organisations gauge success? There are a number of frameworks being developed around the world. 

Businesses can expect more from the European Commission, for instance. When the EU AI Act was formulated, EU technocrats had a future conformity assessment or CE mark in mind, similar to a BSI kitemark for AI. 

Wright thinks a European roadmap, which organisations can test against, is likely coming in the near term. The question, he says, is whether the UK should follow suit. 

“The concept of a conformity assessment that will rubber-stamp AI before it’s released to the mass market, like a physical product approval for consumer goods, is a good concept,” he adds. 

An AI kitemark would help businesses bridge the internal AI governance gap, because many organisations are struggling to work out the bare minimum of internal resources needed to use the technology responsibly.

The UK’s Information Commissioner’s Office offers an AI risk assessment tool to help with this process. Many businesses also use the US National Institute of Standards and Technology’s AI Risk Management Framework and/or the OECD’s responsible AI governance framework.

Time for a chief AI officer? 

Patrikios says one of the biggest issues with filling the AI governance gap is that there aren’t enough trained people for technical roles in data and security, among others.

“The talent shortage right now is similar to the lack of data protection officers that occurred around 2016, two years before the GDPR came into force,” he explains, adding that some organisations, such as the International Association of Privacy Professionals, are training people on AI governance. Patrikios is one of the IAPP’s AI trainers.

There are also calls for a C-suite role for AI specialists. Chief AI officers could act as the authoritative figure to champion strong governance, secure budgets and deal with internal and external accountability. Their remit would cover the highly technical aspects of AI, as well as the legal components, including due diligence and ethics. 

Patrikios says many companies have started extending the remit of existing roles such as chief privacy officers to cover AI governance.

There are historical parallels for governance gaps that get filled over time. Whether it’s the advent of the internet, cloud computing or data protection regulation, the law and effective safeguards eventually catch up with technological advances.

But in each of these cases, standards and guidelines were not dictated by big tech or governments alone. They emerged as the result of industry collaboration involving public and private sector actors. Filling the AI governance gap will require a similar collaborative effort, Patrikios urges. 

Thankfully, guidelines on responsible AI are not being handed down from big tech companies alone. There are many industry forums weighing in on the issue and regulators around the world are starting to devise formal frameworks.

“What we must be doing is upskilling and using the power of the network when it comes to filling the AI governance gap. We need to be talking to our peers and our partners about this issue. It’s essential to know what market practice looks like,” concludes Patrikios.