AI in the enterprise: navigating the innovation-responsibility tightrope

AI is transforming the enterprise, but at what cost? Unsanctioned "shadow AI" use is skyrocketing, driving security risks and spiralling expenses. Balancing innovation with governance is critical to avoid hidden costs and mitigate risk.


Tech leaders have long struggled with employees using their personal devices or purchasing IT for their own use at a department level – often without the IT department's authorisation, or even its knowledge.

Incidents of shadow IT surged with the widespread adoption of Software as a Service (SaaS) in the enterprise, where at the click of a button employees can download an application that helps them do their job.

In reality, those side-stepping IT to make unsanctioned tech purchases can trigger a host of security, management and cost concerns for the business. With the explosion of artificial intelligence (AI) in the enterprise, this issue couldn’t be more pressing. 

A huge percentage of AI use is currently unsanctioned by IT

Since OpenAI’s ChatGPT launched in November 2022, the use of generative AI (GenAI) in the enterprise has grown exponentially. Indeed, GenAI is the most commonly deployed AI solution in organisations, according to Gartner, with OpenAI, Google and Microsoft accounting for 96% of AI usage at work.

AI’s potential for innovation is undeniable. With the right use cases, AI can bolster productivity by up to 70%, optimise costs, drive growth and provide a clear competitive advantage in a disruptive business landscape.

At the same time, AI’s rapid evolution has ramped up the pressure on organisations to accept Bring Your Own AI (BYOAI).

BYOAI refers to the growing trend of employees bringing their own AI tools to work and using them for day-to-day tasks, whether or not the company has sanctioned them.

OpenAI launched an enterprise version of ChatGPT in August 2023, and the firm claims that 80% of Fortune 500 companies have teams using corporate accounts. The reality, though, is quite different: a large share of AI use remains unsanctioned by IT, with no governance model in place.

One study by Microsoft reports that 78% of AI users are bringing their own AI tools to work, a figure that rises to 80% in small and medium-sized companies. More striking still, 74% of ChatGPT accounts used in the workplace are non-corporate ones, and so lack the security and privacy controls of ChatGPT Enterprise. The percentage is even higher for Gemini (94%) and Bard (96%).

“80% of our customers have introduced ChatGPT directly to their customers. Those users have automatically just deployed it, and they’re not being managed by the business. That’s quite a scary thought,” explains Leigh Martin, product director at Flexera.

AI pricing models remain a mystery

Many business leaders don’t yet fully understand the consequences of non-sanctioned AI use by employees. This is the crux of the problem. 

Aside from the security risks – such as putting data into GenAI that shouldn’t be exposed outside the company – there are major cost implications. 

That’s because, as in the early days of cloud, AI pricing models aren’t transparent. As such, it’s easy for users to treat prompts as an unlimited resource and not realise how quickly they are burning through their GenAI credits.

“Vendors are saying, very smartly, ‘Don’t worry if you go over your credits for now – it’s fine, we’ll give you some more’, before they introduce a secondary tier bill. It’s quite an explosive issue,” says Martin. 

Because these credits often don’t roll over, users develop a fear of missing out and feel compelled to consume them all – a trap that only encourages further consumption.

“This open licence model means you can just consume infinitely. It starts costing a lot of money,” he says.

Another challenge is that organisations don’t know which licensing model, or which pricing tier, best suits them. “It’s like comparing apples to oranges when it comes to AI pricing. Different platforms offer similar features but at vastly different costs. It’s a potential minefield for users,” says Martin.

“The problem is that organisations are often sold a bundle of software that the end user doesn’t need, just so they can get their hands on AI. It’s a hook sale. People are so excited about AI right now, there’s a sense of urgency that they must have it, without thinking about the bigger picture.” 
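
To make the “apples to oranges” problem concrete, here is a minimal Python sketch of how the same workload can cost very different amounts under two common pricing models. Every price and usage figure below is an illustrative assumption, not any vendor’s actual rate.

# Illustrative comparison of one workload under two pricing models.
# All prices and usage figures are hypothetical assumptions.

USERS = 250
MONTHLY_PROMPTS_PER_USER = 400    # assumed usage per person
AVG_TOKENS_PER_PROMPT = 1_500     # assumed prompt + response size

# Model A: flat per-seat licence (assumed price per user per month)
SEAT_PRICE = 30.00
seat_cost = USERS * SEAT_PRICE

# Model B: pay-as-you-go per 1,000 tokens (assumed price)
PRICE_PER_1K_TOKENS = 0.01
tokens = USERS * MONTHLY_PROMPTS_PER_USER * AVG_TOKENS_PER_PROMPT
usage_cost = tokens / 1_000 * PRICE_PER_1K_TOKENS

print(f"Per-seat model:  ${seat_cost:,.2f}/month")
print(f"Per-token model: ${usage_cost:,.2f}/month")
# The cheaper option flips as usage grows, which is why comparisons
# need real consumption data rather than list prices.

At these assumed volumes the usage-based model is far cheaper, but double the prompt volume and average response length and the picture changes – the point is that only measured consumption, not the price sheet, answers the question.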

Understanding licensing restrictions

It’s not just a question of cost. Many companies are failing to maximise the potential of AI simply because they lack understanding of their own licensing restrictions. Often, organisations aren’t aware that certain applications aren’t licensed for commercial use and that they need to purchase a different edition entirely.

Similarly, if you ask a GenAI application to generate code, you need to understand who owns the code, and how you are allowed to use it.

“AI can be brilliant for prototypes or mock-ups. But you can’t run a marketing campaign on an application that’s not for commercial use. So, you do need to have guidelines in place to make sure that if you’re providing a service to your end users, they can use the AI,” says Martin.
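
As a minimal sketch of what such a guideline might look like in practice, the Python snippet below encodes a hypothetical allow/deny policy with a commercial-use flag. The tool names and rules are placeholders for illustration, not recommendations.

# Hypothetical starter policy: which AI tools are sanctioned, and
# whether their licences permit commercial use. Names are made up.

POLICY = {
    "chatgpt-enterprise": {"allowed": True,  "commercial_use": True},
    "personal-chatgpt":   {"allowed": False, "commercial_use": False},
    "image-gen-free":     {"allowed": True,  "commercial_use": False},
}

def check(tool: str, commercial: bool) -> str:
    """Return a verdict for using a tool, optionally for commercial work."""
    rule = POLICY.get(tool)
    if rule is None or not rule["allowed"]:
        return "blocked: not a sanctioned tool"
    if commercial and not rule["commercial_use"]:
        return "blocked: licence does not permit commercial use"
    return "allowed"

# A prototype is fine; a marketing campaign on the same tool is not.
print(check("image-gen-free", commercial=False))  # allowed
print(check("image-gen-free", commercial=True))   # blocked: licence ...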

The risk of low visibility into running costs

Alongside licensing costs, organisations must consider other charges that don’t appear on the initial price tag, such as investments in infrastructure and implementation to support AI, alongside running costs like electricity and data storage. You may also need to consider regular system maintenance as well as model training and updates.

As such, organisations need visibility into hidden AI costs and must manage consumption in future contracts, balancing the cost of each transaction against rising usage. They should assess the cost of delivering AI as a service: it often moves beyond SaaS management into FinOps territory, so understanding the total cost of delivering the service is essential.

“You need to know exactly what your data storage is. Do you want to go over that? How much is that going to affect the price for delivering that service? That’s where it starts to get really complicated for customers,” says Martin.

“You need to track and measure the consumption of both the requests and the output you deliver to ensure that you’re not over or underpaying for that service. That’s the emerging challenge with AI that is constantly overlooked.”
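
As a rough illustration of that tracking, the Python sketch below meters prompt and completion tokens per request and rolls the cost up by team. The prices, record format and team names are assumptions; many GenAI APIs do return token counts with each response, which is what makes this kind of metering feasible.

# Minimal consumption meter: record tokens in and out per request,
# then attribute assumed costs back to the teams driving them.

from dataclasses import dataclass, field

@dataclass
class UsageMeter:
    price_in_per_1k: float    # assumed input-token price
    price_out_per_1k: float   # assumed output-token price
    records: list = field(default_factory=list)

    def record(self, team: str, tokens_in: int, tokens_out: int) -> None:
        cost = ((tokens_in / 1_000) * self.price_in_per_1k
                + (tokens_out / 1_000) * self.price_out_per_1k)
        self.records.append((team, tokens_in, tokens_out, cost))

    def cost_by_team(self) -> dict:
        totals: dict = {}
        for team, _in, _out, cost in self.records:
            totals[team] = totals.get(team, 0.0) + cost
        return totals

meter = UsageMeter(price_in_per_1k=0.01, price_out_per_1k=0.03)
meter.record("marketing", tokens_in=1_200, tokens_out=800)
meter.record("engineering", tokens_in=5_000, tokens_out=4_000)
print(meter.cost_by_team())  # spend attributed per team

Even a simple ledger like this answers the questions Martin raises: what is being consumed, by whom, and whether the organisation is over- or underpaying for the service.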

Societal costs of AI

Alongside the expense to organisations of licensing, running and maintaining an AI environment, it can be argued that AI has a societal cost too. 

Many businesses now have their sustainability performance tracked in light of increasing regulation – and Gartner predicts that by 2030, AI could consume up to 3.5% of the world’s electricity.


“AI is a double-edged sword,” says Martin. “It offers incredible potential, but there’s a real cost to be paid, both in terms of finances and societal implications.”

GenAI’s biggest contribution to increasing emissions will come from its routine use. While the energy to train models has been a primary focus for corporate users, the process of user prompting and model responses will have a greater impact. As a result, it’s crucial for the enterprise to integrate AI into its sustainability metrics.
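
A back-of-envelope calculation shows why routine use dominates. The Python sketch below turns prompting volume into an annual energy and emissions figure that could feed a sustainability metric; every number in it is a purely illustrative assumption, not a measured value.

# Back-of-envelope estimate of emissions from routine GenAI use.
# All figures below are illustrative assumptions.

WH_PER_PROMPT = 3.0             # assumed energy per prompt + response, Wh
PROMPTS_PER_USER_PER_DAY = 30   # assumed
USERS = 5_000
WORKING_DAYS = 250

annual_kwh = (WH_PER_PROMPT * PROMPTS_PER_USER_PER_DAY
              * USERS * WORKING_DAYS) / 1_000

GRID_KG_CO2_PER_KWH = 0.4       # assumed grid carbon intensity
annual_tonnes_co2 = annual_kwh * GRID_KG_CO2_PER_KWH / 1_000

print(f"{annual_kwh:,.0f} kWh/year, roughly "
      f"{annual_tonnes_co2:,.1f} t CO2e/year")
# Tracked over time, figures like these are what let the enterprise
# fold everyday AI use into its sustainability reporting.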

Elsewhere, there are ethical and legal considerations surrounding AI use. Algorithms can perpetuate the human biases embedded in the data they’re trained on. Addressing these biases requires investment in curating inclusive datasets, continually monitoring outputs and employing teams to oversee the AI’s operation.

Compliance is another potential cost. For example, the EU AI Act is designed to give businesses clear guidelines to support their ethical AI adoption journey. One aspect of the Act requires developers to show the workings of their models. Organisations therefore need to think about how they can ensure compliance with this – and other emerging regulations and standards.

Striking a balance between innovation and responsibility

With concerns mounting about the hidden costs surrounding AI use, business leaders might be tempted to lock AI use down within their organisation to curb the problem. But, as Martin points out, this would be short-sighted. “AI is not going anywhere. You can’t run and hide from AI,” he says. “You can’t restrict employee usage either, as that leads to shadow AI. You have to find a way to make AI work for your organisation.”

To successfully navigate the complexities of AI adoption, businesses must strike a balance between innovation and responsibility. That means taking into account any hidden costs – both to the business and to society as a whole.

But by gaining visibility into AI usage, implementing clear governance models and ensuring cost-effective, secure access to the right tools, organisations can harness AI’s transformative potential, while making sure its benefits far outweigh the costs.
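
One hedged, practical way to start gaining that visibility is to scan web proxy or egress logs for traffic to known GenAI endpoints. The Python sketch below assumes a simple CSV log with "user" and "domain" columns; the domain list is illustrative, and real deployments would maintain their own.

# Sketch: surface potential shadow AI from a CSV egress log.
# The log format and domain list are assumptions for illustration.

import csv

GENAI_DOMAINS = {
    "chat.openai.com", "api.openai.com",
    "gemini.google.com", "claude.ai",
}

def flag_shadow_ai(log_path: str) -> dict:
    """Count requests per user to known GenAI domains, given a CSV log
    assumed to have 'user' and 'domain' columns."""
    hits: dict = {}
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] in GENAI_DOMAINS:
                hits[row["user"]] = hits.get(row["user"], 0) + 1
    return hits

# Users who appear here but hold no sanctioned licence are a starting
# point for a governance conversation, not a disciplinary one.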

Tangible steps for AI and SaaS governance

When it comes to balancing innovation and responsibility, business leaders are walking a tightrope. “AI is already here. Don’t wait for a complex framework to be ready – start now with basic guidelines and build upon them as you go,” advises Martin. “Waiting could be a costly mistake.” So, how can leaders take action and drive innovation while taking accountability for AI in their organisation? And how can organisations ensure they are maximising the benefits of AI while mitigating associated risks?

For more information, please visit www.flexera.com