A little over a year since the release of OpenAI’s groundbreaking ChatGPT, organisations and their employees are harnessing the power of generative AI tools, with largely positive outcomes. Goldman Sachs forecasts that this new wave of generative AI could lift productivity growth by 1.5 percentage points over the next decade.
The change is being felt institutionally, with companies rapidly rewriting processes to include artificial intelligence. But it is also happening from the bottom up, as individual workers adopt these tools in their day-to-day jobs ever more broadly.
What once looked like the ultimate silver bullet for workplace productivity challenges has now emerged as the newest shadow IT concern for CISOs. Shadow IT is any device, software or application that sits outside the IT department’s control.
“Most practitioners are talking about us entering the AI age,” says Neil Thacker, chief information security officer EMEA at Netskope, an organisation that helps others protect their data and defend against cyber threats. Based on data from millions of enterprise users globally, Netskope found that generative AI app usage is growing rapidly, up 22.5% in just two months earlier this year.
Employees are increasingly unbridled in their use of these tools to improve their workflows. “This gives us lots of opportunity to leverage and harness new technology,” says Thacker. “But CISOs also have to consider the risks generative AI brings.”
Where data goes
IT leaders and CEOs, mindful of their business’s reputation and continuity, rightly harbour concerns about the data fed into generative AI tools. Organisations with 10,000 or more staff use an average of five AI-powered apps daily. ChatGPT leads the pack, with more than eight times as many daily active users as any other generative AI application, according to Netskope research.
With big names in the tech world, like Samsung, banning the use of ChatGPT by employees after sensitive data was accidentally leaked earlier this year, the million-dollar question emerges: how safe are company secrets in the hands of AI applications?
Netskope’s analysis reveals that the source code for proprietary apps and services is posted to ChatGPT more than any other type of sensitive data, at a rate of 158 incidents per 10,000 users per month. Caution is vital when using AI, and the rules need to be understood by everyone. “For the workforce, it’s all about performance and productivity,” says Thacker. “Workers may not be aware of the risks. They may have interpreted the app’s terms and conditions slightly differently to someone in the legal team, or, more likely, they didn’t read them at all.”
Some employees may be better informed about the dangers than their peers. Yet, after weighing up the potential pitfalls of getting caught against the productivity benefits, they may decide it’s a risk worth taking. So, how do you stop AI misuse before it becomes a problem? “It really comes down to education,” says Thacker.
Beyond educating staff on how generative AI tools operate, how they manage data, and why services offered free or at low cost have implications for the data they process, Thacker suggests introducing real-time, point-in-time education. Pop-up banners can be configured to warn an employee who is about to post sensitive data into an unapproved generative AI application. “That is a perfect time to educate somebody,” he says. “Then and there, you can explain the risks and why you have that oversight in place.”
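As a rough sketch of how such a point-in-time check might work, the snippet below uses a simple pattern-based detector. The patterns, app names and the coach_before_submit function are hypothetical illustrations, not Netskope’s implementation; real data-loss-prevention engines use far richer detection.

```python
import re

# Hypothetical patterns for illustration only.
SENSITIVE_PATTERNS = {
    "an AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "a private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "possible source code": re.compile(r"(?:\bdef |\bclass |#include |\bimport )"),
}

def coach_before_submit(text: str, app: str, approved_apps: set[str]) -> list[str]:
    """Return warning banners to show before `text` is posted to `app`."""
    warnings = []
    if app not in approved_apps:
        for label, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(text):
                warnings.append(
                    f"Detected {label}: posting this to {app} may breach "
                    "company policy. Consider an approved alternative."
                )
    return warnings

# Example: pasting code into an unapproved chatbot triggers a warning.
print(coach_before_submit("import os", "genai-chat.example", {"approved-ai.corp"}))
```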
The guidance from Netskope includes building a continuous inventory of which apps and services are being used by employees and for what purpose, and, fundamentally, what data is being used. In addition, organisations need to align with the many new AI risk frameworks that have cropped up in the past year or more.
Take a page from cloud
Personal cloud storage, messaging apps, collaboration tools: the bevy of shadow IT infiltrating workplaces is nothing new, and it is ever-growing.
Often, Thacker notes, decision-makers worry that instituting AI policies, education systems, and advice will take a lot of time, effort and resources – things that, in an increasingly competitive landscape, businesses don’t have.
Leaders recognise the risks: alongside source code, employees are putting regulated data, intellectual property and, in worrying cases, passwords and cryptographic keys into generative AI tools. But they also know the realities of running their organisation.
Instituting good working practices in the AI era doesn’t need to be onerous, says Thacker. Instead, tech teams can repurpose the strategies they developed to counter cloud threats, applying them to the safe and secure integration of AI. He adds that securing new technology without impeding its benefits is a challenge CISOs have been successfully overcoming for years.
Safeguarding may be simpler than business leaders think, then. “They need to apply the same controls to AI services as they did to cloud applications, using technology to automatically see which apps or services are being used inside their organisation and apply policy controls and advisory notes within the workflows,” says Thacker.
Forewarned is forearmed. Knowing which services workers are using means businesses “can build out inventory and ensure that they have monitoring in place for those services,” he says. But a snapshot of the tools in use today is of little value unless the inventory is regularly updated.
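As a minimal sketch of how that inventory might be kept current, the snippet below counts app usage from a web gateway log export and flags anything not yet sanctioned. The CSV format and the “app” field name are assumptions for illustration.

```python
import csv
from collections import Counter

def build_inventory(log_path: str) -> Counter:
    """Count events per app from a gateway log export (assumed CSV format)."""
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            usage[row["app"]] += 1  # the 'app' column is an assumption
    return usage

def flag_unsanctioned(usage: Counter, sanctioned: set[str]) -> list[str]:
    """Apps seen in the logs but absent from the sanctioned inventory."""
    return sorted(app for app in usage if app not in sanctioned)
```

Run on a schedule, a check like this keeps the inventory a living record rather than a one-off snapshot.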
Netskope has a database of 75,000 cloud-based apps, to each of which it assigns a risk score, enabling security teams to determine easily whether an app is safe to use. Using the same system, it has begun ranking hundreds of generative AI apps, each of which receives a risk score. The database will continue to expand as new products enter the market.
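To illustrate how such a score might drive policy, here is a sketch with invented scores and thresholds; none of these figures or app names are Netskope’s actual ratings.

```python
# Invented risk scores on a 0-100 scale (higher means riskier).
RISK_SCORES = {"approved-ai.corp": 12, "genai-chat.example": 68}

def policy_action(app: str, block_at: int = 80, coach_at: int = 40) -> str:
    """Map an app's risk score to allow, coach or block; unknown apps are blocked."""
    score = RISK_SCORES.get(app)
    if score is None or score >= block_at:
        return "block"
    if score >= coach_at:
        return "coach"  # allow, but show a real-time warning banner
    return "allow"

print(policy_action("genai-chat.example"))   # coach
print(policy_action("unknown-bot.example"))  # block
```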
As AI becomes more ubiquitous across the business world, immediate insight into the risks associated with each new chatbot or virtual assistant will give those securing the business peace of mind.
For more information, visit netskope.com