Whatever you think of the technology, “generative AI” is the phrase on everyone’s lips. Tech leaders are under pressure from chief executives to deploy the probabilistic, prompt-based tools at all costs. Security leaders are wondering what the implications might be.
For some, GenAI is an intriguing proposition that doesn’t yet have a firm use case. For others, such as Rafee Tarafdar, CTO of global technology services business Infosys, this primordial stage in the development of GenAI is exactly the right time to double down and go all in.
While market analysts begin to question the usefulness of generative AI – with even early cheerleaders such as Goldman Sachs expressing scepticism – Infosys has decided to upskill its entire global workforce of 340,000 people in the technology.
“As part of our own AI-first transformation, when we looked at what was required to have an AI-ready workforce we recognised there would be a spectrum of users and impact,” Tarafdar explains. “But when we launched our plan we said: everybody, irrespective of their level, has to become AI-aware.”
To get started, Infosys set out to understand the organisation’s skills landscape. It looked at all tasks and roles and determined what might be automated, what could change with AI and which new skills would be required.
A tailored approach to GenAI training
The company decided that some in its workforce would be consumers of GenAI: think a sales rep using GenAI to research a client or a developer who wants to write code faster. This cohort would need to understand how to make the most effective use of the technology – to create useful prompts and incorporate GenAI into their workflows with a critical eye.
Others would be creating with AI, and their training would need to focus on how to build generative AI products, either within Infosys or for its clients. And the rest might fall somewhere along the spectrum between the two.
With that in mind the company decided to take a ‘three-tiered’ approach to its AI transformation. The first stage would be to get everyone ‘AI-aware’ – so all employees would be familiar with the basics. For the second stage, ‘builders’ were trained to create products using GenAI. These were employees who needed to understand how to work with AI models or APIs, the type of staff tasked with creating, for example, AI assistants for wealth advisors or AI-enabled customer service bots.
Then, stage three would focus on ‘masters’. These were employees tasked with having a much deeper understanding of generative AI. These workers might specialise in safety – protecting against prompt injections or prompt hijacking – or be deep subject matter experts building training models and scrutinising large sets of data for usefulness and quality.
Clearly, a one-size-fits-all approach would be ineffective. To get around this challenge, Infosys used its internal training platform, Lex, to create 66 courses on generative AI mapped to each of those personas. Some courses were designed to help staff become AI-aware, others were tailored to builders and still more for masters.
The training platform combines different approaches to learning. These include the Socratic method, which prompts users to come to conclusions or answers on their own, as well as simulations and adaptive learning, which tailors education to the specific requirements of individuals. Ongoing hands-on workshops and training sessions are also available for leaders, employees and clients.
“84% of our employees – that’s 270,000 people – are now AI-aware,” says Tarafdar. “We have a large number that are builders and masters too. Anybody can use this platform any time, and that’s how we’ve been rolling out this change across the company. We’re midway through right now; AI-aware is largely done, but there’s more work to do for the builders and masters.”
How to keep GenAI in check
Infosys also had to ensure that its staff, being trained on AI, would learn how to avoid coding biases into its applications. Recent legislation such as the EU’s AI Act, which declares “discriminatory impacts and unfair biases” in the technology to be unlawful, makes compliance an important regulatory matter.
To avoid these problems, Infosys wove its own responsible AI framework into its training programme. This covers explainability – so users of AI understand what occurs ‘under the hood’, the kinds of data being used and to what end – as well as ethical and security considerations.
When Infosys began its AI transformation, it established an internal ‘Centre of Excellence’ to promote the safe and responsible use of AI. Next, it brought in an external auditor to evaluate its responsible AI processes and then applied for certification against the ISO 42001 standard – a commitment to establishing, implementing, maintaining and continuously improving an AI management system.
How Infosys tracks its AI progress
To keep track of the programme’s success, the company collects metrics around daily average users of its AI platforms and the acceptance rate of code created with generative AI.
But it also encourages employees to flag issues with AI so the trainees become the trainers. For example, if an employee notices a poorly automated translation or transcription, they can dispute the offending portion and correct it themselves, helping to teach and fine-tune the AI model that spat it out.
“Where there are more fundamental issues, engineers look at feedback or disputes,” says Tarafdar. “All of this happens digitally so it becomes a process where they improve the dataset.”
Employees might baulk at such a broad AI programme becoming so integral to a company’s daily operations. The elephant in the room is that genuinely efficient automation has, throughout history, put jobs at risk, be that the looms of the industrial revolution or self-service checkouts at supermarkets.
It may be for those reasons that Infosys CEO Salil Parekh recently denied that any cuts were on the cards due to GenAI, although the statement that the technology is only here to help people be more productive may prove difficult to swallow for some. With generative AI especially, many of the businesses that have been its loudest advocates have also blamed the technology for recent cuts.
But, in spite of the scepticism, Tarafdar is confident that GenAI is here to stay.
“In my view, the organisations that have taken a strategic approach and built the right foundation (using the right platform, the right data, being responsible by design and having the right use cases) will deliver value,” he says. “I don’t think there’s much of an issue for people who have done those things. Where they’ve just gone with the hype, then there is an issue.”