The future of AI adoption and responsible innovation in financial services

The rapid advancement of AI creates significant opportunities and pressing security challenges for the financial services industry. A panel of eight industry experts discussed the importance of adopting AI with a focus on responsibility and innovation.

Financial institutions need to embrace AI solutions to stay competitive, but there are real security concerns in this traditionally cautious sector. Striking the right balance between innovation and protecting companies and customers will future-proof the financial sector.

A 2024 Bank of England/Financial Conduct Authority survey of AI adoption among UK financial services firms found that 75% of firms are using AI solutions, up from 53% in 2022, with the main use cases being process optimisation, customer support, fraud detection, transparency and risk mitigation.

To explore the evolving role of AI in the financial services market, Raconteur and CreateFuture brought together a panel of eight experts. The panel comprised:

  • Jeff Watkins, chief technology officer, CreateFuture
  • Baydr Yadallee, director of client services, CreateFuture
  • James Farrow, chief technology officer, Railsr
  • Aman Thind, chief technology officer, State Street
  • Guillaume Merindol, senior engineering director, Checkout
  • Matt Roberts, head of data science, ClearBank
  • Dan Whitehead, chief counsel global regulatory, Hogan Lovells
  • Jordan Avnaim, chief information security officer, Entrust

Security and regulatory challenges for financial services

The panellists cited deepfakes, identity fraud, AI-powered cyber-attacks, lack of explainability, data breaches and third-party risks along value chains as major security concerns that must be addressed for responsible AI adoption and innovation in the financial services sector. Malicious actors, from so-called lone wolves up to hostile states, may exploit weaknesses in systems and supply chains, as well as target individuals, to compromise financial institutions.

Jordan Avnaim, chief information security officer at software company Entrust, says that AI allows “threat actors to create deepfakes at alarming rates”, which is a worry for enterprises and end users alike. Entrust’s Onfido Identity Fraud Report recorded relatively low numbers of deepfakes three years ago, but Avnaim says that this year they’re seeing one deepfake attempt every five minutes.

Cyber security risks can increase when businesses use third-party applications and services, “particularly those developed by smaller organisations, such as start-ups”, says Dan Whitehead, chief counsel global regulatory at law firm Hogan Lovells. His firm is kept busy by clients reporting cyber security incidents, and he cautions businesses to be wary of AI tools that “may be poorly tested or exhibit zero-day vulnerabilities.”

James Farrow, chief technology officer at software firm Railsr, gives the example of machine learning and data science platform Hugging Face, which has hosted models that could attack user environments, potentially resulting in data disclosure: “A lack of knowledge of how data is used by public models is probably the biggest worry.”

For financial services operators, regulatory compliance can compound these challenges, even if the regulations themselves are aimed at protecting the money, data and privacy of businesses and customers alike.

But as Jeff Watkins, chief technology officer at software company CreateFuture, explains, reputable companies and friendly nation states should not slow down AI research, or grow complacent once regulatory compliance is achieved, because malicious actors don’t care about regulations. They will continue their own research into harmful ways to use AI, so it is up to reputable actors to set a good example.

“AI could be used by unfriendly nations to destabilise economies and start to manipulate markets – and not just nations – somebody could do a really rigorous huge volume attack that looks very credible, as if it has come from a nation, to manipulate a particular stock, for example,” says Watkins.

As such, it is imperative for the financial services industry to research, develop and introduce AI solutions, and to protect all stakeholders, in ways that are ethical, responsible and innovative.

Adopting AI constructively and responsibly

Beyond major threats such as international cyber-attacks, Matt Roberts, head of data science at banking and payment infrastructure company ClearBank, urges financial services businesses not to neglect “the most basic foundations” of setting up and using AI solutions. He cautions that even simple tasks, such as using AI for translation and information interpretation, can carry significant risk, citing recent controversies over AI-generated news stories that lacked accuracy, nuance and sensitivity.

The panellists agreed that properly training employees in responsible, competent AI use was vital. Roberts says that the “cultural side” of getting people on board with AI adoption is as important as the technical training. 

“If they’re not engaged with the idea of AI becoming part of their jobs, they’re not going to be involved in the technical training,” says Roberts. “When rolling out the technical training, we’ll talk about how this is going to affect your job and how this is part of the company culture.”

Guillaume Merindol, senior engineering director at digital payment solutions provider Checkout, is upbeat about the contributions of so-called AI natives: “There are a lot of questions about how leaders can help others understand AI, but it has to come from the colleagues, especially the young people who are going to come in and know how it works better than everyone.”

Baydr Yadallee, director of client services at CreateFuture, says that in-house training and education, particularly on data privacy and security in AI applications, is essential, especially when working with clients. He says that CreateFuture “essentially tested our AI on ourselves, with the view of taking those learnings out to clients.”

EU regulations, particularly the AI Act, could improve AI training beyond the bloc, according to Whitehead, who says the EU is “ahead of the curve in some respects”, with its requirement for employee training to be in place when deploying or developing AI systems. This, in turn, sets a standard that businesses trading with the EU may need to meet to maintain those commercial relationships.

Choosing partners carefully is another important step in mitigating risks and concerns. As with employee training, it is important to focus on both technical aptitude and a culture of ethics and responsibility when engaging external operators and consultants.

“You need to understand the lack of explainability and predictability of these models and ensure they are used for the correct use case with appropriate oversight,” says Aman Thind, chief technology officer at State Street. “Our brain is also unexplainable, but we operate within the boundaries of laws and ethics to be responsible human beings. We need an encoding of the same principles to create responsible AI, which will enable us to use it much more widely.”

Partner choice also has customer service implications when third-party AI-powered solutions are used to onboard new customers, according to Avnaim: “Our data shows that one in five customers will drop out and go somewhere else if the onboarding process is cumbersome or slow, so choosing a good partner who has this AI technology is the way to go here.”

EU regulations such as GDPR and the Digital Operational Resilience Act (DORA) have given financial services businesses further motivation to ensure AI is developed and used responsibly. Whitehead says that while regulations can have “a significant impact on operational resilience for financial services organisations … the focus of the financial authorities is on cyber security and the importance of protecting customer data.”

Looking ahead to an innovative AI future

The panellists are optimistic that AI solutions can continue to transform the financial services sector, provided operators are proactive and develop positive use cases for the ever-evolving technologies. Farrow says that businesses need plans in place to use AI to scale, to keep up with the pace of change and to achieve the best results from the technology.

Echoing Farrow’s comments, Roberts says EU and UK operators need to focus on this potential for growth. The US is the world’s biggest tech investor, with the EU and UK “a very distant third or fourth”, but he advocates finding “a reasonable middle point that feels safe and ethical” with a commitment to “preparedness when it comes to regulation.”

Whitehead adds that regulations do not have to stifle innovation as long as they are “sufficiently pro-innovation that they’re not seen to be entirely restricting organisations’ ability to build new AI systems to innovate and progress.”

While financial services is a heavily regulated industry, which lends itself to caution overall, Yadallee says the future is bright, even if we cannot tell exactly what it holds: “I look forward to the possibilities – we won’t even know what some of the AI use cases for banking will be in five years’ time.”

“With innovation, it’s like a logistic curve – you never know where you are on it,” concludes Watkins. “Are we at the gramophone level? Are we at the iPod level? That’s exciting.”


To find out more about the transformational power of AI, visit CreateFuture