
The impact artificial intelligence (AI) is having on financial services is nothing short of transformational.
From fraud detection and risk management to advanced automation, AI has emerged as the driving force of technological innovation, reshaping the financial services landscape.
But while much has been made about the benefits of AI, it is not without its challenges. According to research by KPMG, 77% of business leaders admit that the uncertain and evolving regulatory environment significantly impacts their AI investment decisions.
“AI has become a game-changer for businesses, enabling increased personal productivity and the opportunity to innovate products and services to drive business growth,” says Stuart Munton, CIO at AND Digital.
“However, one of the biggest challenges organisations face is understanding how to strike the balance between consumer protection, oversight and innovation.”
As is often the case, regulation has lagged innovation in the AI market, but the tide is turning. In 2024, the EU AI Act made waves as the first-ever comprehensive legal framework for AI, ushering in a new era of AI governance.
While it is unlikely that the UK will introduce exactly the same prescriptive rules, one thing is certain: financial services organisations will need rigorous governance frameworks and a deep understanding of the strengths and weaknesses of AI if they are to succeed.
A holistic approach
As traditional banks jostle with challenger banks for market share, they face increasing pressure to create more personalised, tailored customer experiences.
However, they now find themselves at a crossroads, balancing the need for data privacy with the demand for personalisation.
Business leaders often go one of two ways, says Munton. Some choose to ‘lock down’ their systems to protect customer data, restricting their ability to innovate in the process, while others embark on ambitious large-scale projects that often add undue cost and complexity.
For Munton, the holy grail lies in a holistic approach, creating a strategy that encompasses high quality data, the supporting infrastructure and the right combination of tech skills, frameworks and governance to maximise AI’s potential safely.
As a first step, businesses need to ensure digital transformation aligns with their business goals.
“A deep understanding of where the business sits on the digital spectrum so they have a baseline from which to work and clarity around what they want to achieve is paramount,” explains Arjun Mahajan, chief for client partnerships at AND Digital.
“Many organisations initiate projects without a clear vision or understanding of what digital transformation entails, only to run into problems further down the line and slow their progress. Effective strategies are those that are designed to help achieve the wider business objectives.”
But strategy is just one part of the equation; even the most sophisticated roadmap can fail if businesses cannot ensure data quality. An AI model is only as good as the data it uses, says Munton.
“High quality data is crucial as it allows AI models to learn accurately and provide more reliable insights and recommendations, with reduced negative bias. This helps to avoid unfair treatment of specific individuals or demographics, which fosters trust and confidence in customers,” he explains.
In contrast, poor quality data can lead to fragmentation and discrepancies, making it more challenging to trace and understand the decision-making process and ultimately stifling innovation.
To address these challenges, business leaders will need to establish robust safety and ethics frameworks, including clearly defined roles, dedicated oversight and regular monitoring of data throughout its lifecycle.
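As a simple illustration of what “regular monitoring of data throughout its lifecycle” could look like in practice, the sketch below runs a handful of automated quality checks (completeness, freshness and demographic balance) before customer data reaches a model. It is a minimal, hypothetical example: the field names, thresholds and DataQualityReport structure are assumptions made for illustration, not a description of AND Digital’s approach.

```python
# Minimal sketch of automated data-quality checks that might sit in a
# governance pipeline before customer data is used to train or score a model.
# All field names and thresholds below are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class DataQualityReport:
    issues: list = field(default_factory=list)

    @property
    def passed(self) -> bool:
        return not self.issues


def check_customer_records(records: list[dict]) -> DataQualityReport:
    report = DataQualityReport()

    # Completeness: flag records missing fields the model depends on.
    required = {"customer_id", "age_band", "region", "last_updated"}
    incomplete = [r for r in records if not required.issubset(r)]
    if incomplete:
        report.issues.append(f"{len(incomplete)} record(s) missing required fields")

    # Freshness: stale records quietly degrade model accuracy over time.
    cutoff = datetime.now() - timedelta(days=90)
    stale = [r for r in records if r.get("last_updated", cutoff) < cutoff]
    if stale:
        report.issues.append(f"{len(stale)} record(s) not updated in 90 days")

    # Representation: a crude balance check across demographic groups,
    # to surface the kind of skew that can produce biased outcomes.
    counts: dict[str, int] = {}
    for r in records:
        band = r.get("age_band", "unknown")
        counts[band] = counts.get(band, 0) + 1
    if counts and min(counts.values()) < 0.05 * sum(counts.values()):
        report.issues.append(f"Under-represented groups detected: {counts}")

    return report


# Example usage with a couple of illustrative records
records = [
    {"customer_id": 1, "age_band": "25-34", "region": "UK", "last_updated": datetime.now()},
    {"customer_id": 2, "age_band": "25-34", "region": "UK"},  # missing last_updated
]
report = check_customer_records(records)
print("Passed" if report.passed else report.issues)
```

In a real governance framework, checks like these would typically be scheduled, their results logged, and any failures routed to a named owner, reflecting the clearly defined roles and dedicated oversight described above.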
“Cybersecurity has always been an important factor, but developments in AI mean the security we had in the past will become vulnerable in the future,” says Munton. “Establishing frameworks for safety and privacy will minimise misuse and enable organisations to pursue innovation sustainably.”
However, fostering trust in AI should not be the sole responsibility of business leaders; cross-collaboration is vital, within the organisation, across the financial services sector and with regulators.
“Working together to share knowledge and ideas helps organisations stay ahead of emerging challenges and ultimately leads to better, more customer-centric solutions,” says Mahajan.
“By the same token, there should be ongoing collaboration with regulators and policymakers. Building this level of transparency and trust will be vital in gaining a competitive advantage.”
Equally important is identifying value-driven use cases, which enable business leaders to cut through the buzz, understand how AI can solve their specific problems and ensure that investments are made strategically.
But, as the saying goes, the proof is in the pudding and the ability to measure success will not only be beneficial, but imperative.
Establishing achievable goals, and building in tangible measures such as key performance indicators tracked against these benchmarks, will allow businesses to address challenges and ensure they remain aligned with business objectives and regulatory requirements.
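To make measuring against benchmarks concrete, here is a minimal, hypothetical sketch of comparing an AI initiative’s KPIs with agreed targets; the metric names and target values are invented for illustration and are not drawn from the article.

```python
# Hypothetical KPI tracking for an AI initiative: compare measured values
# against agreed targets and flag anything off-track for review.
targets = {
    "fraud_detection_precision": 0.92,   # benchmark values here are
    "false_positive_rate": 0.05,         # illustrative assumptions only
    "avg_case_handling_minutes": 12.0,
}

measured = {
    "fraud_detection_precision": 0.89,
    "false_positive_rate": 0.07,
    "avg_case_handling_minutes": 10.5,
}

# For these two metrics, lower is better; for the rest, higher is better.
lower_is_better = {"false_positive_rate", "avg_case_handling_minutes"}

for metric, target in targets.items():
    value = measured[metric]
    on_track = value <= target if metric in lower_is_better else value >= target
    status = "on track" if on_track else "needs review"
    print(f"{metric}: {value} (target {target}) -> {status}")
```

In practice these figures would come from monitoring systems rather than hard-coded values, and off-track metrics would trigger the kind of review against business objectives and regulatory requirements described above.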
Investing in talent and culture
The next five to 10 years will see a significant shift in the way people interface with technology, but with that is likely to come a stark skills gap.
A report by Pluralsight found 90% of executives don’t completely understand their team’s AI skills. And while 81% of IT professionals feel confident they can integrate AI into their roles, only 12% have significant experience working with it.
Against this backdrop, establishing a culture that encourages new ways of working will be vital.
The secret to success is a combination of strategic hiring that brings together a blend of technologists, data engineers and finance specialists, and targeted training programmes that allow employees to upskill.
“In a world where AI is having a far-reaching and profound impact, having a workforce that understands and embraces this shift will be crucial,” says Munton.
“By investing in training and fostering a culture of shared learning and upskilling, organisations can ensure that AI is seen as a benefit rather than a threat. This is crucial to help employees develop the right skills to seamlessly integrate AI into their workflows.”
For Mahajan, mindset and culture are as critical as the technology itself.
“Successful AI will only work if organisations can strike the right balance of top-down behavioural change and bottom-up support. A shared vision and awareness of the benefits of AI right across the organisation, coupled with a culture of continuous learning and development will empower employees, encourage experimentation and accelerate innovation,” he adds.
Ultimately, the successful integration of AI into financial services will depend on a company’s ability to adapt both its technology and its culture to embrace continuous change.
By prioritising governance, ethical standards and workforce readiness, financial institutions can not only navigate the complexities of AI but also unlock its full potential for long-term growth and innovation.
To find out more please visit and.digital
