
Impact and use cases of AI in compliance for Financial Service firms

This article examines the challenges and opportunities of using AI in compliance for financial service firms. finalix’s Andrew Rufener and Daniel Knecht argue that AI can help firms manage regulatory requirements, risk, and efficiency by automating data analysis and providing insights. They stress the importance of trust and explainability in AI systems, pointing to the need for clear governance and knowledgeable compliance officers. Rufener and Knecht recommend starting with AI for low-risk tasks to build trust and experience before moving on to more sophisticated solutions. Besides reading this finalix article, make sure to tune in to the AI-generated podcast!

Challenges and possible benefits of sophisticated AI applications in the context of regulatory requirements

As in many industry sectors, Artificial Intelligence (AI) presents substantial opportunities for companies in financial services. Currently, these firms grapple with significant cost pressure and the constant imperative to innovate, generate new demand, and increase Assets under Management (AuM), and they must meet all these challenges while ensuring full compliance.

How does AI contribute to addressing these issues? What simplifications or enhancements can this advanced technology offer to such firms? Will the role of compliance and related functions remain the same as it is today?

In the discussion below, Andrew Rufener (SME for AI) and Daniel Knecht (SME for AML) shed some light on the above questions and provide guidance on how to address some of the most striking challenges.

What are some of the most pressing issues for financial service firms when looking at compliance?

Daniel:

“Compliance challenges are mostly either efficiency-driven or quality-oriented. For one, enhanced regulatory requirements are increasing the need for skilled personnel who can cope with the flood of alerts and the clarifications necessary to maintain client relationships. The number of clients and transactions, for instance, can dramatically push up the number of clarifications required by the first line of defence. On the other hand, quality is paramount. Irrespective of quantities, if only one bad case goes undetected, companies and their employees must fear severe punishment from regulators.

What remains is a constant reassessment of efficiency vs. risk. How much risk are companies willing to take, and what measures can be incorporated to reduce residual risk while still operating as efficiently as possible? To date, there is no single correct answer, and each firm has a different approach to this question”.

How could AI provide an answer to the above question of efficiency vs. risk?

Andrew:

“AI, in its different forms, can help to improve efficiency as well as support better-informed decision-making. Broadly speaking, there are the following possible applications:

  • Data Aggregation: AI can assist in efficiently consolidating and analysing vast amounts of structured (for example from a database) and unstructured (for example text or call recordings) data from various sources to gain valuable insights.
  • Information Generation: By analysing customer and market data, AI can generate precise and useful information necessary to support informed business decisions. This ranges from summarizing vast amounts of data to detecting patterns, and more.
  • Improved Risk-Based Approaches: AI and graph technologies can help better identify and assess risks by analysing historical data, recognizing patterns indicative of future risk, but also identifying key relationships between businesses and individuals that could affect risk analysis.

AI that handles large volumes of data and extracts meaningful insights can fundamentally transform financial services. Automating data aggregation and generating actionable insights can augment and free up human resources for more value-generating tasks, while improved risk-based approaches ensure more reliable and accurate assessments, possibly also by adding in third-party data to further enhance risk profiling. These tools can help drive efficiency, better manage risk, reduce exposure, and assist with value-creating insights”.
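As a simplified illustration of the relationship analysis Andrew mentions, the sketch below walks a graph of hypothetical ownership and transaction links to find entities sitting within a few hops of a flagged counterparty. The entity names and links are invented for the example; real systems would use dedicated graph databases and far richer link types.

```python
from collections import deque

# Hypothetical ownership/transaction links between entities (illustrative data only).
links = {
    "ClientA": ["ShellCo1"],
    "ShellCo1": ["ClientA", "ShellCo2"],
    "ShellCo2": ["ShellCo1", "SanctionedEntity"],
    "SanctionedEntity": ["ShellCo2"],
    "ClientB": [],
}

def entities_within(graph, start, max_hops):
    """Return entities reachable from `start` within `max_hops` links (BFS)."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue  # do not expand beyond the hop limit
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen[neighbour] = seen[node] + 1
                queue.append(neighbour)
    seen.pop(start)
    return seen  # maps entity -> distance in hops

# Which entities sit within two links of a sanctioned counterparty?
nearby = entities_within(links, "SanctionedEntity", max_hops=2)
```

Here `ShellCo2` and `ShellCo1` are surfaced as one and two hops away, while `ClientA`, three hops out, stays outside the chosen radius; in practice the hop distance could feed into the client's risk score.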

With new tools come new challenges. What must be ensured for AI to be effectively used in compliance?

Daniel:

“One of the key challenges with AI in compliance is trust. Trust in the results and outcomes of what the machine provides. Trust in what the user does and does not get to see. For instance, any algorithm applying AI to calculate risk scores and assess transactions, filtering out false positives and highlighting true positives, must enjoy a certain level of trust. Otherwise, compliance is back to manually checking each and every hit individually.

Machine learning algorithms get better over time; their models are trained with every action and iteration. But until the machine reaches such a level, the risk of false negatives remains. How should companies address this issue? How can they treat this residual risk without becoming the target of regulatory consequences?

In the same breath, the use of AI to filter and assess data against a set of risk factors raises new questions of responsibility and liability. In the old world, responsibility and liability could most often be assigned to the person performing certain tasks or the one in charge of controlling said person. With AI, who is to blame if something does go awry? Can compliance officers, RMs and other personnel be held accountable and personally liable, if the machine returns incomplete or false results? Financial service firms must find a balance between “accepted risk in the name of the company” and residual personal responsibility.

Quite likely, current roles, governance structures and internal policies are not fit for this purpose and must be revised not just with a compliance mindset, but with a deep understanding of the underlying technology, too”.

Andrew:

“When talking about AI in compliance, and especially in reference to trust, it is first and foremost key to be clear about the use case, the technology being applied, and to what extent humans remain “in the loop”. Machine Learning is reliable within its model parameters, while GenAI “black box” systems require more effort to ensure they remain within the parameters set.

To address these challenges effectively, firms must first establish a foundation of transparency and trust. This begins with the technology itself: compliance AI systems need to be explainable (which has been best practice for years), providing clarity on how risk scores are calculated, how decisions are made, and which data points are emphasized. By employing explainable AI, firms can help their compliance officers understand, trust, and validate the AI’s outputs, allowing for greater reliance on the technology while reducing the burden of manual checks.

To manage residual risk and the potential for false negatives, firms can employ a combination of human oversight and tiered response systems. High-risk cases can be flagged for human review while low-risk cases are monitored over time. This helps firms stay compliant with regulations without overwhelming staff and allows the AI to continue learning from these reviews, enhancing accuracy over time. Additionally, continuous model monitoring and feedback loops are crucial. Regular audits and validation checks of AI models help ensure that they meet regulatory standards, and that potential biases or errors are addressed promptly.
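The tiered response system described above can be sketched in a few lines. The thresholds, alert IDs, and scores here are purely illustrative assumptions; in a real deployment they would come from model validation and the firm's risk appetite.

```python
# Hypothetical thresholds; real values would come from model validation.
REVIEW_THRESHOLD = 0.8   # at or above: mandatory human review
MONITOR_THRESHOLD = 0.4  # between the two: queue for periodic monitoring

def route_alert(risk_score: float) -> str:
    """Route an AI-scored alert into a tiered response queue."""
    if risk_score >= REVIEW_THRESHOLD:
        return "human_review"   # high risk: an analyst must decide
    if risk_score >= MONITOR_THRESHOLD:
        return "monitor"        # medium risk: watch over time
    return "auto_close"         # low risk: close, but keep an audit trail

# Illustrative alerts keyed by transaction ID.
alerts = {"tx-1001": 0.92, "tx-1002": 0.55, "tx-1003": 0.12}
routed = {tx: route_alert(score) for tx, score in alerts.items()}
```

Human decisions on the `human_review` queue can then be fed back as labelled training data, which is exactly the feedback loop that lets accuracy improve over time.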

Addressing liability and responsibility is indeed a new frontier. It’s essential that firms define clear governance and accountability structures in AI-augmented compliance roles. Developing AI governance policies that address the aspects of responsibility, particularly in situations of error, can provide a structure for accountability while supporting innovation. Having said that, with new AI regulation becoming available, its implementation is currently still underway, and this is an evolving field.

Finally, for these systems to be successfully integrated into a compliance framework, the existing (AI and data) governance structures, policies, and roles need to adapt. Businesses should invest in upskilling their teams, ensuring that they not only understand compliance but also the nuances of the AI tools they’re using. By revisiting and adjusting internal policies with an eye on technology, firms can create a more resilient compliance function capable of leveraging AI while staying aligned with regulatory and ethical standards”.

Who could and should drive this development?

Daniel:

“Legal and compliance specialists are often too far removed from technical developments to fully understand and leverage AI’s possibilities. Developers, on the other hand, often lack the necessary compliance expertise to grasp the true challenges that AI could resolve for the compliance unit. Therefore, bridge-builders who can connect both worlds are needed. These people would need a thorough knowledge of overall compliance processes. It does not suffice to think and strategize only in silos. The most tangible gains can only be achieved when processes and structures are looked at holistically. Classically, you would often find a Product Owner (PO) role somewhere in the compliance organisation who is responsible for the advancement of existing tools. In recent years, this role has often been supported by data specialists who crunch numbers and try to extract deeper insights from statistics. All that is important, but it lacks the holistic view required to integrate AI most effectively – especially since these people are usually themselves part of a compliance team.

Instead, your bridge-builders should be in a position to create an impact on overarching processes related to client data and the customers’ lifecycle management (CLM). Such skills are not easy to come by, and I would not expect an easing of the current shortage in the years to come. Fortunately for finalix, these are exactly the sorts of engagements we specialize in and where we can provide the most benefit to our clients”.

Andrew:

“To answer this question, it is probably worthwhile to look back at how we dealt with developments in technology in general and, more recently, the emergence of Cloud infrastructure, which posed similar questions. While the complexity of AI is certainly higher, and additional factors such as ethics and potential reputational impact come into play, we have always needed individuals who could bridge the gap between technology, business, finance, and legal to help steer developments in the right direction, and this is no different now. The challenge, however, is to find professionals who understand the breadth of challenges across domains and have ideally also been involved in the design and implementation of solutions, such that they can guide the organization.

Experience has shown that senior individuals who have the battle scars to show for it are usually best suited to assist with overall vision development and guidance of the solution development, able to steer subject matter experts and respond to multi-disciplinary challenges early on, helping deliver complex programs within the risk and financial parameters. These skills may come at an extra cost, but they will most certainly pay off in complex projects”.

It will take a while before artificial intelligence is fully integrated into compliance-driven processes. What could be a starting point in the meantime? What are possible quick wins? Are local LLM applications a good way to introduce a company to the world of AI?

Andrew:

“To start with the last question first, LLMs are a good tool for users to experience the power of GenAI, but users need to be trained so they can be successful with them. Exposing users to GenAI without adequate training is ill-advised. The “lowest hanging fruit” applications are (internal) knowledge bases using retrieval-augmented generation (RAG), search using tools such as Perplexity, and additions to desktop tools, although the latter are still in their infancy. Local (internal) generic or even customized (open-source) LLMs are interesting because they can be run within the confines of the business, ensuring that no data leaves the organization, but they require more effort and knowledge from the IT teams than using cloud-based services.

In the compliance domain, GenAI can be deployed as a knowledge base for subject matter experts and employees in general, but natural language processing (NLP) and GenAI can also easily be embedded in compliance workflows today to assist humans. While humans need to be able to take the final decisions, many mundane and time-consuming tasks can be performed by machines, enabling humans to take better and more informed decisions. Example tasks are text analysis, text summarization, report generation, sentiment analysis, network analysis, and more, which can substantially reduce human effort and improve decisions by providing all the foundational information required for a good decision”.
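To make the RAG idea concrete, here is a minimal retrieval sketch: it ranks internal policy snippets by word overlap with a user question and returns the best match, which would then be passed to the LLM as grounding context. The document names and text are invented, and real systems would use embedding models rather than word overlap; this only shows the retrieve-then-generate shape.

```python
# Toy internal knowledge base (illustrative snippets, not real policy text).
documents = {
    "kyc_policy": "Clients must be identified and verified before onboarding.",
    "pep_policy": "Politically exposed persons require enhanced due diligence.",
    "txn_policy": "Transactions above the threshold trigger a manual review.",
}

def retrieve(query: str, docs: dict, top_k: int = 1) -> list:
    """Rank documents by word overlap with the query (a stand-in for embeddings)."""
    q_words = set(query.lower().split())
    scored = [
        (len(q_words & set(text.lower().split())), name)
        for name, text in docs.items()
    ]
    scored.sort(reverse=True)  # highest overlap first
    return [name for score, name in scored[:top_k] if score > 0]

# The retrieved snippet would be prepended to the LLM prompt as context.
hits = retrieve("enhanced due diligence for politically exposed persons", documents)
```

Because answers are grounded in retrieved internal documents, this pattern also makes the LLM's output easier to audit: the compliance officer can check the cited snippet directly.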

Daniel:

“Like Andrew said, an easy first step into the world of AI might be the use of LLMs to summarise policies, contracts, client profiles, and the like. There is little risk involved in allowing such applications, and it lets the organisation gather some first-hand experience in the process while also benefitting from it.

Additionally, AI models should be set up as quickly as possible to analyse patterns and anomalies in large data sources such as transaction data. These models become better over time; thus, the earlier you get started, the quicker you will see better results. Whether you fully trust the results from the get-go or keep some manual checks and balances depends on your risk appetite and individual situation.
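As a crude statistical baseline for the anomaly detection Daniel describes (not the trained models a production system would use), the sketch below flags transaction amounts whose z-score against the account's own history exceeds a threshold. The amounts and the threshold are illustrative assumptions; note that with small samples a single outlier inflates the standard deviation, which is why the threshold here is deliberately low.

```python
import statistics

# Hypothetical transaction amounts for one account (illustrative data only).
amounts = [120.0, 95.5, 130.0, 110.25, 105.0, 98.0, 125.0, 9_800.0]

def flag_anomalies(values, z_threshold=2.0):
    """Flag values whose z-score against the sample exceeds the threshold."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > z_threshold]

flagged = flag_anomalies(amounts)  # the 9,800 transaction stands out
```

Flagged transactions would then enter the review queue discussed earlier, and analyst decisions on them become the labelled data that lets a proper model improve over time.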

Moreover, AI capabilities may also assist report generation with low risk and low effort, again allowing for quick wins and experience gathering”.

Does this talk sound familiar to you? Are you looking to transform your business with the help of AI, but need some more guidance on how to do so? Or would you like to keep the conversation going without any obligation? Andrew, Daniel and our other SMEs here at finalix would love to discuss your challenges and possibilities in some more detail.

Here at finalix, we use AI in our daily business regularly. This article has been edited and improved with the help of MS Copilot. The image at the beginning of the article has been generated via ChatGPT 4.0. Finally, make sure to tune in to the audio version of this talk that has been made available via NotebookLM and be ready for a surprise.

Contact for further questions:

Andrew Rufener
Senior Manager
