Why regulators must adopt AI now

18 March 2024
Knowledge Base

by Daoud Abdel Hadi

It’s a new world, defined by rapid technological advancement and marked by ever more organisations embracing AI. To navigate this ever-changing landscape, regulators must recognise that traditional governance principles no longer suffice; relying on these outdated principles could do more harm than good. Regulators must begin to harness the power of AI to modernise their approach and promote responsible innovation, fostering an ecosystem that integrates AI effectively while protecting individuals.

The impact of AI on regulatory affairs 

We are already seeing regulatory bodies begin to embrace AI in their functions. The FCA, for instance, provides a digital sandbox where AI propositions and proofs of concept can be tested.

However, AI regulation remains its primary focus. FCA Chief Data, Information and Intelligence Officer Jessica Rusu stated at an event that innovation will lead to better AI regulation, adding: “One of the most significant questions in financial services is whether AI in UK financial markets can be managed through clarifications of the existing regulatory framework, or whether a new approach is needed” (*1).

The benefits of AI for regulators

Without a doubt, AI offers financial regulators powerful tools to improve how they work and what they can achieve. It promises many benefits, from predicting market changes to enhancing oversight processes. With its potential for digital transformation, there’s little reason for regulators to overlook AI in modernising their operations.

Providing regulatory foresight in addition to oversight

With AI, regulators can proactively assess market conditions and anticipate major market events such as the collapses of Lehman Brothers, Silicon Valley Bank, FTX and Evergrande. They also have the authority to request relevant data from companies and use AI models to evaluate their financial health relative to their peers.

As an example, an AI model can be used for stress testing, simulating several stress scenarios such as an economic downturn to gauge the resilience of financial institutions. This proactive approach enables regulatory agencies to give the markets advance warning of the likelihood of such events.
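As a minimal sketch of the idea (the bank figures, the loss-rate range and the 8% minimum capital ratio are illustrative assumptions, not real regulatory parameters), a Monte Carlo stress test can be simulated in a few lines of Python:

```python
import random

def stress_test(capital, risk_weighted_assets, loss_rates, n_runs=10_000, seed=42):
    """Monte Carlo stress test: apply random loss draws to a bank's
    capital and report how often it breaches a minimum capital ratio."""
    rng = random.Random(seed)
    breaches = 0
    for _ in range(n_runs):
        # Draw a loss rate for this scenario (e.g. an economic downturn).
        loss = rng.uniform(*loss_rates) * risk_weighted_assets
        ratio = (capital - loss) / risk_weighted_assets
        if ratio < 0.08:  # illustrative 8% minimum capital requirement
            breaches += 1
    return breaches / n_runs

# A well-capitalised bank breaches far less often than a thin one.
strong = stress_test(capital=15, risk_weighted_assets=100, loss_rates=(0.0, 0.05))
weak = stress_test(capital=9, risk_weighted_assets=100, loss_rates=(0.0, 0.05))
```

Comparing the two results shows how much more often a thinly capitalised institution breaches the minimum under the same simulated shocks, which is exactly the kind of advance signal regulators can act on.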

Delivering holistic intelligence

Regulators can access a wealth of data from numerous businesses and their customers, enriched by public sources and potentially by data shared by other regulatory partners. Though no individual business has access to all of this data, regulators can use it to build comprehensive oversight. By employing technologies such as graph models and link analysis, they can combine data from different financial and security agencies to gain a well-rounded understanding of nefarious activities such as terrorist financing.
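As a simplified illustration of link analysis (the account names and agency link lists below are hypothetical), finding connected components over the union of links reported by different agencies reveals clusters that no single agency could see on its own:

```python
from collections import defaultdict

def connected_components(edges):
    """Union transaction links from multiple agencies and group
    accounts into connected clusters (a basic form of link analysis)."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, clusters = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, cluster = [node], set()
        while stack:
            n = stack.pop()
            if n in cluster:
                continue
            cluster.add(n)
            stack.extend(graph[n] - cluster)
        seen |= cluster
        clusters.append(cluster)
    return clusters

# Hypothetical links reported by two different agencies.
fiu_links = [("acct_A", "acct_B"), ("acct_B", "shell_co_1")]
security_links = [("shell_co_1", "acct_C"), ("acct_D", "acct_E")]
clusters = connected_components(fiu_links + security_links)
```

Neither agency's data alone links acct_A to acct_C, but the merged graph places them in one cluster via the shared shell company, which is the core value of pooling data across regulatory partners.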

Improving regulatory operating processes

AI can enhance the efficiency of regulatory processes, leading to better outcomes. Using natural language processing (NLP) models, regulators and regulated businesses can quickly get clear summaries and concise answers to specific inquiries, without the need to painstakingly sift through large volumes of documents. Financial institutions can also use generative AI to understand how regulation applies to their specific circumstances.
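Production systems would use large language models or dedicated NLP libraries for this, but a naive frequency-based extractive summariser (the sample text is invented) sketches the idea of surfacing the most relevant sentences from a long document:

```python
import re
from collections import Counter

def summarise(text, n_sentences=2):
    """Naive extractive summary: score sentences by word frequency
    and return the top-scoring ones in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z]+", text.lower())
    freq = Counter(w for w in words if len(w) > 3)  # skip short stopwords
    scored = [(sum(freq[w] for w in re.findall(r"[a-z]+", s.lower())), i, s)
              for i, s in enumerate(sentences)]
    # Keep the highest-scoring sentences, then restore document order.
    top = sorted(sorted(scored, reverse=True)[:n_sentences], key=lambda t: t[1])
    return " ".join(s for _, _, s in top)

text = ("Regulation applies to payment firms. "
        "Payment firms must report fraud. "
        "The weather was fine.")
summary = summarise(text)
```

The off-topic sentence is dropped because its words rarely recur; real NLP models achieve the same effect with far richer notions of relevance.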

In essence, AI can handle complex, error-prone and costly tasks, making processes smoother for both regulators and the businesses they oversee.

Emerging threats

Financial crime regulators are facing new and unfamiliar threats as cryptocurrencies, NFTs and digital wallets gain popularity. Criminals are using the anonymity, pseudonymity, and global reach of these assets to their advantage.

With new forms of payment like cryptocurrencies, assessing risk is more challenging because traditional attributes such as geographical location, currency, and sender and beneficiary information are no longer available. However, AI can effectively analyse blockchain transactions to identify suspicious fund flows, adapting to these evolving risks. Similarly, NLP can be used to analyse cryptocurrency social media, websites and forums to identify bad actors.
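A minimal stand-in for such analysis (the transfer amounts are hypothetical, and real systems would use far richer on-chain features than amount alone) is a simple statistical outlier rule over transaction values:

```python
from statistics import mean, stdev

def flag_suspicious(amounts, threshold=3.0):
    """Flag transactions whose amount deviates strongly from the mean,
    a minimal stand-in for anomaly models run over blockchain flows."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if sigma > 0 and abs(a - mu) / sigma > threshold]

# Hypothetical wallet transfers: one large outlier among routine amounts.
transfers = [120, 95, 130, 110, 105, 98, 50_000, 115, 102, 99]
suspicious = flag_suspicious(transfers, threshold=2.0)
```

Only the 50,000 transfer at index 6 is flagged; the routine amounts stay well within the threshold.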

The implementation of AI

Integrating AI in regulatory operations is not going to be a straightforward task. Regulators must carefully consider numerous factors to avoid costly mistakes and manage risks effectively.

Divide between regulation and investigation

In many cases, the entities responsible for making the laws operate independently from the agencies tasked with investigating the resulting cases. When insights from investigations don’t directly inform the legislative process, it can leave financial institutions without clear guidance on case reporting. This gap can constrain how financial institutions use AI.

For example, by incorporating insights and feedback from investigations, financial institutions can apply supervised learning to develop predictive models for detecting money laundering and child trafficking. Without such feedback, they are limited to unsupervised learning, which may be less effective in complex scenarios where nuanced understanding and context are important. AI models benefit greatly from this feedback loop, as it strengthens or challenges their recommendations.
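A toy contrast of the two regimes (the amounts, labels and thresholds are all hypothetical): with investigator verdicts a model can learn a decision boundary directly, while without them it can only flag statistical outliers:

```python
def supervised_threshold(amounts, labels):
    """With investigator feedback (labels), learn the cut-off that best
    separates confirmed-laundering amounts from legitimate ones."""
    candidates = sorted(set(amounts))
    def errors(t):
        return sum((a >= t) != y for a, y in zip(amounts, labels))
    return min(candidates, key=errors)

def unsupervised_flags(amounts, k=2.0):
    """Without feedback, fall back to flagging statistical outliers."""
    mu = sum(amounts) / len(amounts)
    sd = (sum((a - mu) ** 2 for a in amounts) / len(amounts)) ** 0.5
    return [a for a in amounts if abs(a - mu) > k * sd]

# Hypothetical amounts with investigator verdicts (1 = confirmed case).
amounts = [100, 150, 9_000, 9_500, 200, 9_800]
labels  = [0,   0,   1,     1,     0,   1]
cutoff = supervised_threshold(amounts, labels)
```

On this bimodal sample the outlier rule flags nothing at its default setting, while the learned cut-off of 9,000 separates the confirmed cases exactly, a small illustration of why the feedback loop matters.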

Understanding this interplay informs regulators on how they might set rules, share information, and create frameworks that ensure financial institutions are equipped with the right data and guidance to use AI effectively. Regulators need to remember that their approach to legislation and coordination with investigative agencies can directly influence the effectiveness of AI in financial compliance and crime prevention.

Black box models

Many financial institutions are using black box models, predominantly neural networks, for their ability to learn from complex data, which makes them useful for financial crime analysis. However, the lack of clarity around how these models make decisions presents challenges in financial crime investigations, where transparency, fairness and the absence of bias are essential.

This creates a paradox: regulators discourage the adoption of such models, pushing financial institutions to opt for less effective ones. As a result, more low-quality cases arise and, in the worst case, real criminal activity may go undetected.

Complexity of data from multiple sources

To benefit fully from AI, regulators often need data sharing. This may include obtaining publicly available data and information specific to the institutions they oversee. Addressing complexities such as data quality, and establishing a common data dictionary, is crucial if regulators are to unlock the promise of AI.

As the world adapts to continual technological evolution, regulators must integrate AI into their systems. Progress has been made in adopting AI, but substantial gaps remain to be addressed.

Regulatory bodies cannot afford to lag behind the industries they oversee. They must keep innovating and stay ahead of criminals whose methods evolve and adapt day by day.

The authors: Daoud Abdel Hadi, Lead Data Scientist, Eastnets, and Seun Sotuminu, Data Scientist, PDM, Eastnets.

(*1) https://www.fca.org.uk/news/speeches/building-better-foundations-ai


