Views from AFME
Artificial Intelligence and Machine Learning in capital markets is picking up the pace
21 Nov 2019
Author: James Kemp, Managing Director

The use of artificial intelligence (AI) and machine learning (ML) is increasingly widespread in capital markets. Banks are investing in AI/ML to rationalise cost-intensive manual processes such as KYC (know-your-client), fraud detection and regulatory compliance, as a paper by the ACPR (French Prudential Supervision and Resolution Authority) outlined last year. AI/ML is also powering data analytics in areas from risk management to client engagement and is being used by supervisors to perform market surveillance, according to a Bank for International Settlements (BIS) report. 

Regulators look to get a grip on AI/ML risk

While AI/ML could help banks improve services or streamline costs at a time of unprecedented margin pressure, its impact on market integrity and consumer protection is increasingly being monitored. Authorities in several EU Member States (including France and Germany) have issued consultations or analyses of key risks.

The ethics of AI/ML in capital markets (e.g. fairness) are also being examined. The Dutch Central Bank has been first out of the blocks with suggested guiding principles. But the biggest area of interest is the technology’s impact on market stability. This includes the possible emergence of new systemically important financial services providers that can quickly adopt such technologies, unencumbered by legacy systems, yet may fall outside the scope of existing regulations.

Finding the right regulatory balance

While the risks posed by AI/ML must be understood, regulators should not be too prescriptive, as this risks slowing innovation. Fortunately, regulators have generally managed to strike the right balance by adopting a policy of technology neutrality, at least when overseeing innovations like distributed ledger technology (DLT). We hope to see an equally measured approach to AI/ML.

At the most basic level, regulators need to be confident that firms are applying their existing regulatory obligations, such as treating clients fairly, to their use of AI/ML. This should start with firms mapping out the stakeholders in an AI project (e.g. programmers, management, control functions and clients), along with an analysis of the levels of transparency that each will need.

The focus should then be on delivering that transparency. Firstly, the assumptions made in the development of the model should be clearly defined and justified – from the methodologies used to the way that the results of the model will be measured. Secondly, testing the model, both before and during deployment, is critical. Such testing might include analysing the model’s behaviour against real and hypothetical market conditions, or the interaction between the model and other systems. All of the testing processes – along with the results – should be documented and shared with the regulators if required.

The problems with explainability

Explainability – namely the extent to which the complex internal mechanics of an AI/ML model can be expressed – is a key issue. Just as the proprietary code shaping computer-based trading strategies is rarely – if ever – shared with institutional investors, it should not be a requirement for the code of an AI/ML model to be made available, as it will not be comprehensible out of context.

If specific levels of technical explainability were mandated, use of AI/ML would be severely restricted to only the simplest models, which would come at a cost to their accuracy. The documentation of assumptions and testing is therefore a better way to deliver transparency into an AI/ML model, while retaining full accountability. 

Creating a proportionate regulatory framework

Regulators need to pursue a risk-based approach when determining the transparency requirements they will demand from financial institutions, based on factors such as criticality and scale. For instance, an AI/ML-enabled trading algorithm might need more scrutiny than, say, an AI/ML tool used to deliver back-office efficiencies.

As a minimum, organisations should document how they use AI/ML and provide evidence to internal and external stakeholders, when required, that it is being used appropriately and safely. There may also be circumstances where transparency needs to be actively curtailed: banks using AI to detect and combat fraud could find their systems being compromised if they are forced to disclose too much. If the benefits of AI/ML are to be maximised, the regulation guiding it needs to be pragmatic and carefully thought through.

This blog was first published in L’AgeFi Hebdo on 21 November 2019.