September 27, 2023

Generative AI and the financial services industry – risks according to the IMF

Summary

The International Monetary Fund's recent Report on the adoption of GenAI in the financial services industry provides a timely insight into the key risks that industry participants may face when interacting with GenAI.  The Report concludes that GenAI technology has "great promise" in the financial sector, but that risks intrinsic to GenAI could damage the reputation and soundness of the sector.  According to the IMF, prudential oversight authorities should increase their monitoring of GenAI development, with interim action needed to help guide use of the technology in the financial services industry.

Background

On 22 August 2023, the IMF published its Generative Artificial Intelligence in Finance: Risk Considerations report (the "Report") on generative artificial intelligence ("GenAI"), following its 2021 paper examining artificial intelligence / machine learning ("AI/ML") in the financial sector.  As adoption of GenAI has been uniquely fast and wide,1 the technology brings risks which are uncommon to AI/ML and should be "considered carefully by the industry and prudential oversight authorities".2  The IMF indicates that competitive pressures are a force behind the widespread use of GenAI, as its ability to process masses of data means financial service providers can improve efficiency and enhance customer service without incurring disproportionate additional costs.

The Report explores the potential risks of GenAI in the financial services industry across four key categories: (1) inherent technology risk; (2) performance risk; (3) cybersecurity threats; and (4) financial stability risk.

Inherent technology risk

Data privacy

Although GenAI technology shares privacy concerns with AI/ML, such as leakage from training datasets and the de-anonymisation of data by deducing identities from behavioural patterns, the Report also considers risks which are novel to GenAI.  One issue is that publicly available GenAI systems often automatically "opt in" user input data, using it to train the model and improve responses.3  Although "opting in" can enhance the functionality of GenAI, it also increases the likelihood that sensitive data will form part of the LLM and leak to other unconnected users.4  Even with the development of enterprise-level GenAI, which can restrict the use of business data as a learning tool,5 one residual privacy concern is that scraping (and thus integrating) personal data from publicly available platforms (e.g. social media) may take place without the explicit consent that would otherwise have been required.

Embedded bias 

The general challenges of systematic and unfair discrimination are important risks to mitigate in a highly regulated financial industry, and this is particularly the case with GenAI.  Training data for LLMs which is incomplete or contains underlying societal prejudices perpetuates discrimination in the model's output.  Although GenAI can offer a low-cost and automated means of profiling customers (e.g. for AML or sanctions purposes), the composition of these profiles may be influenced by the embedded bias of the model.  The IMF proposes that "appropriate human judgement will need to complement GenAI-based transaction monitoring models" in order to limit the risk of unethical practices and maintain public trust in financial services.
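
By way of illustration only (this example is not drawn from the Report), a compliance team might apply a simple statistical check to a model's outputs before relying on them.  The Python sketch below computes flag rates across hypothetical customer groups and a "four-fifths rule" style disparate impact ratio; the data, group labels and threshold are all assumptions made for the example.

# Illustrative only: a simple disparate impact check on the outputs of a
# hypothetical GenAI-assisted screening model, comparing "flag" rates across
# customer groups.  All data, group labels and thresholds are invented.
import pandas as pd

# Hypothetical model outputs: 1 = flagged for enhanced review, 0 = not flagged.
outputs = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "flagged": [1, 0, 0, 0, 1, 1, 1, 0],
})

# Flag rate per group, and the ratio of the lowest to the highest rate
# (a "four-fifths rule" style heuristic).
rates = outputs.groupby("group")["flagged"].mean()
disparate_impact = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {disparate_impact:.2f}")  # well below 0.8 may signal bias

A low ratio would not prove unfair discrimination, but it would be a prompt for the kind of human judgement the IMF says must complement GenAI-based monitoring.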

Performance risk 

Robustness

Both AI/ML and GenAI face issues of robustness (meaning those relating to the accuracy of AI models' output).  The risk is acute with GenAI, as some models have shown an ability to generate novel content which is plausible but incorrect and will, in some cases, subsequently defend the veracity of this content, a phenomenon known as "hallucination".  In practice, a GenAI chatbot operating a financial services helpline might respond to customers with false information, eroding public trust in GenAI technology and damaging the reputation of the financial services industry.  The IMF states that the reasons for hallucinations are not fully understood but that the risk is likely to remain for the foreseeable future, as the ongoing efforts to address hallucinations are narrowly focussed on specific tasks.

Synthetic data

One potential response to the privacy and confidentiality risks of adopting AI/ML in the financial industry is the use of synthetic data: data generated by deep learning model simulations to mimic real data, but which cannot itself be attributed to a person or group.  Given GenAI's ability to generate new content from broad datasets, it can be an effective means of coding synthetic data-generator algorithms which, according to the IMF, "better captures the complexity of real-world events".  Despite these benefits, ensuring high-quality synthetic datasets which do not replicate real-world biases remains a challenge to be overcome.
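
For illustration only (again, not taken from the Report), the Python sketch below shows the basic idea behind synthetic data: a simple statistical model is fitted to a hypothetical table of customer features and then sampled to produce records that share the real data's statistical structure without corresponding to any actual customer.  Production-grade generators are far more sophisticated (e.g. GAN- or LLM-based), and the bias-replication concern noted above applies to them equally.

# Illustrative only: a minimal synthetic data generator that fits a simple
# statistical model to hypothetical "real" customer features and samples new
# records from it.  Real systems use far richer generative models.
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical real dataset: 1,000 customers x 3 numeric features
# (e.g. average transaction amount, account age in years, monthly volume).
real = rng.normal(loc=[250.0, 6.5, 40.0], scale=[75.0, 2.0, 12.0], size=(1000, 3))

# Estimate the joint distribution of the real data...
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ...and draw synthetic records that mimic its statistical structure without
# corresponding to any actual customer.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

# Basic fidelity check: the synthetic data should reproduce the real data's
# means and correlations reasonably closely.
print("Real means:     ", np.round(mean, 2))
print("Synthetic means:", np.round(synthetic.mean(axis=0), 2))
print("Max correlation gap:",
      np.round(np.abs(np.corrcoef(real, rowvar=False)
                      - np.corrcoef(synthetic, rowvar=False)).max(), 3))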

Explainability

It is necessary for financial institutions to understand and explain the reasoning behind their actions, both to uphold the trust of consumers and to adhere to the regulatory requirements of prudential supervisors.  The Report states that the adoption of GenAI poses a risk to explainability in the financial markets, as the breadth and size of the datasets used by GenAI currently make it difficult to present the model's reasoning.  Although research is being undertaken to improve GenAI's explainability, the technology's complex architecture and its output of text, rather than decisions, mean that further scrutiny of this risk is needed to cement its successful adoption within the financial markets.

Cybersecurity threats

The IMF identifies two categories of cybersecurity threats: (1) those which exploit GenAI to create cyber risks; and (2) those which attack the operation of GenAI itself.  In the first category, GenAI can be used to generate increasingly sophisticated phishing communications, leading to identity theft or fraud, both prevalent issues in the financial services sector.  In the second, data manipulation attacks (in which elements of training data are modified) undermine the training accuracy of GenAI,6 a risk that extends to enterprise-level GenAI models, as their enterprise-specific datasets could be targeted by purpose-built cyber-hacking tools.  According to the Report, the "full scale of its vulnerability to cyberattack is yet to be comprehensively understood", but early signs warrant "careful contemplation", particularly in the heavily regulated finance industry.

Financial stability risk 

The IMF's 2021 paper considered the potential systemic risks of AI/ML, one example being that using AI/ML to automate credit underwriting decisions and risk assessments (which are inherently procyclical) could accelerate procyclical financial conditions.  Given the high adoption rate of GenAI, the IMF is concerned that such risks could be exacerbated by excessive reliance on the technology.  Among other things, solvency and liquidity risks could increase if AI-driven trading incentivises the market to take on higher credit risk, or if the herd behaviour of GenAI investment advisors impacts market liquidity by encouraging financial institutions to follow the same decision-making process.

Takeaways

  • The IMF believes GenAI technologies could drive efficiency, improve customer experience, and increase regulatory compliance in the financial industry, but adoption of GenAI should be approached with caution.
  • Financial services companies should assess the various risks of adopting GenAI, such as those concerning privacy over customer data, bias in training datasets, and limited explainability. 
  • GenAI regulation will evolve over time and action is needed to guide the use of GenAI by financial institutions.  In the interim, GenAI needs human supervision to deal with risks, and prudential oversight authorities should improve institutional capacity to monitor the adoption of GenAI in the sector.


1 The risks associated with using GenAI in finance are amplified by its rapid adoption across the world—one GenAI engine (ChatGPT) gained more than 100 million active users within the first two months of its launch.

2 GenAI is a type of AI/ML technology.

3 "Opting-out" is possible but needs to be explicitly exercised.

4 This may have been a driving factor behind a number of international banks reportedly banning employees from using ChatGPT (Retail Banker International 2023).

5 E.g. see Introducing ChatGPT Enterprise (openai.com).

6 This risk is currently limited, however, as current GenAI models are trained on pre-2021 internet data.
