In what could be an important step toward the regulatory updates needed to accommodate the growing use of artificial intelligence (AI) by financial institutions, the CFPB, FDIC, OCC, Federal Reserve Board, and NCUA issued a request for information (RFI) regarding financial institutions’ use of AI, including machine learning (ML). Comments on the RFI must be received by June 1, 2021.
Uses of AI. In the RFI, the agencies express their support for responsible innovation by financial institutions. They observe that, with appropriate governance, risk management, and compliance management, financial institutions’ use of innovative technologies and techniques such as AI has the potential to augment business decision-making and enhance services available to consumers and businesses. The agencies describe the following six uses of AI by financial institutions:
- Identifying potentially suspicious, anomalous, or outlier transactions (e.g. fraud detection and financial crime monitoring)
- Improving customer experience and gaining efficiencies in the allocation of financial institution resources, such as through the use of voice recognition, natural language processing (NLP), and chatbots
- Enhancing or supplementing existing techniques for making credit decisions
- Augmenting risk management and control practices
- Using NLP for handling unstructured data and obtaining insights from that data or improving efficiency of existing processes
- Detecting cyber threats and malicious activity, revealing attackers, identifying compromised systems, and supporting threat mitigation
Potential benefits of AI. The agencies describe the following potential benefits of AI for financial institutions:
- Identifying relationships among variables that are not intuitive or revealed by more traditional techniques
- Better processing of certain forms of information, such as text, that may be impractical or difficult to process using traditional techniques
- Facilitating processing of significantly large and detailed datasets by identifying patterns or correlations that would be impracticable to ascertain otherwise
- Increasing accuracy, reducing cost, and increasing speed of underwriting
- Expanding credit access for consumers and small businesses that may not have obtained credit using traditional underwriting
- Enhancing an institution’s ability to provide customized products and services
Potential risks of AI. The agencies observe that the potential risks associated with using AI, such as operational vulnerabilities and consumer protection risks (fair lending, UDAAP, privacy), are not unique to AI. They describe the following “particular risk management challenges” created by the use of AI:
- Lack of explainability as to how an AI approach uses inputs to produce outputs
- Because an AI algorithm depends on its training data, perpetuation or amplification of bias or inaccuracies inherent in that data, or incorrect predictions resulting from an incomplete or non-representative data set
- If an AI approach has the capacity for dynamic updating (i.e. updating on its own, sometimes without human intervention), difficulty in reviewing and validating the approach
Information requested. The RFI contains a series of questions on the following topics:
- Explainability
- Data quality and data processing
- Overfitting (i.e. when an algorithm “learns” from idiosyncratic patterns in the training data that are not representative of the population as a whole)
- Cybersecurity
- Dynamic updating
- AI use by community institutions
- Use of AI developed or provided by third parties
- Fair lending
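The RFI’s parenthetical definition of overfitting can be illustrated with a short, self-contained sketch (a hypothetical example for illustration only, not drawn from the RFI): a model with enough flexibility to fit every training observation exactly “learns” the noise in the training data and performs worse on new data than a simpler model would.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a simple linear relationship plus random noise.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, size=10)
x_test = np.linspace(0.05, 0.95, 10)
y_test = 2 * x_test + rng.normal(0, 0.2, size=10)

# A degree-9 polynomial can pass through all 10 training points,
# "learning" idiosyncratic noise rather than the underlying trend.
overfit = np.polyfit(x_train, y_train, deg=9)
simple = np.polyfit(x_train, y_train, deg=1)

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on data (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# The overfit model's training error is near zero (it interpolates
# the training points), but its error on held-out data is not.
print("degree-9 train error:", mse(overfit, x_train, y_train))
print("degree-9 test error: ", mse(overfit, x_test, y_test))
print("degree-1 test error: ", mse(simple, x_test, y_test))
```

The gap between the flexible model’s near-zero training error and its held-out error is the pattern the agencies’ definition describes; it is also why model validation on data not used in training is a standard control.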
The RFI includes as an appendix a list of laws, regulations, supervisory guidance, and other agency statements that might be relevant to AI. Such guidance includes the CFPB’s 2020 blog post on providing adverse action notices when using AI/ML models and the 2019 interagency statement issued by the CFPB and federal banking agencies on the use of alternative data in credit underwriting.
In the RFI’s questions regarding fair lending, the agencies specifically ask whether more regulatory clarity is needed as to providing the principal reasons for adverse action in adverse action notices. However, the areas in which more regulatory clarity is needed to facilitate the use of AI in credit underwriting are not limited to adverse action notices. They also include the appropriate manner in which ML models should be tested for fair lending risk and how ML model development processes can search for less discriminatory alternatives. Both tasks are more complex for ML models than for traditional logistic regression models. Financial institutions should use the RFI as an opportunity to bring those areas to the regulators’ attention.