The CFPB recently published a blog post titled, “Innovation spotlight: Providing adverse action notices when using AI/ML models.”
The blog post primarily recycles information from the Bureau’s annual fair lending report issued in May 2020. The Bureau indicates that artificial intelligence (AI), including its subset machine learning (ML), is an area of innovation that it is monitoring. It notes that “industry uncertainty about how AI fits into the existing regulatory framework may be slowing its adoption, especially for credit underwriting.” The Bureau observes that “one important issue is how complex AI models address the adverse action notice requirements in the [ECOA] and the [FCRA]” and that “there may be questions about how institutions can comply with these requirements if the reasons driving an AI decision are based on complex interrelationships.”
As it did in the fair lending report, the Bureau comments that “the existing regulatory framework has built-in flexibility that can be compatible with AI algorithms” and repeats the two examples of such flexibility given in the report: (1) the absence of a requirement for a creditor, when giving specific reasons for adverse action, to describe how or why a disclosed factor adversely affected an application, or, for credit scoring systems, how the factor relates to creditworthiness, and (2) the absence of a requirement for a creditor to use any particular list of reasons.
The Bureau again encourages entities to consider using its new innovation policies (e.g., the No-Action Letter and Trial Disclosure Policies) to address potential compliance issues. It also states that it intends “to leverage experiences gained through the innovation policies to inform policy,” and indicates that such experiences “may ultimately be used to help support an amendment to a regulation or its Official Interpretation.”
In connection with encouraging entities to use its innovation policies, the Bureau identifies three areas that it is “particularly interested in exploring”:
- Methodologies for determining the principal reasons for an adverse action (a simple illustrative example is sketched after this list)
- Accuracy of explainability methods, particularly as applied to deep learning and other complex ensemble methods
- How to convey the principal reasons in a manner that accurately reflects the factors used in the model and is understandable to consumers, including how to describe varied and alternative data sources, or their interrelationships, in an adverse action reason
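To make the first of these areas concrete, the sketch below shows one simple way a lender might rank the features that most reduced a declined applicant’s score relative to a reference point and treat the top negative contributors as candidate principal reasons. It is an illustration only, not a methodology the Bureau has endorsed: the feature names, synthetic data, logistic regression model, and choice of reference point are all assumptions made for the example, and the complex ML models the Bureau discusses would call for more sophisticated explainability techniques.

```python
# Illustrative sketch only: rank feature contributions for a declined
# applicant against a reference point and report the most negative ones
# as candidate "principal reasons" for an adverse action notice.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical feature names chosen for illustration.
feature_names = ["credit_utilization", "months_since_delinquency",
                 "income", "inquiries_last_6mo"]

# Synthetic data standing in for historical applications (1,000 rows).
X = rng.normal(size=(1000, 4))
# Synthetic outcomes: utilization and inquiries hurt; the others help.
logits = -1.2 * X[:, 0] + 0.8 * X[:, 1] + 1.0 * X[:, 2] - 0.9 * X[:, 3]
y = (logits + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Reference point: the average approved applicant. Other reference
# points are possible and will change the reasons produced.
reference = X[y == 1].mean(axis=0)

def principal_reasons(applicant, k=3):
    """Rank features by how much they reduced this applicant's score
    relative to the reference point; return the top k negative ones."""
    contributions = model.coef_[0] * (applicant - reference)
    order = np.argsort(contributions)  # most negative (most harmful) first
    return [(feature_names[i], contributions[i])
            for i in order[:k] if contributions[i] < 0]

# A hypothetical declined applicant.
declined = np.array([2.0, -1.5, -1.0, 1.8])
for name, contribution in principal_reasons(declined):
    print(f"{name}: contribution {contribution:+.2f}")
```

Even in this stripped-down setting, the choice of reference point and the treatment of correlated or interrelated features materially affect which reasons are reported, which is precisely the kind of question the Bureau says it is interested in exploring.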
In May 2020, we held a webinar, “Consumer Protection: What’s Happening at the FTC,” in which leaders of Ballard Spahr’s Consumer Financial Services Group were joined by special guest speakers Andrew Smith, Director of the FTC’s Bureau of Consumer Protection, and Malini Mithal, Associate Director of the FTC’s Division of Financial Practices. In the webinar, Mr. Smith discussed his recent blog post about the use of AI and algorithms, including the challenges that the use of AI and algorithms creates for providing ECOA and FCRA adverse action notices. We have released a two-part podcast based on the webinar. Click on the following links to listen to Part I and Part II.