A new CFPB blog post titled “An update on credit access and the Bureau’s first No-Action Letter” provides a boost to lenders using alternative data and machine learning in their underwriting models.

The Bureau issued its first (and so far only) no-action letter in September 2017 to Upstart Network Inc., stating that the CFPB had no present intention to take enforcement or supervisory action against the lender under the Equal Credit Opportunity Act (ECOA) with respect to the lender's underwriting model, particularly its use of certain alternative data fields. The letter was conditioned on Upstart's agreement to a model risk management and compliance plan that required it to analyze and address risks to consumers and to assess the real-world impact of alternative data and machine learning. In its blog post, the CFPB shares the results of simulations and analyses Upstart conducted pursuant to that plan.

The results showed that Upstart’s model using alternative data and machine learning approved 27% more applications than a traditional lending model and yielded 16% lower average APRs. The expansion of credit access reflected in the results occurred “across all tested race, ethnicity, and sex segments” and “significantly expand[ed]” access in “many consumer segments,” such as “near prime” consumers, applicants under 25 years of age, and consumers with incomes under $50,000.  The CFPB stated that “with regard to fair lending testing, which compared the tested model with the traditional model, the approval rate and APR analysis results provided for minority, female, and 62 and older applicants show no disparities that require further fair lending analysis under the compliance plan.”

In the blog post, the CFPB encourages lenders “to develop innovative means of increasing fair, equitable, and nondiscriminatory access to credit, particularly for credit invisibles and those whose credit history or lack thereof limits their credit access or increases their cost of credit, while maintaining a compliance management program that appropriately identifies and addresses risks of legal violations.”

The blog post concludes with the CFPB's statement that it is "currently reviewing comments to its proposed No-Action Letter, Trial Disclosure, and Product Sandbox policies."  In September 2018, the Bureau proposed significant revisions to its "Policy to Encourage Trial Disclosure Programs," which sets forth the Bureau's standards and procedures for exempting individual companies, on a case-by-case basis, from applicable federal disclosure requirements so that those companies can test trial disclosures.  In December 2018, the CFPB issued proposed revisions to its 2016 final policy on issuing no-action letters (the policy under which Upstart's no-action letter was issued), together with a proposal to create a new "product sandbox."

The CFPB, in its July 2019 fair lending report, discussed supervisory reviews of alternative credit scoring models. It stated that in 2018, the Office of Fair Lending recommended supervisory reviews of third-party scoring models that would “focus on obtaining information about the models and compliance systems of third-party scoring companies for the purpose of assessing fair lending risks to consumers and whether the models are likely to increase access to credit.  Observations from these reviews are expected to further the Bureau’s interest in identifying potential benefits and risks associated with the use of alternative data and modeling techniques.”  The Bureau commented that while a significant focus of its interest is on how alternative data and modeling can expand credit access for credit invisibles, it is also interested in other potential direct or indirect benefits to consumers, “including enhanced creditworthiness predictions, more timely information about a consumer, lower costs, and operational improvements.”

The use of algorithmic models to assess credit risk or achieve other objectives is specifically addressed in HUD's soon-to-be-released proposed revisions to its 2013 rule, under which HUD or a private plaintiff can establish liability under the Fair Housing Act for discriminatory practices based on disparate impact even absent discriminatory intent.  The proposed revisions include defenses a defendant can use to establish that the plaintiff's allegations do not support a prima facie case where the alleged cause of a discriminatory effect is a model used by the defendant, such as a risk assessment algorithm.

Lawmakers are also focusing on the use of algorithms by consumer financial services providers.  Earlier this year, the House Financial Services Committee established two task forces, one on financial technology and the other on artificial intelligence.  Both task forces held their first meetings in June.  Also in June, Democratic Senators Elizabeth Warren and Doug Jones sent a letter to the CFPB, Federal Reserve, OCC, and FDIC expressing concern that fintech and traditional lenders using algorithms in their underwriting processes may be engaging in unlawful discrimination.  The letter sought answers to a series of questions, including what the agencies are doing “to identify and combat lending discrimination by lenders who use algorithms for underwriting” and what analyses the agencies have conducted or plan to conduct regarding “the impact of FinTech companies or use of FinTech algorithms on minority borrowers, including differences in credit availability and pricing.”

Our weekly podcasts include an episode released in June titled, "Using artificial intelligence for consumer finance: a look at the opportunities and challenges."  In the episode, we discussed the opportunities and challenges created by the use of AI models in consumer financial services, including the benefits of explainable AI and its implications for the consumer financial services industry, especially for applications where understanding the model's reasons for returning a score or decision is necessary.  Click here to listen to the podcast.