On February 12, 2020, the House Financial Services Committee's Task Force on Artificial Intelligence will hold a hearing titled "Equitable Algorithms: Examining Ways to Reduce AI Bias in Financial Services."

The Committee Memorandum flags several issues that could be the focus of comments and questions from lawmakers.  Those issues include the potential for data sets used in AI to "contain errors, [be] incomplete, and/or contain data that reflects societal or historical inequities"; challenges in applying the existing legal framework (the ECOA, FHA, and FCRA) to AI technologies; and the risks of "regulatory sandboxes."  The memorandum also mentions "educational redlining," the subject of a recent report by the Student Borrower Protection Center.  The term "educational redlining" refers to the claim that the use of education data in credit underwriting, such as whether a prospective borrower attended "a community college, an Historically Black College or University, or an Hispanic-Serving Institution," results in higher costs for minority borrowers.

The scheduled witnesses are:

  • Dr. Philip Thomas, Assistant Professor and Co-Director of the Autonomous Learning Lab, College of Information and Computer Sciences, University of Massachusetts Amherst
  • Dr. Makada Henry-Nickie, David M. Rubenstein Fellow, Governance Studies, Race, Prosperity, and Inclusion Initiative, Brookings Institution
  • Dr. Michael Kearns, Professor and National Center Chair, Department of Computer and Information Science at the University of Pennsylvania
  • Bärí A. Williams, Attorney and Emerging Tech AI & Privacy Advisor

We have been closely following developments concerning the use of AI by providers of consumer financial services and have discussed such use in three of our recent podcasts, which can be accessed by clicking here, here, and here.