A bill now being considered by the Council of the District of Columbia, titled the “Stop Discrimination by Algorithms Act of 2021” (the “Bill”), would impose limitations and requirements on businesses that use algorithms in making credit and other eligibility decisions, including decisions regarding the individuals to whom a business directs advertising or marketing solicitations.
The White House’s Office of Science and Technology Policy has identified a framework of five principles, known as the “Blueprint for an AI Bill of Rights,” that is intended to guide the design, use, and deployment of automated systems and artificial intelligence (AI). The Blueprint defines automated systems broadly as any system, software, or process that uses computation to determine outcomes, make or aid decisions, inform implementation, collect data, or otherwise interact with individuals or communities.
Together with our special guest whose company provides software enabling lenders to use AI to underwrite loans, we explore a wide range of issues of importance to lenders using AI to underwrite loans. Our discussion topics include: the CFPB’s position on how ECOA adverse action notice requirements apply to credit decisions based on the use of AI; what is meant by explainability and interpretability of AI; fair lending and other compliance risks and steps to reduce risk; preparing for CFPB exams; and the CFPB’s call for tech workers to act as whistleblowers to report potential discrimination arising from the use of AI.
In their June 2021 request for information regarding financial institutions’ use of artificial intelligence (AI), including machine learning, the CFPB and federal banking regulators flagged fair lending concerns as one of the risks arising from the growing use of AI by financial institutions.
Last week, in an apparent effort to increase its scrutiny of machine learning models and those that use alternative data, the CFPB published a blog post titled “CFPB Calls Tech Workers to Action,” in which it made a direct appeal to “engineers, data scientists and others who have detailed knowledge of the algorithms and technologies used by companies and who know of potential discrimination or other misconduct within the CFPB’s authority to report it to us.”
After discussing the current state of the regulators’ knowledge about artificial intelligence and machine learning (ML) in underwriting models, we examine the regulators’ key areas of focus for ML models (explainability/accuracy in adverse action notices, potential hidden bias, testing for disparate impact), discuss how to test for and counteract disparate impact and how to search for less discriminatory alternatives in ML model development, and consider regulators’ possible next steps.
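To make the disparate impact testing mentioned above concrete: one widely used screening metric is the adverse impact ratio, which compares approval rates across demographic groups (the "four-fifths rule" treats a ratio below 0.8 as a red flag). The sketch below is purely illustrative; the group labels, data, and threshold are assumptions for this example, and the podcast does not prescribe any specific test.

```python
# Illustrative sketch of an adverse impact ratio calculation
# (the "four-fifths rule"). All data here is hypothetical.

def adverse_impact_ratio(approvals, groups):
    """Return the ratio of the lowest group approval rate to the
    highest, along with the per-group approval rates."""
    rates = {}
    for g in set(groups):
        decisions = [a for a, grp in zip(approvals, groups) if grp == g]
        rates[g] = sum(decisions) / len(decisions)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical loan decisions: 1 = approved, 0 = denied
approvals = [1, 1, 0, 1, 1, 0, 1, 0, 0, 1]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio, rates = adverse_impact_ratio(approvals, groups)
print(rates)   # per-group approval rates: A = 0.8, B = 0.4
print(ratio)   # 0.5 -- below the 0.8 benchmark, flagging potential disparate impact
```

A ratio below the benchmark would not itself establish a violation, but it is the kind of screen that typically triggers a search for less discriminatory alternative models.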
In what could be an important step towards needed regulatory updating to accommodate the growing use of artificial intelligence (AI) by financial institutions, the CFPB, FDIC, OCC, Federal Reserve Board, and NCUA issued a request for information (RFI) regarding financial institutions’ use of AI, including machine learning (ML). Comments on the RFI must be received by June 1, 2021.
The CFPB recently published a blog post titled “Innovation spotlight: Providing adverse action notices when using AI/ML models.”
The blog post primarily recycles information from the Bureau’s annual fair lending report issued in May 2020. The Bureau indicates that artificial intelligence (AI), including its subset machine learning (ML), is an area of innovation that it is monitoring.
Andrew Smith, Director of the FTC Bureau of Consumer Protection, has written a blog post, “Using Artificial Intelligence and Algorithms,” in which the FTC “offer[s] important lessons about how companies can manage the consumer protection risks of AI and algorithms.”
The blog post makes the following key points:
- Transparency. Companies that use AI tools, such as chatbots, to interact with customers should not mislead consumers about the nature of the interaction or about how sensitive data is being collected.
On February 12, 2020, the House Financial Services Task Force on Artificial Intelligence will hold a hearing titled, “Equitable Algorithms: Examining Ways to Reduce AI Bias in Financial Services.”
The Committee Memorandum flags a number of issues that could be the focus of comments and questions from lawmakers. Those issues include the potential for data sets used in AI to “contain errors, [be] incomplete, and/or contain data that reflects societal or historical inequities,” challenges in applying the existing legal framework (ECOA, FHA, FCRA) to AI technologies, and risks of “regulatory sandboxes.”
In this podcast, we are joined by Scott Ferris, CEO of Attunely, a provider of machine learning (ML) and artificial intelligence (AI) technology to the debt collection industry. We look at how changes in consumer behavior have impacted collections, technology’s role in collections, the impact of state law and the GDPR on ML/AI and compliance strategies, how ML/AI can improve profitability, and perceived impediments to adopting ML/AI.