The CFPB, FTC, Justice Department, and Equal Employment Opportunity Commission have issued a joint statement about enforcement efforts “to protect the public from bias in automated systems and artificial intelligence.” The CFPB also issued a separate press release and prepared remarks by Director Chopra about the statement. In the press release, the CFPB indicated that it “will release a white paper this spring discussing the current chatbot market and the technology’s limitations, its integration by financial institutions, and the ways the CFPB is already seeing chatbots interfere with consumers’ ability to interact with financial institutions.”
In the joint statement, the term “automated systems” is used to mean “software and algorithmic processes, including [artificial intelligence], that are used to automate workflows and help people complete tasks or make decisions.” The agencies observe in the statement that “[p]rivate and public entities use these systems to make critical decisions that impact individuals’ rights and opportunities, including fair access to a job, housing, credit opportunities, and other goods and services.” Giving minimal acknowledgment to the benefits of automated systems, the statement focuses on their “potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.” The agencies “reiterate their resolve to monitor the development and use of automated systems and promote responsible innovation,” and also “pledge to vigorously use [their] collective efforts to protect individual rights regardless of whether legal violations occur through traditional means or advanced technologies.”
The statement lists the following sources of potential discrimination in automated systems:
- Unrepresentative or imbalanced datasets, datasets that incorporate historical bias, or datasets that contain other types of errors can skew automated system outcomes. Automated systems also can correlate data with protected classes, which can lead to discriminatory outcomes.
- A lack of transparency regarding the internal workings of automated systems can render them “black boxes,” making it difficult for developers, businesses, and individuals to know whether an automated system is fair.
- Developers may fail to understand or account for the contexts in which their automated systems will be used, or may design systems based on flawed assumptions about their users, the relevant context, or the underlying practices or procedures the systems may replace.
The joint statement includes a description of each agency’s authority to combat discrimination and identifies pronouncements from each agency related to automated systems. In its press release, the CFPB describes “a series of CFPB actions to ensure advanced technologies do not violate the rights of consumers.” Notably, the CFPB includes among such actions its policy statement on abusive acts or practices and its proposal to require nonbanks to register when, as a result of settlements or otherwise, they become subject to orders from local, state, or federal agencies and courts involving violations of consumer protection laws. With regard to its policy statement on abusive acts or practices, the CFPB states that the prohibition on abusive conduct “would cover abusive uses of AI technologies to, for instance, obscure important features of a product or service or leverage gaps in consumer understanding.” With regard to the proposed registry, the CFPB states that it “would allow the CFPB to track companies whose repeat offenses involved the use of automated systems.”
Consistent with past remarks, Director Chopra used his prepared remarks to highlight the “threat” posed by AI in the form of “unlawful discriminatory practices perpetrated by those who deploy these technologies.” Although the topic is not addressed specifically in the joint statement, Director Chopra raised concerns about generative AI. He stated that generative AI technologies “which can produce voices, images, and videos that are designed to simulate real-life human interactions are raising the question of whether we are ready to deal with the wide range of potential harms – from consumer fraud to privacy to fair competition.” (We recently released an episode of our Consumer Finance Podcast in which we focused on generative AI.)