Andrew Smith, Director of the FTC Bureau of Consumer Protection, has written a blog post, “Using Artificial Intelligence and Algorithms,” in which the FTC “offer[s] important lessons about how companies can manage the consumer protection risks of AI and algorithms.”
The blog post makes the following key points:
- Transparency. Companies that use AI tools, such as chatbots, to interact with customers should not mislead consumers about the nature of the interaction or about how sensitive data is collected. A third-party vendor that provides information used to automate decision-making about eligibility for credit, employment, insurance, housing, or similar benefits and transactions may be a consumer reporting agency (CRA). If so, the user of that information has a duty under the Fair Credit Reporting Act (FCRA) to provide an adverse action notice that informs consumers of their right to see the information reported about them and to correct inaccurate information.
- Explaining decisions. To satisfy the requirement of the Equal Credit Opportunity Act (ECOA) to disclose to the consumer the principal reasons for a denial of credit, a company using algorithmic decision-making must know what data is used in its model and how that data is used to arrive at a decision, and it must be able to explain that decision to the consumer. A company using AI to make decisions about consumers in other contexts should consider how it would explain its decision to a customer if asked. The FCRA requires a company that uses algorithms to assign risk scores to consumers to also disclose the key factors that affected the score, rank-ordered by importance. In addition, a company should inform consumers if it might change the terms of a transaction based on automated tools.
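For illustration, here is a minimal Python sketch of how key factors might be rank-ordered from a simple points-based scoring model. The feature names, weights, reason phrases, and applicant profile are hypothetical and are not drawn from the FTC post or any prescribed methodology.

```python
# Hypothetical points-based scorecard: weights are illustrative only.
FEATURE_WEIGHTS = {
    "recent_delinquencies": -35.0,          # points per delinquency
    "credit_utilization": -0.8,             # points per percentage point utilized
    "months_since_oldest_account": 0.3,     # points per month of history
    "recent_inquiries": -10.0,              # points per recent inquiry
}

# Consumer-facing wording for each factor (also hypothetical).
REASON_TEXT = {
    "recent_delinquencies": "Serious delinquency on one or more accounts",
    "credit_utilization": "Proportion of balances to credit limits is too high",
    "months_since_oldest_account": "Length of credit history is too short",
    "recent_inquiries": "Too many recent inquiries for credit",
}

def key_factors(applicant, baseline, top_n=4):
    """Rank factors by how much they pulled the score below a baseline profile."""
    contributions = {
        name: weight * (applicant[name] - baseline[name])
        for name, weight in FEATURE_WEIGHTS.items()
    }
    # The most negative contributions are the principal reasons for the lower score.
    ranked = sorted(contributions, key=contributions.get)
    return [REASON_TEXT[name] for name in ranked[:top_n] if contributions[name] < 0]

applicant = {"recent_delinquencies": 2, "credit_utilization": 85,
             "months_since_oldest_account": 30, "recent_inquiries": 6}
baseline = {"recent_delinquencies": 0, "credit_utilization": 30,
            "months_since_oldest_account": 120, "recent_inquiries": 1}

for reason in key_factors(applicant, baseline):
    print(reason)
```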
- Fairness. Use of AI could result in discrimination against a protected class and thus violate federal equal opportunity laws, such as the ECOA and Title VII of the Civil Rights Act of 1964. A company should rigorously validate and revalidate its AI models to make sure they work as intended and do not result in a disparate impact on a protected class, examining both inputs and outcomes. In evaluating an algorithm or other AI tool for illegal discrimination, the FTC looks at the inputs to the model, such as whether the model includes ethnically based factors or proxies for them (for example, census tract). In addition, regardless of the inputs, the FTC reviews the outcomes, such as whether a facially neutral model has an illegal disparate impact on protected classes. The FTC also conducts an economic analysis of outcomes, such as the price consumers pay for credit, to determine whether a model appears to have a disparate impact on a protected class. Companies using AI and algorithmic tools should consider self-testing AI outcomes to manage the consumer protection risks inherent in using such models.
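As one way to self-test outcomes, the sketch below compares approval rates across two hypothetical groups using the "four-fifths" adverse-impact ratio, a common screening heuristic rather than an FTC-mandated test; all names and figures are invented for illustration.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {group: approvals[group] / totals[group] for group in totals}

def adverse_impact_ratios(rates):
    """Ratio of each group's approval rate to the most-favored group's rate."""
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical decision log: (protected-class group, model approved?).
decisions = ([("group_a", True)] * 80 + [("group_a", False)] * 20
             + [("group_b", True)] * 55 + [("group_b", False)] * 45)

rates = approval_rates(decisions)
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"   # four-fifths screening threshold
    print(f"{group}: approval rate {rates[group]:.0%}, impact ratio {ratio:.2f} ({flag})")
```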
- Consumer access to and ability to correct information. Under the FCRA, consumers can obtain the information on file about them and dispute that information if they believe it to be inaccurate. The FCRA also requires an adverse action notice when that information is used to make a decision adverse to the consumer's interests; the notice must identify the source of the information used to make the decision and inform the consumer of these access and dispute rights. A company that uses data obtained from others, or directly from the consumer, to make important decisions about the consumer should consider providing a copy of that information to the consumer and allowing the consumer to dispute its accuracy.
- Use of robust and empirically sound data and models. The ECOA encourages the use of AI tools that are "empirically derived, demonstrably and statistically sound," which means, among other things, that they are based on data derived from an empirical comparison of sample groups or the population of creditworthy and noncreditworthy applicants who applied for credit within a reasonable preceding period of time; that they are developed and validated using accepted statistical principles and methodology; and that they are periodically revalidated using appropriate statistical principles and methodology and adjusted as necessary to maintain predictive ability.
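The periodic-revalidation idea can be made concrete with a small sketch: re-score a recent sample of outcomes and flag the model if its discriminatory power (here, AUC) has degraded beyond a tolerance chosen by the model owner. The thresholds and data below are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

DEVELOPMENT_AUC = 0.78      # AUC measured when the model was first validated (hypothetical)
ALLOWED_DEGRADATION = 0.05  # revalidation trigger chosen by the model owner (hypothetical)

def needs_revalidation(y_true, y_score):
    """True if predictive ability on recent outcomes has degraded materially."""
    current_auc = roc_auc_score(y_true, y_score)
    print(f"development AUC {DEVELOPMENT_AUC:.2f} -> current AUC {current_auc:.2f}")
    return current_auc < DEVELOPMENT_AUC - ALLOWED_DEGRADATION

# Hypothetical recent outcomes (1 = default) and the scores the model assigned to them.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)
y_score = 0.3 * y_true + 0.7 * rng.random(500)  # scores only loosely track outcomes

if needs_revalidation(y_true, y_score):
    print("Predictive ability has slipped; schedule redevelopment or recalibration.")
else:
    print("Predictive ability within tolerance; revalidate again next period.")
```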
- Accuracy of information. A company that provides data about consumers to others to make decisions about consumer access to credit, employment, insurance, housing, government benefits, check-cashing, or similar transactions may be a CRA that must comply with the FCRA, which includes an obligation to implement reasonable procedures to ensure maximum possible accuracy of consumer reports and to provide consumers with access to their own information, along with the ability to correct any errors. Even if a company is not a CRA, it may have obligations under the FCRA as a "furnisher" to ensure the accuracy of data it provides about its customers to others for use in automated decision-making, such as not furnishing data that it has reasonable cause to believe may not be accurate. In addition, a furnisher must have written policies and procedures to ensure that the data it furnishes is accurate and has integrity, and it must investigate disputes from consumers and CRAs.
- Accountability. A company should hold itself accountable for compliance, ethics, fairness, and nondiscrimination, and should consider using independent standards or independent expertise to evaluate its use of AI. To avoid using an algorithm that results in bias or other harm to consumers, the operator of an algorithm should ask four key questions (a sketch addressing the first question follows the list):
- How representative is its data set?
- Does its data model account for biases?
- How accurate are its predictions based on big data?
- Does its reliance on big data raise ethical or fairness concerns?
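A minimal sketch addressing the first of these questions: compare the share of each demographic group in a training sample to a benchmark population share and flag material gaps. The group names, benchmark shares, and tolerance are illustrative assumptions, not figures from the FTC post.

```python
from collections import Counter

# Benchmark population shares and tolerance (hypothetical figures for illustration).
POPULATION_SHARES = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
TOLERANCE = 0.05    # flag gaps larger than five percentage points

def representativeness_report(training_groups):
    counts = Counter(training_groups)
    total = sum(counts.values())
    for group, expected in POPULATION_SHARES.items():
        observed = counts.get(group, 0) / total
        gap = observed - expected
        flag = "GAP - investigate" if abs(gap) > TOLERANCE else "ok"
        print(f"{group}: sample {observed:.1%} vs. population {expected:.1%} ({flag})")

# Hypothetical training sample skewed toward group_a.
representativeness_report(["group_a"] * 750 + ["group_b"] * 150 + ["group_c"] * 100)
```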
- Unauthorized use. A company that develops AI to sell to other businesses should consider how such AI could be abused and whether that abuse can be prevented through access controls and other technologies.
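As a rough illustration of such controls, the sketch below gates a vendor's AI capability behind per-client API keys that are limited to licensed purposes, quota-limited, and logged for later abuse review; every name and policy in it is hypothetical.

```python
import time

# Hypothetical client registry: API key -> licensed purposes and daily call quota.
CLIENTS = {
    "key-retailer-123": {"purposes": {"fraud_screening"}, "daily_quota": 10_000},
}

CALL_LOG = []   # (timestamp, api_key, purpose) tuples retained for later abuse audits

def authorize(api_key, purpose):
    """Allow a call only for a known key, a licensed purpose, and within quota."""
    client = CLIENTS.get(api_key)
    if client is None or purpose not in client["purposes"]:
        return False                       # unknown key or unlicensed use
    calls_today = sum(1 for ts, key, _ in CALL_LOG
                      if key == api_key and ts > time.time() - 86_400)
    if calls_today >= client["daily_quota"]:
        return False                       # unusually heavy use; throttle and review
    CALL_LOG.append((time.time(), api_key, purpose))
    return True

print(authorize("key-retailer-123", "fraud_screening"))    # True: licensed use
print(authorize("key-retailer-123", "face_recognition"))   # False: not licensed
```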