A bill now being considered by the Council of the District of Columbia, titled the “Stop Discrimination by Algorithms Act of 2021” (the “Bill”), would impose limitations and requirements on businesses that use algorithms in making credit and other eligibility decisions, including decisions regarding the individuals to whom a business directs advertising or marketing solicitations.  The Bill was introduced in December 2021, and a public hearing on the Bill was held on September 22, 2022.

The Bill would apply to a “covered entity,” which includes any legal entity that “either makes algorithmic eligibility determinations or algorithmic information availability determinations, or relies on algorithmic eligibility determinations or algorithmic information availability determinations supplied by a service provider” and meets one of the following criteria:

  • Possesses or controls personal information on more than 25,000 District residents;
  • Has more than $15 million in average annualized gross receipts for the 3 years preceding the most recent fiscal year;
  • Is a data broker or other entity that derives at least 50% of its annual revenue from collecting, assembling, selling, distributing, providing access to, or maintaining personal information, where some portion of that personal information concerns a District resident who is not a customer or an employee of the entity; or
  • Is a service provider (meaning an entity that performs algorithmic eligibility determinations or algorithmic information availability determinations on behalf of another entity).

The Bill defines an “algorithmic eligibility determination” as “a determination based in whole or in significant part on an algorithmic process that utilizes machine learning, artificial intelligence, or similar techniques to determine an individual’s eligibility for, or opportunity to access, important life opportunities.”  An “algorithmic information availability determination” is defined as “a determination based in whole or in significant part on an algorithmic process that utilizes machine learning, artificial intelligence, or similar techniques to determine an individual’s receipt of advertising, marketing, solicitations, or offers for an important life opportunity.”  The term “important life opportunities” is defined as “access to, approval for, or offer of credit, education, employment, housing, a place of public accommodation [as defined by D.C. law], or insurance.”

The Bill would prohibit a covered entity from making an algorithmic eligibility determination or an algorithmic information availability determination “on the basis of an individual’s or class of individuals’ actual or perceived race, color, religion, national origin, sex, gender identity or expression, sexual orientation, familial status, source of income, or disability in a manner that segregates, discriminates against, or otherwise makes important life opportunities unavailable to an individual or class of individuals.”  A practice that has the effect of violating this prohibition would be deemed an unlawful discriminatory practice.

The requirements that the Bill would impose on covered entities include:

  • Providing a notice to an individual before making the first algorithmic information availability determination about that individual, with the notice including certain specified information about how the covered entity uses personal information in making algorithmic eligibility determinations or algorithmic information availability determinations;
  • Providing a disclosure containing specified information to an individual with respect to whom the covered entity takes any adverse action that is based in whole or in part on the results of an algorithmic eligibility determination;
  • Conducting an annual audit of the covered entity’s algorithmic eligibility determination and algorithmic information availability determination practices to determine whether those practices result in unlawful discrimination and to analyze disparate impact risks; and
  • Submitting an annual report to the D.C. Attorney General that contains the results of the audit and certain specified information, such as “the data and methodologies that the covered entity uses to establish the algorithms.”

The Bill provides for enforcement by the D.C. Attorney General, with a civil penalty of up to $10,000 for each violation.  It also creates a private right of action and authorizes a court to award not less than $100 and up to $10,000 per violation, or actual damages, whichever is greater.

The Bill has drawn criticism from credit industry trade groups such as the American Financial Services Association.  Among other criticisms, the trade groups assert that the Bill would impose difficult, if not impossible, compliance burdens on lenders that would result in decreased credit access and higher-cost loans.  They also argue that the Bill is unnecessary because it duplicates existing laws and regulations such as the Equal Credit Opportunity Act and the Gramm-Leach-Bliley Act.  (In May 2022, the CFPB issued a Circular regarding adverse action requirements in connection with credit decisions based on algorithms.)

The White House Office of Science and Technology Policy recently issued a “Blueprint for an AI Bill of Rights” in which it identified five principles “that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.”  D.C. Attorney General Karl Racine has praised the Blueprint for “incorporating much of [the Bill].”  However, like the Bill, the Blueprint has drawn criticism from industry trade groups that are concerned it could result in industry-wide mandates.  Politico has reported that the head of AI policy for the U.S. Chamber of Commerce has raised the possibility that numerous federal agencies will issue regulations based on the Blueprint and that states and local governments will enact “copycat” laws.  The Chamber also sent a letter to the Director of the Office of Science and Technology Policy expressing its concerns regarding the Blueprint, including that the Blueprint was developed without sufficient stakeholder input and conflates artificial intelligence with data privacy.