In a new “Issues spotlight” titled “Chatbots in consumer finance,” the CFPB discusses how chatbot technologies are being used by financial institutions and the limitations and risks of such technologies.  The report, which is part of a concerted focus by the CFPB on the use of artificial intelligence and machine learning in consumer financial services, comes on the heels of the recent joint agency statement regarding enforcement efforts “to protect the public from bias in automated systems and artificial intelligence.” 

As described in the report, chatbots simulate human-like responses using computer programming.  The report discusses the growing use of chatbots by financial institutions and the evolution of chatbot technology.  The CFPB reports that while much of the financial services industry at least uses simple rule-based chatbots, the banking industry has begun to adopt advanced technology such as generative chatbots and others marketed as “artificial intelligence.”  

The focus of the report is “challenges experienced by customers [when interacting with chatbots], as detailed in complaints submitted to the CFPB,” technical limitations and associated security risks, and risks for financial institutions.

With regard to customer challenges, the CFPB highlights the following:

  • Chatbots and scripts may be unable to recognize that a customer is raising a dispute, and even when a dispute is recognized, there may be technical limitations on their ability to research and resolve it.  Rule-based chatbots, because they are designed to accept or process account information from users and cannot respond to requests outside the scope of their data inputs or limited syntax, “tend to be one-way streets” and result in simulated conversations that are “tedious and opaque compared to browsing information with clear and logical navigation.”
  • Chatbots can provide inaccurate, unreliable, or insufficient information because of unreliable technology or inaccurate data.
  • Automated responses by chatbots may fail to resolve a customer’s issue and instead lead the customer into a “doom loop,” meaning “a continuous loop of repetitive, unhelpful jargon or legalese without an offramp to a human customer service representative.”  Even if an offramp exists, customers may face unreasonable waits in reaching a human customer service representative.  The CFPB suggests that financial institutions may intentionally be using advanced technologies instead of humans to grow revenues or minimize write-offs, observing that “advanced technologies may be less likely to waive fees, or to be open to negotiation on price.”

With regard to technical limitations and associated security risks, the CFPB highlights the following:

  • Because investment decisions may place higher priority on improving the ability of automated systems to promote financial products, financial institutions may underinvest in the reliability of a chatbot.  In addition, like any other technology, chatbots can crash, leaving customers with little to no customer service.
  • Bad actors often use chatbot technology to build fake impersonation chatbots that conduct phishing attacks to obtain information from consumers or from another chatbot.
  • Chat logs into which customers enter personal information provide another venue for privacy attacks, making it more difficult to fully protect the privacy and security of consumers’ personal and financial information.

With regard to risks to financial institutions, the CFPB highlights the following:

  • Risk of noncompliance with federal consumer financial laws, because the information chatbots provide may not be accurate, the technology may fail to recognize that a consumer is invoking federal rights, or it may fail to protect consumers’ privacy and data.  
  • Risk of diminished customer service and trust when chatbots reduce access to human support agents, with the impact being greater on customer segments where chatbot interactions have higher rates of failed resolution, such as groups with limited technical ability or limited English proficiency.
  • Risk of causing widespread harm to customers as a result of failing to promptly handle disputes and correct errors or providing inaccurate information.  

Last June, the CFPB issued a request for information seeking comments from the public “on what customer service obstacles consumers face in the banking market, and specifically what information would be helpful for consumers to obtain from depository institutions pursuant to section 1034(c) of the Consumer Financial Protection Act.”  Section 1034(c) requires depository institutions subject to CFPB supervision (i.e., those with more than $10 billion in assets) to provide timely responses to consumers’ requests for information about a financial product or service that the consumer obtained from the depository institution.  The CFPB described the RFI as “part of a broader effort to restore relationship banking in an era of consolidation and digitization.”

In the new report, the CFPB includes chatbots in “[t]he shift away from relationship banking and toward algorithmic banking.”  It concludes the report by commenting that this shift will have “a number of long-term implications that the CFPB will continue to monitor closely” and by stating that it is actively monitoring the market and “expects institutions using chatbots to do so in a manner consistent with their customer and legal obligations.”  In addition to continuing the CFPB’s focus on the use of artificial intelligence and machine learning in consumer financial services, the new report serves as the latest example of the CFPB’s additional focus on customer service, which, as here, often extends beyond regulatory requirements.