The Bipartisan Senate AI Working Group, led by Senate Majority Leader Chuck Schumer (D-NY), released a comprehensive roadmap for AI policy entitled “Driving U.S. Innovation in Artificial Intelligence.” The wide-ranging Roadmap is a call to action to Congress, federal agencies, and the private sector to foster advancements in, and address risks posed by, artificial intelligence.

The Roadmap is the culmination of more than 50 hearings and nine Insight Forums with input from over 150 industry experts (including Bill Gates, Elon Musk, Mark Zuckerberg, NVIDIA CEO Jensen Huang, and OpenAI CEO Sam Altman, among others) focused on addressing the potential impacts of AI.

Key Takeaways include:

  • Increased funding for AI research and development
  • Bipartisan collaboration on AI legislation
  • Minimized job displacement due to advances in AI
  • Maintained and advanced U.S. superiority in AI technologies for national security
  • Election integrity in the face of AI
  • Protection of children from harms posed by AI and social media

At the outset, the Roadmap calls for robust investment in AI research and development, setting a goal for the Executive Branch and the Senate Appropriations Committee to reach, “as soon as possible,” the $32 billion per year spending level for non-defense AI innovation proposed by the National Security Commission on Artificial Intelligence (NSCAI). The Roadmap is broken down into eight primary sections that lay the groundwork for future legislation and private industry action.

Supporting U.S. Innovation in AI: The Roadmap calls for increased federal spending to fund “a cross-government AI research and development (R&D) effort, including relevant infrastructure that spans the Department of Energy (DOE), Department of Commerce (DOC), National Science Foundation (NSF), National Institute of Standards and Technology (NIST), National Institutes of Health (NIH), National Aeronautics and Space Administration (NASA), and all other relevant agencies and departments.” This spending includes fully funding the CHIPS and Science Act (P.L. 117-167), as well as funding the DOC, DOE, NSF, and Department of Defense (DOD) to support semiconductor R&D specific to cutting-edge AI software and hardware.

AI and the Workforce: The Roadmap recognizes that “workers across the spectrum, ranging from blue collar positions to C-suite executives, are concerned about the potential for AI to impact their jobs…including potential displacement of workers.” Thus, the Roadmap calls on committees of jurisdiction to “make certain that American workers are not left behind” by developing legislation related to “training, retraining, and upskilling the private sector workforce to successfully participate in an AI-enabled economy.”

High Impact Uses of AI: The Roadmap identifies high impact uses of AI and acknowledges concern around AI “black boxes” that “raise questions about whether companies with such systems are appropriately abiding by existing laws.” One area of concern, as discussed in our previous alert in relation to Senate Bill 5351, is covered entities that incorporate artificial intelligence systems into decision-making processes. The Working Group “believes that existing laws, including related to consumer protection and civil rights, need to consistently and effectively apply to AI systems and their developers, deployers, and users.” Thus, the Roadmap encourages committees to “consider identifying any gaps in the application of existing law to AI systems that fall under their committees’ jurisdiction and, as needed, develop legislative language to address such gaps.” The Roadmap also highlights the particular vulnerability of children and the risks posed by AI and social media. The Working Group encourages committees to “develop legislation to address online child sexual abuse material (CSAM), including ensuring existing protections specifically cover AI-generated CSAM,” and particularly encourages consideration of legislation to address issues surrounding so-called “deepfakes.”

Elections and Democracy: The Working Group recognizes the risks AI poses to election integrity. During the 2024 election cycle, voters reported AI-generated robocalls impersonating candidates. For example, ahead of the 2024 New Hampshire primary, an AI-generated imitation of President Joe Biden’s voice called voters, encouraging them to “save your vote” and thus not vote in the primary. Accordingly, the Roadmap “encourages the relevant committees and AI developers and deployers to advance effective watermarking and digital content provenance as it relates to AI-generated or AI-augmented election content.” The Roadmap calls on AI deployers and content providers to “implement robust protections in advance of the upcoming election to mitigate AI-generated content that is objectively false, while still protecting First Amendment rights.”

Privacy and Liability: Acknowledging that rapid technological advancement and varying degrees of autonomy in AI systems present challenges in assigning liability to AI companies and users, the Working Group encourages “relevant committees to consider whether there is a need for additional standards, or clarity around existing standards, to hold AI developers and deployers accountable if their products or actions cause harm to consumers, or to hold end users accountable if their actions cause harm.”

Transparency, Explainability, Intellectual Property, and Copyright: Acknowledging that advancements in AI go hand in hand with intellectual property, the Roadmap encourages review of existing and forthcoming reports from the U.S. Copyright Office and U.S. Patent and Trademark Office on the impact of AI on intellectual property law. The Roadmap encourages committees to “take action as deemed appropriate to ensure the U.S. continues to lead the world on this front,” including consideration of “federal policy issues related to the data sets used by AI developers to train their models.”

Safeguarding Against AI Risks: The Working Group, drawing on “insights provided by experts at the forums on a variety of risks that different AI systems may present,” encourages companies to “perform detailed testing and evaluation to understand the landscape of potential harms and not to release AI systems that cannot meet industry standards.” The Roadmap further encourages committees to “investigate the policy implications of different product release choices for AI systems,” and in particular charges committees to “understand the differences between closed versus fully open-source models” and to develop an analytical framework that “specifies what circumstances would warrant a requirement of pre-deployment evaluation of AI models.”

National Security: National security is a key concern of the Working Group, and the Roadmap encourages the DOD and other agencies to “develop career pathways and training programs for digital engineering, specifically in AI,” while encouraging the DOD, DOE, and Office of the Director of National Intelligence to “work with commercial AI developers to prevent large language models, and other frontier AI models, from inadvertently leaking or reconstructing sensitive or classified information.”

In conclusion, the Roadmap emphasizes the need for ongoing collaboration between congressional committees and the Executive Branch, underscoring the importance of a well-coordinated approach to AI policy and legislation to ensure the United States remains at the forefront of AI innovation while managing its risks effectively.