Following recent Senate testimony in which OpenAI CEO Sam Altman proposed additional Congressional oversight for the development of artificial intelligence (AI), Colorado Senator Michael Bennet has re-introduced the Digital Platform Commission Act, a bill that would enable the creation of a federal agency to oversee the use of AI by digital platforms. The proposed Federal Digital Platform Commission (FDPC) would have a broad mandate to “protect consumers from deceptive, unfair, unjust and unreasonable or abusive practices committed by digital platforms.”
Under the proposed bill, the Commission would have specific power to regulate the use of algorithmic systems by “systemically important digital platforms.” The bill delegates to the FDPC rulemaking authority to designate a platform as systemically important based on a number of factors, including whether a platform is available to the public and has “significant nationwide economic, social or political impacts”, the platform’s market power, unique daily users, and “the dependence of business users of the platform to reach customers.” Digital platforms that qualify as systemically important could face new rules requiring fairness and transparency of AI processes, as well as risk assessments and third-party audits to assess harmful content and anti-competitive bias.
According to media reports, the proposed bill includes updated definitions to specifically address AI, and in particular generative AI. These changes include a revised definition of “algorithmic processes”, which now includes computational processes using personal data that generate content or make a decision. Media reports also claim that the new bill would expand the definition of a digital platform to include companies that “offer content primarily generated by algorithmic processes.”
The proposed bill contains some of the hallmarks of other proposed AI regulation, such as the EU AI Act. Lawmakers worldwide appear to be focused on fairness and transparency of AI processes, safety and trust issues, and the potential for algorithmic bias. Lawmakers also appear to be coalescing around the idea of mandating third-party assessments for high-risk or systemically important AI.
One notable aspect of the Digital Platform Commission Act is its definition of AI, which does not provide exceptions for automated processes that include human decision-making authority, nor a requirement that the automated processes have a legal or substantially similar effect. This approach differs from other laws that regulate AI, such as the EU’s General Data Protection Regulation and the Colorado Privacy Act, which are more limited in their definitions of the “profiling” or “automated processing” that trigger compliance obligations, and which establish different obligations based on the level of human involvement. The scope of rulemaking for different kinds of AI is currently under consideration by the California Privacy Protection Agency, which has sought public comment on this question. How regulators address the threshold issue of what kind of AI triggers compliance obligations is a key question, with potentially significant impact.
Whether Congress moves forward on the Digital Platform Commission Act remains an open question. As with other proposed bills regulating AI, lawmakers appear wary of stifling technological innovation that is moving forward at a lightning pace. On the other hand, there appears to be some bipartisan recognition of the potential power and danger of wholly unregulated AI technologies, and an interest in the creation of a new executive agency with oversight responsibilities for AI.