A bipartisan group of House members has introduced legislation that aims to protect Americans from AI-generated content – such as deepfakes – during the 2024 election cycle by setting standards, like watermarking, for identifying that content.

The “Protecting Consumers from Deceptive AI Act” – introduced by members of the bipartisan Task Force on AI – directs the National Institute of Standards and Technology (NIST) to develop standards for identifying and labeling AI-generated content and requires generative AI developers and online content platforms to provide disclosures on AI-generated content.

When introducing the bill on March 21, Reps. Anna Eshoo, D-Calif., and Neal Dunn, R-Fla., pointed to the urgency of regulating deepfakes ahead of the 2024 election cycle, noting that “nearly half the world’s population” is holding an election this year.

“AI offers incredible possibilities, but that promise comes with the danger of damaging credibility and trustworthiness,” said Rep. Eshoo. “AI-generated content has become so convincing that consumers need help to identify what they’re looking at and engaging with online. Deception from AI-generated content threatens our elections and national security, affects consumer trust, and challenges the credibility of our institutions.”

“The Protecting Consumers from Deceptive AI Act protects Americans from being duped by deepfakes and other means of deception by setting standards for identifying AI generated content,” said Rep. Dunn. “Establishing this simple safeguard is vital to protecting our children, consumers, and our national security.”

Democratic members of the House AI Task Force, Reps. Don Beyer, D-Va., and Valerie Foushee, D-N.C., are co-sponsors of the bill.

Specifically, the “Protecting Consumers from Deceptive AI Act” would:

  • Direct NIST to facilitate the development of standards for identifying and labeling AI-generated content, including through technical measures such as provenance metadata, watermarking, and digital fingerprinting;
  • Require generative AI developers to include machine-readable disclosures within audio or visual content generated by their AI applications, and to provide users the option to include metadata with additional information (a minimal sketch of what such a disclosure might look like follows this list);
  • Require online platforms to use those disclosures to label AI-generated content; and
  • Build on the voluntary commitments that several leading AI companies made last year, and the work of many experts and global stakeholders.
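
To illustrate the kind of machine-readable disclosure the bill contemplates, the sketch below embeds a small JSON payload in a PNG text chunk using Python’s Pillow library, then reads it back the way a platform might when deciding whether to label a file. The key name, payload fields, and model name are hypothetical; the bill leaves the actual format to the NIST standards process, and industry efforts such as the C2PA provenance specification take a more elaborate, cryptographically signed approach.

    import json
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    # Stand-in for an image produced by a generative model.
    image = Image.new("RGB", (64, 64), "white")

    # Hypothetical disclosure payload; real field names would come from
    # the standards NIST is directed to develop.
    disclosure = {"ai_generated": True, "generator": "example-model-v1"}

    # Embed the disclosure as a PNG text chunk so it travels with the file.
    info = PngInfo()
    info.add_text("ai-disclosure", json.dumps(disclosure))
    image.save("output.png", pnginfo=info)

    # A platform could read the chunk back to decide whether to label the post.
    with Image.open("output.png") as f:
        print(json.loads(f.text["ai-disclosure"]))

Metadata of this kind is easy to strip, which is one reason the bill also points to more robust techniques such as watermarking and digital fingerprinting.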

A similar bill, the “Protect Elections from Deceptive AI Act,” was introduced in the Senate in September 2023. It would ban the use of AI to generate deepfakes depicting Federal candidates in political ads in order to influence Federal elections.

Separately, President Biden’s AI executive order, unveiled in October, tasks the Department of Commerce with developing guidance on content authentication and watermarking to clearly label AI-generated content, which Federal agencies would use “to make it easy for Americans to know that the communications they receive from their government are authentic.”

Many lawmakers have stressed that Congress must pass AI regulation before the 2024 election cycle, with Sen. Mark Warner, D-Va., chair of the Senate Select Committee on Intelligence, noting that, due to AI, he is “gravely concerned” that the country is less prepared for foreign interference in its elections in 2024 than it was in 2020.

Cate Burgan
Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.