The Federal Communications Commission (FCC) is set to vote next month on a proposal that could lead to new rules protecting consumers from robocalls whose content is generated by artificial intelligence.
The proposal is backed by FCC Chairwoman Jessica Rosenworcel, who leads the Democratic majority on the five-member commission.
If the proposal is approved at the FCC’s monthly commissioners’ meeting set for Aug. 7, the agency would then seek public comment on a range of topics including:
- Defining AI-generated calls;
- Requiring callers to disclose their use of AI-generated calls;
- Supporting technologies that alert and protect consumers from unwanted and illegal AI robocalls; and
- Protecting positive uses of AI to help people with disabilities utilize telephone networks.
The notice of proposed rulemaking follows a notice of inquiry the FCC launched in November 2023 into the impact of AI technologies on illegal and unwanted robocalls and robotexts, an area in which the agency has a long regulatory history. Notices of inquiry often precede formal rulemaking proceedings at the agency.
The notice of inquiry gathered information on current uses of AI technology in calling and texting, and on the “impact of emerging AI technologies on consumer privacy rights under the Telephone Consumer Protection Act.” That law, passed by Congress in 1991, restricts telemarketing calls and the use of automatic telephone dialing systems and artificial or prerecorded voice messages.
“Bad actors are already using AI technology in robocalls to mislead consumers and misinform the public,” Rosenworcel said in announcing the August vote.
“That’s why we want to put in place rules that empower consumers to avoid this junk and make informed decisions,” she said.