The Federal Communications Commission (FCC) voted on Nov. 15 to launch a formal inquiry into the impact of artificial intelligence (AI) technologies on illegal and unwanted robocalls and robotexts – an area in which the FCC has a long regulatory history.  

As part of the notice of inquiry, the FCC is seeking public comment through Dec. 18.

The agency said the inquiry – which stops short of a rulemaking proceeding but often serves as a first step toward the FCC considering formal rulemakings – will start with answering fundamental questions, including how to define AI in the context of robocalls and robotexts.

The inquiry also will look at current uses of AI tech in calling and texting, and the “impact of emerging AI technologies on consumer privacy rights under the Telephone Consumer Protection Act.” That law, approved by Congress in 1991, restricts telemarketing calls and the use of automatic telephone dialing systems and artificial or prerecorded voice messages.

The FCC said the results of the notice of inquiry will inform “what, if any, next steps the Commission should take to address these issues.” 

“The agency will assess both AI’s potential to positively and negatively affect consumers,” the FCC said after commissioners voted to approve the notice of inquiry.  

“As artificial intelligence technology becomes more prevalent, it presents opportunities to protect consumers but it can also pose privacy and safety challenges,” the FCC said.   

“In the case of robocalls and robotexts, AI could improve analytics tools used to block unwanted calls and texts and restore trust in our networks,” the agency said. “But AI could also permit bad actors to more easily defraud consumers through calls and text messages, such as by using technology to mimic voices of public officials or other trusted sources.” 

The FCC said the notice of inquiry is part of a broader effort at the agency to explore “opportunities and challenges that AI and machine learning pose to communications networks.” The agency and the National Science Foundation have hosted a workshop on the issue, and the FCC’s Technological Advisory Council is also looking into similar issues through its working group on AI and machine learning.  

“The anxiety about these technology developments is real,” said FCC Chairwoman Jessica Rosenworcel in a statement on the agency’s approval of the notice of inquiry. 

“But I think we make a mistake if we only focus on the potential for harm,” she said. “We need to equally focus on how artificial intelligence can radically improve the tools we have today to block unwanted robocalls and robotexts.” 

 “We are talking about technology that can see patterns in our network traffic unlike anything we have today,” Rosenworcel said. “This can lead to the development of analytic tools that are exponentially better at finding fraud before it ever reaches us at home. Used at scale, we can not only stop this junk, we can help restore trust in our networks.” 

“That is why today we are launching an inquiry to ask how artificial intelligence is being used right now to recognize patterns in our network traffic and how they could be used in the future,” she said. “We know the risks that this technology involves, but we also want to harness the benefits – just like the recently released Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence recommends.” 

John Curran is MeriTalk's Managing Editor covering the intersection of government and technology.