Private sector tech leaders warned lawmakers on Wednesday of foreign efforts to influence U.S. public opinion ahead of the 2024 election, and emphasized that various “misinformation campaigns” are increasingly being shaped by advanced artificial intelligence (AI) technology.

Executives from Microsoft, Alphabet, and Meta testified during a Senate Intelligence Committee hearing that groups from adversarial nations – including Russia, China, and Iran – have disseminated false information and misleading news reports about both Vice President Kamala Harris and former President Donald Trump.

“Those attacks are being shaped in part by new developments in artificial intelligence,” said Kent Walker, Alphabet’s president and chief legal officer.

“We are seeing some foreign state actors experimenting with generative AI to improve existing cyberattacks, like probing for vulnerabilities or creating spear phishing emails,” he said. “We see generative AI being used to more efficiently create fake websites, misleading news articles, and bot social media posts.”

Nick Clegg, president of global affairs at Meta, testified that while the company has not found definitive proof on its platforms of “generative AI-enabled tactics used to subvert elections in ways that have impeded [their] ability to disrupt them,” that does not mean people are not using AI to try to interfere in elections.

Clegg highlighted that Meta recently disrupted a Russian campaign that published numerous stories on fake news websites; the stories appeared to be AI-generated summaries of real articles. The campaign also created fictitious journalist personas with AI-generated profile photos.

He said that those findings demonstrate that Meta and the broader tech industry have existing defenses that can combat AI-generated misinformation. “However, we remain vigilant and will continue to adapt, as the technology does as well,” Clegg said.

Microsoft President Brad Smith explained that the tech industry must “prevent foreign nation-state adversaries from exploiting American products and platforms to deceive [the] public.”

“We do that with guardrails, especially around AI-generated content. But we also do it by identifying and assessing content on our platform, especially AI-generated content created by foreign states,” Smith said.

He offered an example, noting that the company had identified an “AI-enhanced” video from a Russian group that misrepresented Vice President Kamala Harris, depicting her saying things at a recent rally that she never actually said.

Smith also warned of a critical misinformation risk in the 48 hours before the election, when the rapid dissemination of false information could have significant consequences.

“There is a potential moment of peril ahead. Today we are 48 days away from the election … the most perilous moment will come, I think, 48 hours before the election,” said Smith. “That’s the lesson to be learned from … other races we have seen.”

While agreeing with Smith’s 48-hour assessment, Sen. Mark Warner, D-Va., who chairs the Intelligence Committee, emphasized that the 48 hours after the polls close could be “equally if not more significant.” He called on the tech executives to provide written plans detailing how their companies will safeguard against foreign interference during the post-election period.

“The post-election 48 hours are crucial, and I want specifics on the surge capacity your institutions will have as we approach that time,” Warner said.

X (formerly Twitter) did not send a representative to the hearing, prompting frustration from Sen. Warner, who repeatedly voiced his disappointment that the Elon Musk-owned company chose not to send anyone to testify after being invited to do so.

Lisbeth Perez is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.