The Government Accountability Office (GAO) released a report on deepfakes, a technology that uses artificial intelligence (AI) to depict someone appearing to say or do something they never did.

While the report mentions beneficial uses of deepfakes in such areas as entertainment and communication, it also highlights ways deepfakes could be used for harm.

“Deepfakes could be used to influence elections or incite civil unrest, or as a weapon of psychological warfare,” the report states.

Researchers are working on ways to detect deepfakes, but advances in detection can in turn drive the creation of more sophisticated deepfakes. "Detection may not be enough," the report states.

Even deepfakes that are identified as fake can still spread disinformation, because many viewers may be unaware of the technology or may not take the time to verify what they see.

“What can be done to educate the public about deepfakes?” the report asks.

Several states, including California, Texas, and Virginia, have passed legislation aimed at addressing deepfakes. At the federal level, First Amendment concerns and enforcement challenges complicate potential legislation.

Section 5709 of the National Defense Authorization Act for Fiscal Year 2020, signed in December by President Donald Trump, does require the Director of National Intelligence (DNI) to submit a comprehensive report on the foreign weaponization of deepfakes to the congressional intelligence committees. The provision also requires the DNI to notify Congress of foreign deepfake and disinformation activities targeting elections.

Dwight Weingarten
Dwight Weingarten is a MeriTalk Staff Reporter covering the intersection of government and technology.