A top Defense Advanced Research Projects Agency (DARPA) official said this week that generative AI – like ChatGPT – will alter the threat landscape by making it easier for adversaries to produce high-quality phishing capabilities and ransomware campaigns.

DARPA’s Director of the Information Innovation Office, Kathleen Fisher, explained that as AI evolves, the technology will enable people with less experience to launch more cyberattacks.

Fisher said, “In the short term, the generative AI – where you can generate images and video and voice; you can generate fake websites really fast; you can generate custom phishing messages” will give threat actors “the ability to create high quality phishing capabilities.”


“I think we, right now, trust a lot of things that we should not – that we won’t be able to trust in the future,” she said during day two of NVIDIA’s Developer Conference on March 21.

“In recent times, you could have a fair guess that something might be spam, for example, if it was grammatically incorrect,” Fisher explained during the panel. “Well, ChatGPT writes really good grammatically – it’s better than people.”

“Because these systems can write code, they can write a lot of the scripts that are commonly used in ransomware,” she said. “So, the barrier for somebody who doesn’t actually have much training to generate phishing emails and personas and ransomware can go down quite a bit.”

Fisher emphasized that while generative AI will be able to “make more attacks,” the “severity of those attacks probably won’t be that high.”

She continued, “It won’t be that hard to defend against that low-level threat.”

Fisher also warned about the types of attacks the nation will begin to see from deepfake technology, which leverages generative AI to create false images and video of authority figures.

“We’re also seeing deepfake technology at a national security level,” Fisher said. “That capability is clearly there to be able to create very compelling fakes of people that you would normally trust.”

She continued, “We’re just seeing the tip of the iceberg of these kinds of attacks, but I think that given how easy that capability is to use and how it’s essentially available as a service these days, we will see a lot more of that in the future.”

Cate Burgan
Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.