As the Federal government works to manage the potential risks of AI-driven systems, the head of the State Department’s Bureau of Cyberspace and Digital Policy said on June 21 that one positive AI application he’s excited about is using the technology to write more secure software.

At an event hosted by the Hudson Institute, Nathaniel Fick – the State Department’s inaugural ambassador at large for cyberspace and digital policy – explained how this application of AI tech can help to address “this sort of happy-go-lucky, laissez-faire software developer world” that introduces untrusted code and vulnerabilities.

“One of the good applications of AI that I’m most excited about is using AI to write better software. That’s pretty exciting to see the bug rate go way down,” Fick said.

“It is a road to really realize one of the pillars of the National Cybersecurity Strategy, which is really focused on building better software, and incentivizing that, and creating kind of incentive structures and liability punitive structures to require the developers of software that we all rely upon to build good stuff,” he said.

The White House released its National Cybersecurity Strategy (NCS) in March and is moving quickly to develop an implementation plan for the strategy, as well as a workforce strategy aimed at building a more resilient cyber workforce.

One key pillar of the NCS, as Fick mentioned, is to “rebalance” the responsibility to defend cyberspace by shifting the cybersecurity burden away from individuals, small businesses, and local governments, and onto the organizations that are best-positioned to reduce risks for all of us – such as software developers.

However, Fick acknowledged that AI also has a dangerous flip side, and he stressed the importance of developing AI regulations. He explained that there are four U.S. companies right now that have leadership positions in AI technologies: Google, Microsoft, OpenAI, and a smaller company called Anthropic.

“As we think about timelines, how much time do we have, how long is it going to take to develop a fifth model that has that capability – a fifth model that’s either built by a company that’s less trustworthy, or a model that’s open sourced? The best answer I can get is it’s less than a year,” Fick said. “We don’t have a lot of time. If this is 1945, we don’t have until 1957 to put together some sort of a regulatory or governance infrastructure.”

So, what is the Federal government going to do about it? Fick said the first step is to start with those four big companies, which will sign up for “voluntary commitments” around AI guardrails.

“Voluntary commitments by definition will not stifle innovation. They, I think also, are likely to be a starting point but not an ending point. But they have the great benefit of speed,” Fick said. “We’ve got to get something out in the world now. And then we’re going to iterate on it and build on it over time.”

Grace Dille
Grace Dille is MeriTalk's Assistant Managing Editor covering the intersection of government and technology.