Lawmakers and industry leaders on Tuesday highlighted ways the Cybersecurity and Infrastructure Security Agency (CISA) should seek to secure artificial intelligence (AI) technologies, starting with integrating the emerging technology into the agency’s existing cyber policies and guidelines.

Leadership on the House Homeland Security Subcommittee on Cybersecurity and Infrastructure Protection praised CISA for its recent release of an AI roadmap, which seeks to integrate AI into the agency's existing principles like secure by design and software bills of materials (SBOMs).

“I was pleased that CISA’s recently released AI roadmap didn’t seek to reinvent the wheel where it wasn’t necessary, and instead integrates AI into existing efforts like secure by design [and] software bill of materials,” subcommittee Ranking Member Eric Swalwell, D-Calif., said in his opening statement during the hearing on Dec. 12.

The Biden administration’s recent AI executive order (EO) tasked CISA with ensuring the security of the technology itself and developing cybersecurity use cases for AI.

But the effectiveness of the EO will come down to its implementation, subcommittee Chair Andrew Garbarino, R-N.Y., emphasized.

“The timelines laid out in the EO are ambitious, and it is positive to see CISA’s timely release of their Roadmap for AI and internationally supported Guidelines for Secure AI System Development,” the chairman said. “At its core, AI is software and CISA should look to build AI considerations into its existing efforts rather than creating entirely new ones unique to AI.”

“CISA should ensure that its initiatives are iterative, flexible, and continuous, even after the deadlines in the EO pass, to ensure the guidance it provides stands the test of time,” Rep. Garbarino added.

Private sector cyber experts also applauded CISA's recent work to secure AI, but offered next steps the agency can take to safeguard the nation against AI-related risks, such as a rise in cyberattacks.

“While secure by design and CISA roadmap for artificial intelligence are a good foundation, it can go deeper in providing clear guidance on how to tactically extend the methodology to artificial intelligence,” Ian Swanson, the founder and CEO of cybersecurity company Protect AI, said.

Specifically, Swanson called for CISA to begin implementing machine learning bills of materials (MLBOMs), an "ingredient list" concept analogous to SBOMs but built specifically for machine learning.
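To illustrate the idea, the sketch below shows what an MLBOM-style manifest might capture for a machine learning system: the model, its training data, and key software dependencies, each with version and integrity information. This is a conceptual example only; the field names and structure are assumptions for illustration, not a CISA, Protect AI, or standardized MLBOM schema.

```python
from dataclasses import dataclass, field
from typing import List
import json


@dataclass
class MLBOMComponent:
    """One 'ingredient' in a machine learning bill of materials (illustrative)."""
    name: str      # e.g., model weights, dataset, or library
    kind: str      # "model", "dataset", or "dependency"
    version: str   # version or snapshot identifier
    source: str    # where the artifact was obtained
    sha256: str    # integrity hash for supply chain verification


@dataclass
class MLBOM:
    """A minimal, hypothetical machine learning bill of materials."""
    system_name: str
    components: List[MLBOMComponent] = field(default_factory=list)

    def to_json(self) -> str:
        # Serialize the manifest so it can be shared or audited.
        return json.dumps(
            {
                "system": self.system_name,
                "components": [vars(c) for c in self.components],
            },
            indent=2,
        )


# Example: recording the model, its training data, and one dependency
bom = MLBOM(
    system_name="fraud-detection-service",
    components=[
        MLBOMComponent("fraud-model", "model", "2.3.0", "internal-registry", "ab12..."),
        MLBOMComponent("transactions-2023", "dataset", "v5", "s3://example-bucket", "cd34..."),
        MLBOMComponent("scikit-learn", "dependency", "1.3.2", "pypi", "ef56..."),
    ],
)
print(bom.to_json())
```

The point of the analogy is the same as with SBOMs: if defenders know exactly which models, datasets, and libraries went into an AI system, they can trace and respond to a compromised component.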

Debbie Taylor Moore, senior partner and vice president of global cybersecurity for IBM, reemphasized that CISA should focus on executing its recently released roadmap for AI rather than reinventing the wheel. CISA’s future focus should be on areas including education and workforce development; improving understanding of AI and its risks; and leveraging existing information sharing infrastructure for AI, Moore said.

On the latter topic, Alex Stamos, chief trust officer at cybersecurity company SentinelOne, elaborated that CISA can leverage its information sharing infrastructure to break down silos and help defenders better collaborate with one another.

“I think their initial guidelines are smart,” Stamos said. “A key thing for CISA to focus on right now is to get the reporting infrastructure up.”

“One of the problems we have as defenders is we don’t talk to each other enough. The bad guys are actually working together. They hang out on these forums, they trade code, they trade exploits. But when you deal with a breach … you’re not supposed to talk to anybody and not send any emails and not work together,” Stamos said. “And I think CISA breaking those silos apart so the companies are working together is a key thing that they can do.”
