The emergence of adversarial artificial intelligence (AI) requires special attention. AI users need to understand the threat space and organize responsible AI mitigations, said Pamela Isom, director of the Artificial Intelligence and Technology Office (AITO) at the Department of Energy (DoE), on Oct. 18 at the AI World virtual summit.

The DoE, according to Isom, has been on a strategic mission to advance trustworthy AI and mitigate agency risks. Isom and her team developed the AI Risk Management Playbook (AI RMP), available only to DoE users at this time.

“The AI RMP is a dynamic system that offers DoE users 100+ unique risks and mitigation techniques, with the ability to expand. It also includes a search capability that allows users to filter their search according to lifecycle stage, risk type, and trustworthy AI principle,” Isom said.
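The playbook itself is not public, but the filtering Isom describes — narrowing a catalog of risks by lifecycle stage, risk type, and trustworthy AI principle — can be sketched in a few lines. Everything below (field names, sample entries, mitigations) is illustrative and assumed, not taken from the actual AI RMP.

```python
from dataclasses import dataclass

# Hypothetical catalog entries; the fields mirror the three filter
# dimensions Isom mentions, but the values are invented examples.
@dataclass
class RiskEntry:
    name: str
    lifecycle_stage: str   # e.g. "training", "deployment"
    risk_type: str         # e.g. "security", "reliability"
    principle: str         # e.g. "robustness", "transparency"
    mitigation: str

CATALOG = [
    RiskEntry("data poisoning", "training", "security", "robustness",
              "validate and provenance-check training data"),
    RiskEntry("model drift", "deployment", "reliability", "robustness",
              "monitor output distributions and retrain on drift"),
]

def filter_risks(catalog, **criteria):
    """Return entries matching every given field=value pair."""
    return [entry for entry in catalog
            if all(getattr(entry, field) == value
                   for field, value in criteria.items())]

# Filter by one dimension or several at once.
training_risks = filter_risks(CATALOG, lifecycle_stage="training")
```

The keyword-argument approach lets a user combine any subset of the three filter dimensions in a single query, which matches the search behavior described above.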

The playbook also explains to users what the lifecycle of trustworthy AI looks like and where in that process they need to consider additional security, offering concrete suggestions for developing and deploying trustworthy AI. Such suggestions include securing the supply chain of AI-driven hardware and software, securing training and testing of machine learning models, and monitoring model output for potential security risks.
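The last suggestion — monitoring model output for potential security risks — can take many forms; one minimal version watches for a burst of low-confidence predictions, which can signal tampered inputs or model drift. The sketch below is a generic illustration under assumed thresholds, not DoE guidance or the playbook's actual mitigation.

```python
from collections import deque

class OutputMonitor:
    """Flag when too many recent predictions fall below a confidence
    floor. Window size and thresholds are illustrative assumptions."""

    def __init__(self, window=100, min_conf=0.5, alert_fraction=0.3):
        self.scores = deque(maxlen=window)  # sliding window of confidences
        self.min_conf = min_conf
        self.alert_fraction = alert_fraction

    def record(self, confidence: float) -> bool:
        """Record one prediction confidence; return True when the
        fraction of low-confidence outputs in the window exceeds
        the alert threshold."""
        self.scores.append(confidence)
        low = sum(1 for s in self.scores if s < self.min_conf)
        return low / len(self.scores) > self.alert_fraction
```

In practice a monitor like this would feed an alerting pipeline rather than return a boolean, but the sliding-window structure is the core idea: continuous checking at machine speed, which is the advantage Isom attributes to AI-assisted cybersecurity later in the piece.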

“By understanding the lifecycle of AI, we can better ensure that our data and our AI-enabled devices are trustworthy,” Isom said.

The AI RMP also ties to several executive orders (EOs) signed by President Joe Biden and former President Donald Trump. Isom explained that the playbook considered the principles laid out in Trump’s EO promoting trustworthy AI in the Federal government.

“It recognizes the power of AI to improve operations, processes, and procedures while still making sure it remains secure, precise, and accurate,” Isom said.

The AI RMP also ties to Biden’s EO on improving cybersecurity in the United States, explaining to users why AI-enabled capabilities are necessary to the agency’s cybersecurity framework. Specifically, as Federal agencies move toward a zero-trust architecture, she added, AI will become a necessary component.

“In my opinion, we do not use AI enough when it comes to cybersecurity, and our adversaries do. AI can continuously look for suspicious activity at a much faster rate compared to human teams. It can also pinpoint questionable activity that human teams often miss,” Isom said.

Additionally, Isom emphasized that the AI RMP recognizes the importance of the workforce in building and leveraging trustworthy AI. The workforce, she added, is critical to successfully operationalizing the risk management framework.

“They need to understand every aspect of AI-enabled technology, from the lifecycle of that AI to the risks and benefits associated with it,” Isom said.

The DoE is working with the White House’s Office of Science and Technology Policy to release an external version of the AI RMP, specifically to share with other Federal agencies. The department, Isom added, also plans to work with the National Institute of Standards and Technology (NIST) to further develop the AI RMP and align it with the principles in NIST’s AI Risk Management Framework.

Lisbeth Perez
Lisbeth Perez is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.