The Department of Homeland Security (DHS) needs to update its AI risk assessment guidance for agencies that protect critical infrastructure, as a new report reveals that DHS's initial guidance didn't direct agencies to fully measure how much harm an attack could cause or gauge the probability of an attack.

A Dec. 18 report from the Government Accountability Office (GAO) explains that Federal agencies that protect critical infrastructure – also referred to as sector risk management agencies (SRMAs) – were required to submit AI risk assessments to DHS under President Biden’s October 2023 AI executive order (EO).

Under the AI EO, these agencies were given 90 days to develop and submit initial risk assessments for each of the critical infrastructure sectors to DHS by January 2024. GAO analyzed all 17 AI risk assessments submitted by the nine SRMAs.

“We found that all the agencies submitted the risk assessments as required, which is great. But unfortunately, none of the assessments fully address the six characteristics that GAO has found provide a sound foundation for effective risk assessment and mitigation,” Dave Hinchman, a director of IT and cybersecurity at GAO, explained in a podcast accompanying the report.

For instance, GAO notes that most assessments didn’t fully identify the potential risks associated with AI uses – such as monitoring and enhancing digital and physical surveillance – or the likelihood of a risk occurring.

Additionally, GAO says none of the assessments fully evaluated the level of each identified risk, meaning “that they did not include a measurement that reflected both the magnitude of harm (level of impact) and the probability of an event occurring (likelihood of occurrence).”

The agencies told GAO they ran into challenges when developing their risk assessments – one being that they had only 90 days from when the president issued the EO to when their first risk assessments were due.

“That’s not a lot of time for an agency to get something up and running, especially with the technology where a lot of folks are still trying to figure out how it’s being used and what they’re doing with it,” Hinchman said. “So, that was a reason that a lot of the agencies cited for them having incomplete assessments.”

The other challenge agencies pointed out to GAO was that they had trouble identifying AI use cases.

“They had trouble identifying these use cases because this is a new technology. It evolves quickly, and agencies also don’t have a lot of historical data about the risks that AI poses to the critical infrastructure,” Hinchman said. “They’re really just beginning this journey and just starting to keep those records.”

GAO is recommending that DHS update its guidance for the AI risk assessments to address the gaps, including activities such as identifying potential risks and evaluating the level of risk. It also notes that these updates should be shared with all of the SRMAs, which are required to produce these AI risk assessments annually.

DHS concurred with the recommendation.

“It’s great that the government is starting to look at and think about how AI can impact our nation’s critical infrastructures. But in doing that, and in considering the risk of artificial intelligence, we need to make sure that we’re examining that risk from every angle in a way that’s consistent with foundational practices,” Hinchman concluded.

Grace Dille is MeriTalk's Assistant Managing Editor covering the intersection of government and technology.