Conrad Stosz, the Office of Management and Budget’s (OMB) director of artificial intelligence (AI) and a key player in helping OMB formulate final guidance to Federal agencies on implementing President Biden’s landmark AI executive order (EO), told MeriTalk in an exclusive interview that many public comments on the draft guidance have focused on issues of transparency, consistency, and risk management.
The AI lead said that OMB is taking those three issues into account as it works to polish final policy guidance due at the end of March, as ordered by the EO.
Stosz also said that the Biden administration is preparing to issue a national security memorandum on risk management practices for national security-related uses of AI technology later this year. The AI EO unveiled by the White House in late October in large part focuses on how the government will treat AI technologies outside of the national security arena.
Following release of the AI EO on Oct. 30 of last year, OMB in November issued its draft policy on advancing governance, innovation, and risk management for agency use of AI.
The public comment period on the 26-page draft policy for the use of AI in the Federal government closed on Dec. 5, 2023.
Highlights of the guidance include directives for Federal agencies to appoint chief AI officers and to adopt a lengthy list of safeguards to follow while developing AI applications.
In his interview with MeriTalk, Stosz highlighted the key areas of the draft guidance that have drawn outsized interest from commenters.
Stosz noted that the public comment period yielded nearly 250 comments from organizations – including big names like Microsoft, Google, and OpenAI – totaling roughly 2,000 pages for OMB to sift through.
“A lot of commenters wanted to see that agencies would implement the OMB guidance in a consistent way, across agencies, and that people would know what to expect from the government on AI broadly – regardless of which agency they interacted with,” Stosz said. “If you look at the draft guidance, not everything had been standardized completely, and many aspects of AI governance and risk management and how they’re conceptualized in the memo are not specific to a particular AI system and how it’s used.”
Stosz said that OMB has already begun taking steps to promote consistency across agencies. For example, he said the White House convened a council of agencies’ AI leaders in December to discuss the draft guidance, and more broadly to share best practices and lessons learned. “That’s something that we’re going to continue doing on an ongoing basis,” Stosz said.
“Commenters during the comment period also expressed a range of opinions about how to best implement the particular risk management practices that would be mandated by the OMB guidance,” Stosz said. “AI risk management is not something that agencies have as much experience with, at least compared to areas like cybersecurity or privacy.”
The AI lead said OMB is actively working to refine how it will define and communicate the practices for the final draft guidance. He also noted that OMB is working with the National Institute of Standards and Technology to provide further resources and details to agencies “to aid their implementation on the ground of these practices.”
Stosz noted that the Federal government has a broad diversity of AI use cases – with OMB tracking more than 700 publicly reported use cases that agencies are currently deploying. Given that breadth, transparency was another major concern for commenters.
“They wanted to better understand how Federal agencies are using AI, how they’re assessing its impacts, and how the agencies are ultimately managing the risks from AI they’re using,” Stosz said. “A lot of the comments focused on particular ways that expanded transparency can be baked into the inventory of AI use cases that agencies are required to collect each year and that we provide guidance on at OMB.”
Stosz added that OMB is “looking forward to” considering those comments as it updates the instructions for the 2024 inventory, which it will issue this year.
The final key concern commenters highlighted with the draft OMB guidance revolved around national security uses of AI.
“We also got a lot of comments urging OMB to ensure that national security uses of AI have adequate measures to protect safety and trust and rights. Many of those use cases for national security were exempted from the draft OMB guidance and the executive order’s tasking of the OMB guidance,” Stosz said. “Those use cases – particularly things like classified use cases – may inherently not have the same level of transparency as many civilian use cases.”
He added, “But the administration is also working on a national security memorandum that’s going to direct risk management practices for national security use of AI that affect Americans’ rights and safety, and that’s going to be released later this year.”
Stosz didn’t say exactly how the final guidance will change as a result of these comments, but noted that OMB has reviewed all of them, and “we’ve identified a range of improvements to the guidance based on the public input.”
The AI EO directs OMB to issue its final AI guidance within 150 days of the order – which would fall at the end of March.
Stosz said, “As far as the timeline goes, the administration just hit all of its 90-day timelines and this whole government is really moving out to make sure that we’re hitting all of our deadlines as laid out in the executive order.”
The AI lead said his team is “really encouraged” to see Federal agencies moving quickly on the AI EO’s “aggressive timelines,” and even beginning to implement some of OMB’s draft policy – like tapping chief AI officers.
“The speed with which agencies are choosing their chief AI officers really showcases how seriously many agencies are taking it, and how they’re pushing ahead with their efforts to use AI, to govern it, and also to manage its risks,” Stosz said.
“The speed that we’re seeing is warranted both by the promise that AI has to improve public services – whether it’s making it easier to get access to benefits or preventing drug shortages or fighting wildfires or the many other use cases we see,” he said. “But the risks involved are also urgent, and as these Federal agencies adopt AI, I think they’re also recognizing the need to have leadership in place to govern its use, to take concrete steps to manage its risks, and to take those steps quickly.”