The National Institute of Standards and Technology (NIST) has worked with Federal agencies to help them publicly document compliance with the White House Office of Management and Budget’s (OMB) governmentwide AI policy, NIST’s chief AI officer (CAIO) said Thursday.

OMB’s guidance, which went into effect at the end of March, tasked agencies with publicly posting on their websites a plan to achieve consistency with the Federal AI policies, including updating their internal policies, collecting information for their AI use case inventories, removing barriers to the responsible use of AI tools, and determining whether a use is rights- or safety-impacting, among other things. The compliance plans were due earlier this week.

“One particular effort that relates to the use of AI across the government is, as part of the [October 2023 AI executive order], OMB had a task to come up with a minimum risk requirement by agencies for the use of AI in their operations, and asked NIST to work with the OMB and agencies to come up with implementation plans,” NIST’s CAIO Elham Tabassi said during a Nextgov webinar on Sept. 26.

“It’s one thing to say we want AI systems to be safe or secure, then we have to work with the community to come up with what that means,” she continued. “What does safe AI systems mean? What does secure AI systems mean?”

“And then another step to that is assure that AI systems are safe and secure,” Tabassi said. “So, what are the tools – they call them benchmarks – that’s needed to go from just saying that the AI systems are trustworthy to actually test and assure that the systems are trustworthy and responsible?”

OMB required agencies to publicly post their compliance plans or to publish a statement that they do not use AI. OMB requires that those plans be updated every two years until 2036.

Each compliance plan is based on a template OMB provided, with three main sections corresponding to the guidance: strengthening AI governance; advancing responsible AI innovation; and managing risks from the use of AI.

The 12 subsections across the compliance plans cover topics such as AI talent, harmonization of AI requirements, AI sharing and collaboration, and minimum risk management practices, among other things.

For example, to bolster AI talent, the Department of Transportation (DoT) launched the AI Support and Collaboration Center (AISCC), which serves as a centralized, self-service hub for promoting the development of AI talent internally, providing pathways to AI occupations, and assisting employees affected by the application of AI to their work.

The DoT’s plan also notes that it is furthering AI sharing and collaboration by prioritizing “the sharing of custom-developed code, including commonly used packages and functions, models, and model weights, which have potential for reuse by other agencies and the public to the maximum extent possible.”

Agencies also listed ways they plan to remove barriers to the responsible use of AI, like creating sandboxes for safe experimentation at the Department of Housing and Urban Development or “making multiple cloud-hosted capabilities available in FY25” at NASA.

Some agencies listed their challenges in the compliance plan. For example, the Energy Department said it is challenged with providing AI tools “and maintaining compliance with evolving cybersecurity standards in the wake of evolving threat vectors.”

“Staff are unable to access and utilize the advanced AI tools and services provided by leading cloud service providers (CSPs) as many critical services are awaiting FedRAMP authorization,” DoE’s plan states. “Furthermore, there are feature parity gaps between the commercial offerings and the federal government-specific cloud environments. While this feature gap is closing, it is likely that the gap will continue as CSPs roll out new services and capabilities. Furthering the challenge is that existing cloud management security practices elongate the timeline between a service achieving FedRAMP approval and when the Department can offer those services to developers within the context of the existing managed cloud environments.”

DoE also listed “access to high-quality and well-curated data for AI training and consumption” as a “work in progress.”

“The existing data infrastructure lacks the necessary integration, governance, and management capabilities to support AI adoption effectively,” DoE wrote. “This is evidenced through fragmented data sources, inconsistent data quality, and inconsistent and insufficient data interoperability standards.”

Each agency’s compliance plan is posted publicly on its website. Some plans are located on the agency’s AI use case inventory page, while others appear on landing pages for policies or the office of the CAIO.

Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.