The White House Office of Management and Budget (OMB) today released its finalized policy for the use of artificial intelligence (AI) within Federal agencies, delivering on time a core component of the administration’s October 2023 AI executive order (EO).
The first-of-its-kind guidance details AI use in five key areas: risk management, transparency, responsible innovation, workforce, and governance.
OMB first released a draft of this guidance on Nov. 1, on the heels of President Biden’s AI EO.
In an exclusive interview with MeriTalk last month, OMB Director of AI Conrad Stosz, a key player in formulating the final guidance, highlighted how the document evolved during the public comment period, which he said yielded more than 250 comments, totaling roughly 2,000 pages, from organizations including Microsoft, Google, and OpenAI.
The final memo doesn’t stray much from the original draft document, but Stosz noted during the interview with MeriTalk that commenters had particular interest in risk management and transparency.
Upon announcing the release of the new guidance today, Vice President Kamala Harris said on a press call with reporters that OMB’s policy presents “three new binding requirements to promote the safe, secure, and responsible use of AI by our Federal government.”
“First, we are announcing new standards to protect rights and safety,” Harris told reporters. “When government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people.”
“I’ll give you an example,” she continued. “If the Veterans Administration wants to use AI in VA hospitals to help doctors diagnose patients, they would first have to demonstrate that AI does not produce racially biased diagnoses.”
According to OMB’s guidance, by Dec. 1, 2024, Federal agencies will be required to implement concrete safeguards when using AI in a way that could impact Americans’ rights or safety.
An accompanying fact sheet notes that these safeguards include a range of mandatory actions to reliably assess, test, and monitor AI’s impacts on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI. These safeguards apply to a wide range of AI applications, from health and education to employment and housing.
OMB stressed that if an agency cannot apply these safeguards, it must cease using the AI system, unless agency leadership justifies why ceasing use would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations.
The second binding requirement, VP Harris told reporters, relates to transparency.
“The American people have a right to know when and how their government is using AI, and that it is being used in a responsible way,” Harris said. “And we want to do it in a way that holds leaders accountable for the responsible use of AI.”
“Transparency often facilitates, and we believe should facilitate, accountability,” she continued, adding, “And so today, President Biden and I are requiring that every year, U.S. government agencies publish online a list of their AI systems, assessments of the risks those systems might pose, and how those risks are being managed.”
Under a Trump-era EO on AI, Federal agencies are required to make public an AI use case inventory. In October, OMB created a new database on AI.gov that compiled more than 700 AI use cases within the Federal government.
Today’s guidance requires agencies to release expanded annual inventories of their AI use cases. OMB issued detailed draft instructions to agencies for publicly reporting their AI use cases alongside the guidance.
Finally, the VP announced that the third requirement relates to internal oversight. “We have directed all Federal agencies to designate a chief AI officer with the experience, expertise, and authority to oversee all … AI technologies used by that agency,” she said.
“This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use,” Harris told reporters today.
Agencies got a head start on designating chief AI officers (CAIOs), and OMB said it has regularly convened these officials in a new Chief AI Officer Council since December. The guidance also calls on agencies to establish AI Governance Boards by May 27, 2024. As of today, the Departments of Defense, Veterans Affairs, Housing and Urban Development, and State have established these governance bodies.
Finally, OMB’s guidance directs agencies to expand and upskill their AI talent. Specifically, the Biden-Harris administration has committed to hiring 100 AI professionals by summer 2024 as part of the AI EO’s National AI Talent Surge and will hold a career fair for AI roles across the Federal government on April 18.
“These three new requirements have been shaped in consultation with leaders from across the public and private sector, from computer scientists to civil rights leaders to legal scholars and business leaders,” VP Harris said. “President Biden and I intend that these domestic policies will serve as a model for global action.”
According to senior administration officials on the call with reporters, the OMB guidance offers overarching governmentwide policy for AI use as well as specific policies for certain Federal agencies.
“I was proud to sign the Office of Management and Budget’s first-ever policy focused on governing Federal agencies’ use of AI,” OMB Director Shalanda Young told reporters. “This policy is a major milestone for President Biden’s landmark AI executive order, and it demonstrates that the Federal government is leading by example in its own use of AI.”
She added, “We are committed to following the same rules and guidelines that we are encouraging others to adopt at home and abroad.”
OMB also announced the release of a request for information on the responsible procurement of AI in government. Under the AI EO, the agency is required to release guidance governing AI use under Federal contracts within 180 days of today’s finalized policy on government AI use.
With the release of today’s final guidance and additional actions from OMB, the administration is building on its January announcement that agencies had met their initial 90-day AI EO deadlines; agencies have now also completed the 150-day actions tasked by the EO.
“The Biden Administration’s AI Memo is a crucial step forward in the federal government’s responsible embrace of AI,” said Rep. Gerry Connolly, D-Va., ranking member of the House Subcommittee on Cybersecurity, IT, and Government Innovation. “Importantly, this memo seeks to address two critical issues – accountability and talent. As we have learned in other federal IT modernization efforts, organizational charts matter. By clearly empowering a Chief AI Officer in each agency, we are not only elevating a focus on AI, we are providing accountability. But the only way we will fulfill our AI goals is by investing in a talented workforce. The creation of AI Talent Leads is one of many ways we will recruit and retain the future AI federal workforce.”