Rep. Nancy Mace, R-S.C., today took the Office of Management and Budget (OMB) and the Office of Personnel Management (OPM) to task for what she characterized as their slow work in meeting statutory deadlines to create Federal government policies dealing with artificial intelligence technologies.

At a hearing of the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation that she chairs, Rep. Mace pointed to requirements of the AI in Government Act approved by Congress in 2020.

That law gave OMB nine months to issue guidance to Federal agencies on the acquisition and use of AI systems, and it gave OPM 18 months to issue a report assessing Federal AI workforce needs.

“But the administration is way overdue in complying with this law,” Rep. Mace said at today’s hearing.


“OMB is now more than two years behind schedule in issuing guidance to agencies. And OPM is more than a year overdue in determining how many Federal employees have AI skills – and how many need to be hired or trained up,” she said.

“Most of the AI policy debate is focused on how the Federal government should police the use of AI by the private sector,” Rep. Mace said. “But the executive branch can’t lose focus from getting its own house in order. It needs to appropriately manage its own use of AI systems – consistent with the law.”

“Bottom line is we need the government to harness AI to improve its operations, while safeguarding against these potential hazards,” the subcommittee chair said.

Rep. Mace also said she was “developing further legislation to ensure Federal agencies employ AI systems effectively, safely and transparently,” but offered no details beyond that.

Rep. Gerry Connolly, D-Va., the panel’s ranking member, cited the current use of AI technologies by Federal agencies including U.S. Cyber Command, the Department of Homeland Security, and the Department of Housing and Urban Development.

“However, like all new tools, if used improperly, AI can result in unintended consequences,” Rep. Connolly said. “For example, automated systems can inadvertently perpetuate societal biases such as faulty facial recognition technology or opaque sentencing algorithms used by our criminal justice system. AI can also threaten jobs, proliferate misinformation, and raise privacy concerns.”

“That is why I applaud the Biden administration for proactively taking significant steps to ensure transparency in the government’s use of AI,” he said, citing the administration’s release last year of a Blueprint for an AI Bill of Rights.

“Everyone can agree the government has a colossal responsibility of developing the necessary guardrails to curb the risks of this incredible technology,” he said. “And these guardrails must strike the right balance among a host of competing and fundamental values: equity, efficiency, ethics, and the rule of law—just to name a few. But rules are only as successful as the oversight efforts that enforce them.”

“This committee must hold Federal agencies accountable to ensure they are making appropriate choices about whether and when AI is right for their missions,” Rep. Connolly said.

John Curran is MeriTalk's Managing Editor covering the intersection of government and technology.