Lawmakers and industry experts expressed concern this week that the Federal government will fail to adopt artificial intelligence (AI) technologies in a timely manner.

Those doubts were voiced during a House Oversight Subcommittee on Cybersecurity, IT, and Government Innovation hearing on Dec. 6 about the government's ability to implement, in the time allotted, the nearly 150 requirements associated with the White House's AI Executive Order (EO) and corresponding draft guidance from the Office of Management and Budget (OMB).

“No one can yet judge the impact of the EO or the guidance. For the most part, they’re just kickstarting a process,” subcommittee Chairwoman Nancy Mace, R-S.C., said. “The EO tasks Federal agencies with a massive laundry list of roughly 150 action items to take over the next year and beyond. Dozens of regulations and guidance documents will be issued; every major agency and many minor ones are enlisted in the effort.”

“I’m a little skeptical Federal agencies will keep to the timetable of action laid out in the documents because their track record is pretty useless,” she said. “After all, the draft OMB guidance on government use of AI we’re discussing today was due by law from this administration more than two years ago.”

Daniel Ho, a professor of law at Stanford University, said the Federal workforce will be the key to helping agencies meet the timetables set out for the EO's roughly 150 AI tasks.

“The talent pipeline … is going to be absolutely critical for ensuring that the right folks are in place to be able to implement these requirements faithfully and in an informed way by the technology,” Ho said.

Samuel Hammond, senior economist at the Foundation for American Innovation, argued that the Federal government must move quickly on both regulation and integration of AI to keep pace with innovation, warning of significant risk if the government fails to adopt the technology in a timely manner.

“The question is whether governments will keep up and adapt or be stuck riding horses while society whizzes by in a race car. The risks from adopting AI in government must therefore be balanced against the greater risks associated with not adopting AI proactively enough,” Hammond said.

Rep. Mace questioned whether the “slow and reluctant government adoption of AI” could jeopardize the cybersecurity of Federal systems, and Hammond said he believes AI adoption is both a “national security issue and a good government issue.”

“We lived through the pandemic and when you saw those lineups around the block to claim unemployment insurance, a big part of that was because state unemployment insurance systems are built on mainframe computing technology from 50-60 years ago,” he said.

Lawmakers and expert witnesses alike agreed that if the U.S. does not lead on AI today, it will open the door for hostile foreign nations to dictate the values driving the technology.

“There are three possible futures of AI. One is a future of AI abuse unchecked by government regulation,” Ho said. “Another is where the government harms citizens because of improper vetting of AI.”

“But a third future is one where the government protects Americans from bad actors and leverages AI to make lives better — like the VA’s [Department of Veterans Affairs] use of AI to enable physicians to spend more time caring for veteran patients, and less time taking notes. To get there, we must make the right decisions today.”

Cate Burgan
Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.