As the adoption of AI technology continues to increase within the Federal government, academic and industry experts warned lawmakers this week that more needs to be done to ensure procurement of this emerging technology is done ethically and responsibly.

During a Senate Homeland Security and Governmental Affairs Committee hearing on Sept. 14, AI experts reiterated to lawmakers that AI holds tremendous potential, but that it can also cause harm if it’s not designed or deployed “responsibly.”

“To successfully and effectively purchase and use AI tools, Federal agencies have to be prepared to address issues like privacy concerns about the use of Federal data to train commercial models and preventing bias in government decision-making,” Sen. Gary Peters, D-Mich., chairman of the committee, said during his opening statement.

Achieving those outcomes, witnesses said, begins with responsible AI procurement.

Rayid Ghani, a professor in the Machine Learning Department and the Heinz College of Information Systems and Public Policy at Carnegie Mellon University, explained that the Federal government has paid too little attention to the earlier phases of the AI lifecycle: scoping, procurement, design, testing, deployment, and use.

“Many of the AI systems being used in Federal, state, and local agencies are not built in-house but procured through vendors, consultants, and researchers. This makes getting the procurement phase correct critical – many costly problems and harm discovered downstream can be avoided by a more effective and robust procurement process,” Ghani said.

The Federal government needs to ensure that in procuring AI it “follows a responsible process, and in turn requires AI vendors to follow a responsible process in designing such systems,” he said. That process, Ghani added, will result in AI systems that promote “accountability and transparency and lead towards equitable outcomes for those impacted.”

Fei-Fei Li, the Sequoia Professor in the Computer Science Department at Stanford University and co-director of the Stanford Institute for Human-Centered AI, echoed Ghani's call for legislative action that encourages responsible procurement of AI.

According to Li, the AI Training Act, enacted by Congress in 2022, has pushed the Federal government in the right direction. The law aims to upskill procurement officials and equip them with a nuanced understanding of AI capabilities and limitations, she explained.

“Responsible Federal acquisitions and procurement have the true potential to set the norms for AI development and ultimately shape the field of responsible AI more immediately and directly than any future regulation that may or may not come from this Congress,” Li said.

“As the U.S. government’s spending on AI-related contracts has surged, it’s more crucial than ever to closely examine these vendors to ensure their goals align with those of the Federal government,” she said.

Other witnesses said that one solution or piece of legislation is not enough to solve the problem.

Devaki Raj, the former chief executive officer and co-founder of CrowdAI, explained that AI procurement needs to include ongoing AI model retraining and a retraining infrastructure.

Current procurement processes often treat AI as a one-off software purchase. However, given the nature of AI, it's critical to procure technologies that can continuously incorporate new data as sensors and missions change, as they invariably do, Raj explained.

“Should members of the committee take away anything from my testimony today, it is that AI must be thought of as a journey, not a destination,” said Raj, who urged the Federal government to implement ongoing training of these systems.
