The years-long debate over whether AI governance should be centralized within one Federal entity stepped back into the limelight Wednesday as lawmakers scramble to craft a bipartisan, comprehensive legislative package on the emerging technology and push it over the finish line.

At a Senate Homeland Security and Governmental Affairs Committee hearing on Nov. 8, a Massachusetts Institute of Technology (MIT) professor argued that Congress should prioritize funding a new Federal agency that focuses on AI research and development.

“The government may need to invest in a new Federal agency, which is tasked with doing the same things that the U.S. government used to do – for example, with [the Defense Advanced Research Projects Agency], with other agencies – of playing a leadership [role] for new technologies, and in this instance that would be [a] more pro-worker, pro-citizen agenda,” MIT’s Daron Acemoglu said during the committee hearing, titled “The Philosophy of AI: Learning from History, Shaping our Future.”

“Something along the lines of, for example, the National Institutes of Health, which has both expertise and funding for new research, could be very necessary for the field of AI with an explicit aim of investing in the things that are falling by the wayside,” the MIT economics professor argued.

Acemoglu said that a Federal AI agency would help shift innovation incentives in the AI realm in a pro-human, pro-worker direction.

“All leading computer scientists and AI scientists and leading universities are funded and get generous support from AI companies and the leading digital platforms. So, it really creates an ecosystem, in academia as well as in the industry, where incentives are very much aligned towards pushing more and more for bigger and bigger models,” Acemoglu said.

Shannon Vallor, a professor in the ethics of data and AI at the University of Edinburgh, echoed Acemoglu’s sentiment, warning senators that the innovation incentives in today’s AI ecosystem are poorly aligned with the public interest.

“I would emphasize examining the misaligned incentives that we’ve permitted in the AI ecosystem, particularly with the largest and most powerful players, and learn the lessons from the past where we have had success realigning the incentives of innovation with the public interest,” Vallor said.

“We can create clear and compelling penalties for companies who innovate irresponsibly, for companies that get it wrong because they haven’t put in the work to get it right, while at the same time, perhaps capping the liabilities or reducing the risk for innovators who do invest in innovating safely and responsibly and then want to find new ways of using those tools to benefit humans,” Vallor said.

“Because we often see some of the good actors are hearing about the risks of AI systems, the ways that they might fabricate falsehoods or the way that they may amplify bias, and that can actually reduce innovation and narrow it to only those powerful actors who can afford to get it wrong,” she said.

Vallor concluded, “If we adjust those incentives so that the best and most innovative actors in the ecosystem are rewarded for innovating responsibly, and the most powerful ones have to be held liable for producing harms at scale, then I think we can see a way forward that looks much more positive for AI.”

Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.