Rep. Ro Khanna, D-Calif., surfaced a range of fresh opinions on the future of AI policy from economic and academic experts this week as he works on possible legislation that would provide tax credits to companies developing AI tech as an inducement to give their employees a greater stake in the success of those firms.

The congressman, whose district includes portions of the Silicon Valley area, on Feb. 15 hosted the House’s first “Congressional AI in Society and for Democracy Roundtable” with nearly two dozen of the world’s top AI scholars, labor leaders, and experts to explore how AI will impact the workforce and education – and to consider ways for Congress to take action.

Brewing Legislation

The conversation on the Hill Thursday largely focused on the need to shift incentives to ensure workers benefit equitably from AI developments – a legislative idea the representative currently has in the works.

Rep. Khanna’s spokesperson said he’s drafting legislation that would alter the tax code so that companies are incentivized to provide more of a stake in their success to employees. Offering tax credits could encourage companies that adopt AI to give their workers more of the profit that stems from it.

The spokesperson also confirmed that the congressman is looking at ways to get workers more involved in how companies implement AI, including creating fresh membership requirements for corporate boards.

Rep. Khanna mostly sat quietly and listened during the roundtable, letting the 23 AI experts question one another and speak amongst themselves. However, in his opening statement, Rep. Khanna highlighted that in creating guardrails and legislation for AI, Congress has focused primarily on leading industry experts, and emphasized that “we need to lift up academic voices.”

The academics led the roundtable, dividing their time equally among three topics: worker equity in the age of AI; understanding deception and manipulation by AI; and investing in education, mental health, and digital citizenship in the age of AI.

Rep. Khanna said the goal of the roundtable was to spark more interest in advancing AI legislation in the House, whose top leaders have not focused on the issue as much as their Senate counterparts.

Senate Majority Leader Chuck Schumer, D-N.Y., is leading his own AI working group, which has hosted insight forums with top tech CEOs and labor and civil rights leaders to discuss possible regulations for the rapidly evolving technology.

The House session – which was jokingly considered “bipartisan,” because Rep. Austin Scott, R-Ga., sat in the audience – featured a slew of academic heavyweights, including Stanford University’s Fei-Fei Li, Duke University’s Nita Farahany, and Harvard Law School’s Larry Lessig.

Scholars offered concrete recommendations to Congress on how to best move forward with regulating AI.

AI Incentives Necessary to Empower Workers

When it comes to worker equity in the age of AI, many of the experts at the roundtable agreed that shifting AI incentives is necessary to empower workers.

“For about 40 years the government has been putting its thumb on the scale towards favoring capital over labor and replacement or substitution over augmentation,” said economics expert Robert Hockett, who is a professor at Cornell Law School.

“If we fix the incentives and we fix the metrics, I think that decision makers all over the country are likely to make choices that are more likely to empower workers and less likely to disempower them with AI,” Hockett added.

Erik Brynjolfsson, a professor at the Stanford Institute for Human-Centered AI, echoed Hockett, saying that Congress must create “an alternative” that encourages “entrepreneurs to invest in humans and create new kinds of value.”

“The problem is that many of our key decision makers have skewed incentives that focus too much on substituting for humans and not enough on augmenting their capabilities,” Brynjolfsson said. “America’s tax code taxes capital at much lower rates than labor. This means business owners are steered towards replacing humans with machines wherever they can.”

Harvard’s Lessig emphasized why it is important to focus on incentives – and why a tax strategy is so critical.

“If we don’t change the incentive, or engagement, then efforts to get rid of deception and manipulation are just not going to address the core problem – which is the facility to bring about misunderstanding of the world as a byproduct of the engagement,” Lessig said.

Education ‘Most Effective and Important’ Tool for AI

The second half of the roundtable focused on how Congress can better understand and regulate deception and manipulation by AI, which many of the experts said is directly tied to educating society about the emerging technology.

Stanford’s Li, who serves as co-director of the university’s Institute for Human-Centered AI, said societal-level education about AI is “fundamentally a must have.”

“This is a new technology, and it will be a double-edged sword,” Li said. “Just like all technology … we cannot shut it down because of the harm, just like we couldn’t shut down electricity. We can regulate it by some standards, but we still have to do profound public sector education.”

“That is one of the most effective and important immunizations to the negative impact of disinformation and misinformation,” she said.

Other experts gave Congress concrete recommendations for regulating AI-driven manipulation, such as implementing watermarking standards to cut back on deepfakes – an idea that already exists in Senate legislation.

Glenn Cohen, a law professor at Harvard, suggested that all AI should be as transparent as possible, carrying “nutrition labels” that disclose what data was used to train the tool.

Many also emphasized the importance of educating Americans about the technology at the K-12 level.

“I’ve been pointing to the Finland model of education in K-12 for trying to train against misinformation and using technology from preschool onwards to try and teach those skills,” Duke’s Farahany said. The European country is “a very helpful model that already exists,” she said.

University of California, Los Angeles law professor Andrew Selbst highlighted that, in certain domains, AI guardrails will fall under existing regulation.

“I think the right way to think about risk is the particular context in which it’s operating,” Selbst said. “The AI that operates in education, that’s a different risk profile than one that operates in employment or in health. And we have existing regulation that we can rely on once we’re focused on individual domains.”

“So, focusing on the contextual applications, I think, is really important,” he said.

Rep. Khanna ended the roundtable by saying that he hopes this is “just the beginning” of the House’s conversations on this topic and urged his colleagues to amplify academia’s voices as Congress “develops these ideas” over the next few months.

“The purpose of this was not to have some definitive outcome in this forum, but to highlight folks’ voices,” Rep. Khanna said.

Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.