Sen. Ed Markey, D-Mass. – along with Reps. Ted Lieu, D-Calif., Don Beyer, D-Va., and Ken Buck, R-Colo. – introduced new legislation on April 26 that would keep humans in the loop in the U.S. nuclear command and control process to prevent artificial intelligence (AI) technologies from having a role in making nuclear launch decisions.
The new bill, titled the “Block Nuclear Launch by Autonomous Artificial Intelligence Act of 2023,” aims to ensure that no Federal funds can be used for any launch of any nuclear weapon by an automated system without “meaningful human control.”
“As we live in an increasingly digital age, we need to ensure that humans hold the power alone to command, control, and launch nuclear weapons – not robots,” Sen. Markey said in a press release. “That is why I am proud to introduce the Block Nuclear Launch by Autonomous Artificial Intelligence Act. We need to keep humans in the loop on making life or death decisions to use deadly force, especially for our most dangerous weapons.”
The Department of Defense’s (DoD) 2022 Nuclear Posture Review states that the Pentagon’s current policy is to “maintain a human ‘in the loop’ for all actions critical to informing and executing decisions by the president to initiate and terminate nuclear weapon employment.”
The bipartisan and bicameral legislation aims to codify DoD’s existing policy.
The bill has gained support in the Senate, with cosponsors including Sens. Bernie Sanders, I-Vt., Elizabeth Warren, D-Mass., and Jeff Merkley, D-Ore.
“While U.S. military use of AI can be appropriate for enhancing national security purposes, use of AI for deploying nuclear weapons without a human chain of command and control is reckless, dangerous, and should be prohibited,” said Rep. Buck. “I am proud to co-sponsor this legislation to ensure that human beings, not machines, have the final say over the most critical and sensitive military decisions.”
With the rapid development of AI technology, Sen. Mark Warner, D-Va., is also looking to put safeguards in place. Sen. Warner, the top Democrat on the Senate Intelligence Committee, sent a series of letters to the CEOs of top AI companies on April 26, asking them to prioritize security, combat bias, and roll out AI responsibly.
Sen. Warner asked the CEOs a number of questions aimed at ensuring that they are taking appropriate measures to address AI security risks.
“With the increasing use of AI across large swaths of our economy, and the possibility for large language models to be steadily integrated into a range of existing systems, from healthcare to finance sectors, I see an urgent need to underscore the importance of putting security at the forefront of your work,” he wrote. “Beyond industry commitments, however, it is also clear that some level of regulation is necessary in this field.”