With the unveiling of its artificial intelligence (AI) executive order (EO) today, a senior administration official said the Biden-Harris administration is working aggressively to execute the EO’s directives, with the goal of completing them on timelines ranging from 90 days to one year.

“The most aggressive timing is within 90 days for some of the safety and security actions,” the senior administration official said during a press call with reporters. “Some of the longer timings – for the larger reporting and grantmaking and the like – are 270 days to 365 days.”

The EO establishes new standards for AI across eight categories, including safety and security; privacy; equity and civil rights; support for consumers and workers; innovation and competition; American leadership; and government use of AI.

The sweeping order issued today also hands major policy marching orders to no fewer than seven Federal agencies for specific follow-up work and calls on Congress to approve legislation on data privacy and other AI-related topics.

“President Biden has issued a landmark executive order to ensure that America leads the way with responsible AI innovation,” the senior administration official said. “The President several months ago directed his team to pull every lever, and that’s what this order does – bringing the power of the Federal government to bear in a wide range of areas to manage AI’s risks and harness them.”

The “New Standards for AI Safety and Security” section of the EO lays out how the administration plans to protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. For example, the Department of Commerce has been tasked with developing guidance for content authentication and watermarking to clearly label AI-generated content.

“Watermarking here is essentially meant to solve those problems of inauthentic synthetic content” like deepfakes, the senior administration official said. “We do not have the executive authority to tell AI companies all across the United States, ‘You have to watermark your output.’”

“What we are doing is telling the Department of Commerce to help develop the technology and help develop standards … so it’s very straightforward for companies to do the watermarking,” the official said.
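
The detection side of such a scheme is, at bottom, a statistical test: if a generator is biased toward a keyed “green list” of words, a verifier holding the same key can check whether a passage contains an improbably high share of green words. The Python sketch below is purely illustrative – the scheme, the key, and the scoring threshold are hypothetical placeholders, not anything the EO or the forthcoming Commerce guidance specifies.

```python
import hashlib

# Illustrative sketch only: a simplified "green list" text watermark of the kind
# researchers have proposed for labeling AI-generated content. The EO does not
# prescribe any particular scheme, so the key, fraction, and threshold here are
# hypothetical placeholders.

SECRET_KEY = "demo-key"   # hypothetical key shared by generator and verifier
GREEN_FRACTION = 0.5      # share of word pairs treated as "green"

def is_green(previous_word: str, word: str) -> bool:
    """Deterministically classify a word as green based on its predecessor and the key."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{previous_word}|{word}".encode()).hexdigest()
    return int(digest, 16) % 1000 < GREEN_FRACTION * 1000

def green_score(text: str) -> float:
    """Fraction of words on the green list: near 0.5 for ordinary text, noticeably
    higher if a generator preferentially sampled green words."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(prev, cur) for prev, cur in zip(words, words[1:]))
    return hits / (len(words) - 1)

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog"
    print(f"green-list score: {green_score(sample):.2f}")  # well above 0.5 would suggest a watermark
```

Research proposals apply this idea at the token level inside a model’s sampling loop and pair it with provenance metadata for images and audio; the point here is only that detection reduces to a keyed statistical check, which is the kind of technique the Commerce standards work could make routine for companies.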

In the same section, the EO calls for developers of the most powerful AI systems to share their safety test results and other critical information with the U.S. government. Invoking the Defense Production Act, the EO will require companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety to notify the Federal government when training the model and to share the results of all red-team safety tests.

“It only applies above a threshold of capability,” the official said. “This is not going to catch AI systems trained by graduate students or even professors.”

“It’s really catching the most powerful systems in the world,” the official continued, adding, “My understanding is that it will not catch any systems currently on the market. This is primarily a forward-looking action for the next generation of models.”
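
The article does not say what that capability threshold is; in the EO itself the trigger is expressed in terms of training compute, with an interim figure of 10^26 operations that agencies can revise. Treat the number in the sketch below as an assumption drawn from the EO’s text rather than from this article – the snippet only shows how a compute-based reporting trigger reduces to a simple comparison.

```python
# Schematic sketch of a compute-based reporting trigger. The 1e26-operation
# figure is the EO's published interim threshold, not a number quoted in this
# article, and agencies are expected to refine it over time.

REPORTING_THRESHOLD_OPS = 1e26  # total training operations (assumed interim figure)

def must_report(training_operations: float) -> bool:
    """True if a foundation-model training run would fall under the EO's reporting requirement."""
    return training_operations >= REPORTING_THRESHOLD_OPS

print(must_report(5e23))  # False: far below the line, e.g. academic-scale training runs
print(must_report(2e26))  # True: the next-generation frontier runs the order targets
```

This matches the official’s framing: systems currently on the market sit below the line, so the requirement is primarily forward-looking.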

The Biden-Harris administration is also continuing to work with Congress to move forward AI legislation on Capitol Hill.

“AI policy is like running a decathlon, and there’s 10 different events here, and we don’t have the luxury of just picking – we’re just going to do safety, or we’re just going to do equity, or we’re just going to do privacy. We have to do all of these things,” the senior administration official said. “The fact pattern here is such that AI is already a very important technology accelerating very quickly. And Congress has a lot to do. We understand that, but we think that it is important, and it is likely that Congress will continue to focus on AI. Certainly, we’ve seen extraordinary levels of interest.”

The Office of Management and Budget (OMB) is also gearing up to release its AI guidance for Federal agencies. All of these initiatives – including those taken in the past – make for one sweeping effort to regulate AI in the United States, the official said.

“I would characterize this executive order as building upon previous executive orders and things like the Blueprint for the AI Bill of Rights,” the official said. “This is not erasing the whiteboard and starting over – this is continuing to build on things. I would say this action is sweeping and touches a wide range of areas, while still maintaining a fair amount of detail in the actions in those areas. So that’s certainly I think the objective here, but I would not characterize this as a wholesale restart but rather a continued expansion and operationalizing some of what’s in the Blueprint for the AI Bill of Rights.”

“The executive order directs the completion of the OMB M memo, as it’s called, on AI governance. I think we’ll see that soon. That’s probably one of the most immediate deliverables to follow the executive order,” the senior administration official said. “I think it’s fair to say the full picture of America’s approach to governing its use of AI can be seen in this EO and also in the M memo – those two documents together.”

Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.