The National Institute of Standards and Technology (NIST) is on track to release the first version of its Artificial Intelligence (AI) Risk Management Framework by January 2023, NIST’s IT Laboratory Chief of Staff told lawmakers on Thursday.

The NIST AI Risk Management Framework is intended for voluntary use and aims to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. The agency has been developing the framework since July 2021 and has solicited feedback through workshops and public comment.

“The technology and standards landscape for AI will continue to evolve,” Elham Tabassi, NIST’s IT Laboratory Chief of Staff, said during a House Research and Technology subcommittee hearing. “Therefore, NIST intends for the framework and related guidance to be updated over time to reflect new knowledge, awareness, and practices.”

In addition, NIST will build on the framework to produce further guidance, standards, measures, and tools to help agencies and industry evaluate and measure AI trustworthiness in specific use cases.

However, Tabassi said “a lot more work is still required to get there,” which is why NIST has put out a call for contributions. The agency released a second draft of the framework for written comments, due by Sept. 29, and will hold a workshop on Oct. 18-19 to gather further feedback.

The goal of that participation is to gather diverse input on the operating standards AI technologies must meet in order to be considered ethical and fair, Tabassi said.

Tabassi also noted that NIST’s pending AI standards will make several recommendations on actions needed to ensure the U.S. remains a leader in technical and scientific standards development, “partially to build foundations for international standards.”

“The [framework] is trying to provide a shared lexicon, [an] interoperable way to address all of these questions, but also provide a measurable process, metrics, and methodology to measure them and manage these risks,” she added.

She noted that, just as the Biden administration has sought to foster public-private and international partnerships for emerging technologies, NIST is also looking to craft its standards with international cooperation.

“We have significantly expanded research into harmful AI bias, and we are engaging with Federal agencies, international ones, and the U.S.-EU Trade and Technology Council on the matter,” Tabassi said.
