The National Institute of Standards and Technology (NIST) is seeking feedback on its Artificial Intelligence Risk Management Framework (AI RMF), which aims to provide guidelines for AI developers to incorporate “trustworthiness” into their designs, according to a request for information (RFI) posted to the Federal Register.

NIST is looking for information that will help refine and inform the development of an AI RMF intended for voluntary use. NIST is creating the guidelines at the recommendation of the National Security Commission on Artificial Intelligence.

“Surges in AI capabilities have led to a wide range of innovations,” the RFI says. “These new AI-enabled systems are benefitting many parts of society and economy from commerce and healthcare to transportation and cybersecurity. At the same time, new AI-based technologies, products, and services bring technical and societal challenges and risks, including ensuring that AI comports with ethical values.”

Among the goals of the AI RMF is to provide common definitions and use language that is easily understandable by a broad audience. NIST also wants the AI RMF to be broadly applicable, risk-based, and outcome-focused, among other qualities.

“Defining trustworthiness in meaningful, actionable, and testable ways remains a work in progress. Inside and outside the United States there are diverse views about what that entails, including who is responsible for instilling trustworthiness during the stages of design, development, use, and evaluation,” the RFI says.

NIST is seeking responses to the RFI by August 19.

Lamar Johnson
Lamar Johnson is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.