The Department of Commerce’s National Telecommunications and Information Administration (NTIA) is calling for independent audits of high-risk AI systems as part of a series of eight recommendations in its new AI Accountability Policy Report released Wednesday. 

“Responsible AI innovation will bring enormous benefits, but we need accountability to unleash the full potential of AI,” NTIA Administrator Alan Davidson said on March 27. “NTIA’s AI Accountability Policy recommendations will empower businesses, regulators, and the public to hold AI developers and deployers accountable for AI risks, while allowing society to harness the benefits that AI tools offer.” 

The report calls for improved transparency into AI systems, independent evaluations to verify the claims made about these systems, and consequences for imposing unacceptable risks or making unfounded claims. 

The AI Accountability Policy Report makes eight sets of policy recommendations to accomplish these goals, organized into three categories: guidance, support, and regulations.

In the guidance category, NTIA is calling on the Federal government to work with stakeholders to create guidelines for AI audits and auditors. This includes refining guidance on the design of audits, designing evaluation standards for audits, and creating auditor certifications, the agency said.  

NTIA also said the Federal government should work with stakeholders to improve standard information disclosures for AI, like nutrition labels. “Greater transparency is needed on AI system models, architecture, training data, input and output data, performance, limitations, appropriate use, and testing,” the report says. 

The final recommendation to the Federal government in the guidance section is to work with stakeholders to make recommendations about applying existing liability rules and standards to AI systems – including who is held accountable for AI system harms. 

In the support category, NTIA recommends the Federal government invest in people and tools. Specifically, it calls for investing in “resources necessary to meet the national need for independent evaluations of AI systems,” including by supporting the National Institute of Standards and Technology’s AI Safety Institute and the National Science Foundation’s National AI Research Resource. 

The report specifically says that the government should focus on resources like datasets, cloud infrastructure, red-teaming, and workforce development.  

NTIA’s report also says the government should support research. “Federal government agencies should foster the creation of reliable and widely applicable tools to assess when AI systems are being used, on what materials they were trained, and what capabilities and limitations they have.” 

In the final set of policy recommendations – regulations – NTIA suggests that Federal agencies require independent audits and regulatory inspections of high-risk AI model classes and systems, such as those that present a high risk of harming rights or safety, both before release or deployment and on an ongoing basis.  

The report also recommends that the Federal government strengthen its capacity to address risks and practices related to AI across sectors of the economy. This could include maintaining registries of high-risk AI deployments, AI adverse incidents, and AI system audits. 

Finally, the report calls on the Federal government to require government suppliers, contractors, and grantees to adopt sound AI governance and assurance practices. 

NTIA’s AI Accountability Policy Report follows more than 1,400 comments from stakeholders and interested members of the public, who offered suggestions last year on how to foster earned trust in AI systems – comments that ultimately helped to inform the final product, NTIA said.

The agency’s work on the AI Accountability Policy Report began months ahead of President Biden’s AI executive order, which tasked agencies with adopting a myriad of AI policy requirements. For example, the White House’s Office of Management and Budget is currently working to create agency guidance for the procurement and use of AI under Federal contracts.

Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.