Global tech trade association ITI has released a first-of-its-kind AI Accountability Framework that highlights practices companies are using to develop and deploy AI technology safely and securely.

The purpose of the framework, ITI said, is to demonstrate practices that are key to achieving the responsible deployment of AI systems while also informing ongoing AI policy development.

ITI’s AI Accountability Framework defines responsibilities across the entire AI ecosystem, outlining steps AI developers and deployers should take to address high-risk AI uses, including for frontier AI models.

It also introduces the concept of auditability, where an organization retains documentation of risk assessments, to increase transparency in AI systems.

“The technology industry appreciates the important role that consumer trust plays in advancing the adoption of AI and furthering innovation. ITI’s AI Accountability Framework serves to deepen that trust by detailing practices that developers, deployers and integrators are taking to increase AI safety and mitigate risk, and is a guide that policymakers can build on as they contemplate approaches to AI governance,” said ITI’s Vice President of Policy Courtney Lang.

The 11-page framework details seven “baseline” practices that AI developers and deployers should use, and in many cases already are using, today:

  • Early and continuous risk and impact assessments throughout the AI development lifecycle;
  • Testing frontier models to identify and address vulnerabilities prior to release;
  • Documenting and sharing information about the AI system with others in the AI value chain;
  • Undertaking explanation and disclosure practices so that end-users have a basic understanding of the AI system and know when they are interacting with an AI system;
  • Using secure, accurate, relevant, complete, and consistent training data;
  • Ensuring that AI systems are secure-by-design to protect end-users; and
  • Appointing AI risk officers and training employees and personnel who are interacting with or using AI systems.

Lawmakers in the House and Senate have introduced dozens of bills to place guardrails around the emerging technology but have yet to pass any concrete legislation – even as states move ahead with AI regulations of their own. In March, the White House’s Office of Management and Budget released its finalized policy for the use of AI within Federal agencies, keying on risk management, transparency, responsible innovation, workforce, and governance.

Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.