As part of its proposed set of U.S. AI regulatory principles, the White House Office of Science and Technology Policy (OSTP) urged Federal regulators to “limit regulatory overreach” of the technology and its applications by the private sector.

OSTP said the 10 principles – released today in a draft memo to the heads of executive agencies – should be used to “govern the development and use of artificial intelligence (AI) technologies in the private sector.” The guidance, which comes as a follow-up to President Trump’s Feb. 11, 2019, executive order on AI, was developed as part of the White House’s American AI Initiative.

“Building upon this Administration’s record of leadership in artificial intelligence, the U.S. AI regulatory principles set the Nation on a path of continued AI innovation and discovery,” said Michael Kratsios, U.S. chief technology officer. “By reducing regulatory uncertainty for America’s innovators, increasing public input on regulatory decisions, and promoting trustworthy AI development, the principles offer the American approach to address the challenging technical and ethical issues that arise with AI technologies.”

OSTP based the principles on the White House’s three primary goals for AI:

  • “Limit Regulatory Overreach: Regulators must conduct risk assessment and cost-benefit analyses prior to any regulatory action on AI, with a focus on establishing flexible frameworks rather than one-size-fits-all regulation.
  • Ensure Public Engagement: Regulators must base technical and policy decisions on scientific evidence and feedback from the American public, industry leaders, the academic community, non-profits, and civil society.
  • Promote Trustworthy AI: In deciding regulatory action related to AI, regulators must consider fairness, non-discrimination, openness, transparency, safety, and security.”

The principles, which also align with larger themes laid out in last year’s executive order, are:

  • Public Trust in AI – OSTP acknowledged the risks AI poses to privacy, individual rights, autonomy, and civil liberties. The memo notes that AI’s “continued adoption and acceptance will depend significantly on public trust and validation,” saying that the government’s approach to AI must “promote reliable, robust, and trustworthy AI applications.”
  • Public Participation – “Public participation, especially in those instances where AI uses information about individuals, will improve agency accountability and regulatory outcomes, as well as increase public trust and confidence,” OSTP says.
  • Scientific Integrity and Information Quality – The memo explained, “agencies should hold information, whether produced by the government or acquired by the government from third parties, that is likely to have a clear and substantial influence on important public policy or private sector decisions (including those made by consumers) to a high standard of quality, transparency, and compliance.”
  • Risk Assessment and Management – “It is not necessary to mitigate every foreseeable risk,” the memo argues. “Instead, a risk-based approach should be used to determine which risks are acceptable and which risks present the possibility of unacceptable harm, or harm that has expected costs greater than expected benefits.”
  • Benefits and Costs – OSTP said that “agencies should, when consistent with law, carefully consider the full societal costs, benefits, and distributional effects before considering regulations related to the development and deployment of AI applications.”
  • Flexibility – “Rigid, design-based regulations that attempt to prescribe the technical specifications of AI applications will in most cases be impractical and ineffective, given the anticipated pace with which AI will evolve and the resulting need for agencies to react to new information and evidence,” the memo explains.
  • Fairness and Non-Discrimination – Addressing another potential concern regarding AI, OSTP said that agencies must consider, “in a transparent manner,” the impact AI may have on discrimination. The memo noted that “in some instances, [AI may introduce] real-world bias that produces discriminatory outcomes or decisions that undermine public trust and confidence in AI.”
  • Disclosure and Transparency – “In addition to improving the rulemaking process, transparency and disclosure can increase public trust and confidence in AI applications,” the memo says. “At times, such disclosures may include identifying when AI is in use, for instance, if appropriate for addressing questions about how the application impacts human end users.”
  • Safety and Security – “Agencies should promote the development of AI systems that are safe, secure, and operate as intended, and encourage the consideration of safety and security issues throughout the AI design, development, deployment, and operation process,” OSTP explains.
  • Interagency Coordination – OSTP said that “agencies should coordinate with each other to share experiences and to ensure consistency and predictability of AI-related policies that advance American innovation and growth in AI, while appropriately protecting privacy, civil liberties, and American values and allowing for sector- and application-specific approaches when appropriate.”

The draft memo will be open for public comment before it is finalized and delivered to Federal agency leaders. OSTP noted that when agencies propose regulations for AI in the private sector, they will have to show the White House how the proposed regulations align with OSTP’s principles.

Kate Polit
Kate Polit is MeriTalk's Assistant Copy & Production Editor covering the intersection of government and technology.