In the midst of developing an executive order on AI to “protect Americans’ rights and safety,” the Biden-Harris administration announced it has secured voluntary commitments from eight additional AI private sector heavyweights.

The new private sector pledges follow those of seven other big names announced earlier this summer, all promising to “help drive safe, secure, and transparent development of AI technology.”

Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability have been added to the list of companies that have agreed – on a voluntary basis – to commitments organized around three principles: ensuring products are safe before introducing them to the public, building systems that put security first, and earning the public’s trust.

These organizations join Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, which all agreed to the same eight pledges in July.

So far, 15 leading AI companies have committed to:

  • Internal and external security testing of their AI systems before their release to guard against some of the most “significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects”;
  • Sharing information across industry and with governments, civil society, and academia on managing AI risks;
  • Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights;
  • Facilitating third-party discovery and reporting of vulnerabilities in their AI systems;
  • Developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system;
  • Publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use;
  • Prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy; and
  • Developing and deploying advanced AI systems to help address society’s greatest challenges, “from cancer prevention to mitigating climate change.”

There is no timeline yet for President Biden’s AI executive order and bipartisan legislation, but the fact sheet published today notes that the Office of Management and Budget will soon release draft policy guidance for Federal agencies to “ensure the development, procurement, and use of AI systems is centered around safeguarding the American people’s rights and safety.”

Cate Burgan
Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.