On October 30, 2023, President Joe Biden stood in the East Room of the White House and unveiled an executive order (EO) on artificial intelligence (AI) that was breathtaking in scope. “AI is all around us,” the president said as he signed the 111-page document.

The EO directs more than 50 Federal entities to take more than 100 specific actions to implement guidance the White House laid out across eight overarching policy areas, ranging from promoting AI safety and security to encouraging Federal use of the technology. The government, it said, will work to “ensure that safe and rights-respecting AI is adopted, deployed, and used.”

Federal agencies have since increasingly adhered to the EO’s can-do spirit, accelerating AI use and employee training and appointing chief AI officers to oversee deployment. The order has served as “a tremendous accelerator to making sure that every single agency across the Federal government is taking AI seriously and investing in a workforce that is ready to tackle these issues,” said Lauren Boas Hayes, senior advisor for technology and innovation at the Cybersecurity and Infrastructure Security Agency.

In all, agencies recently reported about 1,200 current and planned AI use cases to the U.S. Government Accountability Office (GAO), in areas that included analyzing data to identify border activity and targeting scientific specimens for planetary rovers.

Agencies Turn to Implementing the EO Long Term

Yet GAO also noted that most AI use cases remain in the planning phases and that the government lacks broad guidance on how agencies should use and acquire AI. The EO calls for such guidance, and the Office of Management and Budget published a draft in November 2023.

Meanwhile, rapid developments surrounding AI have raised a series of questions that government and industry experts are exploring as attention turns to implementing the EO over the long term: What policies and strategies should be undertaken to realize the EO’s goals, what are the key challenges, and how can they be overcome?

“The EO is certainly ambitious,” said Bill Wright, global head of government affairs for Elastic, an enterprise security, observability, and search solutions provider. “Overall, agencies have made considerable progress toward the objectives of the EO, and they have done it in spite of the tight deadlines.”

Key moves have included the Department of Commerce’s development of a comprehensive plan for instituting AI technical standards, and the General Services Administration’s creation of an AI Center of Excellence, Wright noted, adding, “There is still a lot of work to do. As with any new technology, unforeseen policy issues will need to be addressed along the way.”

An analysis of the EO from Stanford University’s Institute for Human-Centered Artificial Intelligence echoed his sentiments. “There is much to admire in this EO, but also much to do,” the institute wrote. Realizing the document’s vast ambitions, the analysis said, requires “empowered senior officials, staff with AI expertise, incentives to prioritize AI adoption, mechanisms to support and track implementation, specific guidance, and White House-level leadership.”

The EO Is an Ambitious Document for an AI Future

The AI EO focuses on seizing the promise of AI while managing any potential risks. Its broad goals extend across eight policy areas:

  • Safety and Security – Mitigating risks related to AI adoption, especially in the areas of biosecurity, cybersecurity, national security, and critical infrastructure. Requirements include developing best practices for testing and deploying trustworthy AI and establishing an AI Safety and Security Board
  • Innovation and Competition – Attracting AI talent to the United States and promoting AI innovation. Requirements include streamlining the visa process for foreigners seeking to come to the United States for AI education or work and establishing a pilot program to enhance existing AI training programs for scientists
  • Worker Support – Preventing AI adoption from disrupting the workforce. Requirements include reporting on how AI affects the labor market and publishing best practices that help employers minimize harm to employees
  • Consideration of AI Bias and Civil Rights – Taking action to prevent bias and civil rights violations from AI adoption. Requirements include convening agencies and regulators to determine how to stop potential algorithmic and AI-related discrimination and publishing guidance for Federal contractors to prevent bias in AI systems used in hiring
  • Consumer Protection – Minimizing harm to consumers from AI usage. Requirements include establishing an AI safety program to monitor and improve AI deployment in health care and developing policies and guidance for AI use in education
  • Privacy – Evaluating and mitigating privacy risks associated with the collection and use of Americans’ data – risks that could be exacerbated by AI. Requirements include identifying commercially available information procured by agencies and creating guidance for agencies when they evaluate the use of privacy-enhancing technologies
  • Federal Use of AI – Coordinating AI use by Federal agencies and enhancing hiring and training to increase the Federal AI workforce. Requirements include convening an interagency council on Federal use of AI and pursuing “high impact” AI use cases
  • International Leadership – Establishing the United States as a global leader in AI development and adoption. Requirements include establishing a global engagement plan to promote and develop AI standards and publishing an AI in Global Development Playbook

Actions in 2024 Will Demonstrate AI Momentum

In the near term, there is plenty that Federal agencies can do to work toward achieving the goals of the EO, experts said.

“In essence, 2024 will focus on enabling the foundations, ecosystems, and initial coordinating work to set up the longer-term horizon objectives of the order, while demonstrating momentum and continued investment in the administration’s AI priorities,” said Tony Holmes, practice lead for solutions architects public sector at Pluralsight, a technology workforce skills provider.

Holmes noted several short-term actions agencies are likely to take this year, including:

  • Expanding research and pilot funding opportunities from science agencies such as the National Science Foundation around priorities including privacy-preserving techniques and sector-specific AI research
  • Updating procurement protocols and human resources policies to enable faster hiring and acquisition of AI tools and talent across agencies, and ramping up skills-building programs for the Federal workforce
  • Maturing data collection and benchmarking to track metrics on usage, effectiveness, and adoption of AI systems in Federal settings
  • Conducting robust outreach and dialogue with external stakeholders such as academia, civil rights groups, industry, and the international community to help shape EO implementation

The National Institute of Standards and Technology (NIST), for example, recently issued a public request for information to help carry out its responsibilities under the EO, including the establishment of a plan for global engagement on promoting and developing AI standards.

“NIST will work with private and public stakeholders to carry out its responsibilities under the executive order,” NIST Director Laurie Locascio said. “We are committed to developing meaningful evaluation guidelines, testing environments, and information resources to help organizations develop, deploy, and use AI technologies that are safe and secure, and that enhance AI trustworthiness.”

Longer-Term Challenges Await the AI EO

Longer term, said Wright of Elastic, agencies should focus on overcoming a series of challenges that could slow the EO’s implementation. Among them:

  • The EO calls on Congress to approve legislation on data privacy and other AI-related topics. As agencies await AI legislation around ethical usage, Wright said, they can prepare by “strategically organizing their data and reducing data silos in their organizations. As we go deeper into AI use cases, it will be essential to maintain full visibility into all your data, of all types.”
  • While “generative AI unlocks countless possibilities for refining citizen services and boosting employee productivity” as its use increases, Wright said, “agencies are rightfully cautious about using their sensitive data to train public large language models.” He recommended retrieval augmented generation (RAG), a technique for enhancing the reliability of generative AI models. “Using a RAG approach, you can integrate internal data and context into generative AI applications without exposing that data to the public domain,” Wright said.
  • With many government leaders concerned about inaccurate generative AI outputs, Wright said, many agencies are also turning to RAG “to layer in context to generative AI outputs. RAG grounds large language models with an agency’s private data, thereby strengthening the quality and accuracy of the outputs.”
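The RAG approach Wright describes can be illustrated with a minimal sketch: relevant internal documents are retrieved at inference time and injected into the prompt as context, so the data grounds the model's output without ever being used to train a public model. Everything below is illustrative — the document snippets, the toy word-overlap scoring, and the prompt template are assumptions for this sketch; real deployments use embedding-based vector retrieval and pass the assembled prompt to an actual large language model, a call omitted here.

```python
# Minimal RAG sketch: retrieve internal context, then ground the prompt.
# Word-overlap scoring stands in for embedding similarity (illustrative only).

def score(query: str, doc: str) -> int:
    """Toy relevance score: count of query words appearing in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a grounded prompt. Internal data is supplied only at
    inference time; it is never exposed for training a public model."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

# Hypothetical internal agency policy snippets (never leave the agency;
# only the assembled prompt is sent to the model).
internal_docs = [
    "Form 1099 requests must be processed within 10 business days.",
    "Records older than 30 years are transferred to the archives.",
    "Visitor badges expire at the end of each calendar day.",
]
prompt = build_prompt("How long to process a Form 1099 request?", internal_docs)
```

In this sketch, only the two most relevant snippets reach the prompt, which is one way agencies limit how much sensitive material is exposed per query.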

Holmes, of Pluralsight, also sees a series of challenges, such as budgetary constraints as AI ambitions grow, as well as data and infrastructure needs because many agencies “lack modern data management, computing power, and tools required to support advanced AI models.” Industry can provide technology resources, platform access, and guidance on building scalable data pipelines, he noted.

Perhaps the central challenge, Holmes said, is AI knowledge gaps, compounded by difficulties in talent recruitment and retention. “The AI market is highly competitive, and agencies struggle with hiring and keeping qualified personnel,” he said.

The AI knowledge gap is also top of mind for agency leaders including Sheena Burrell, CIO for information services at the National Archives and Records Administration.

“What we’re trying to do right now is upskill some of our employees that may have had some experience in this realm,” she said.

Holmes recommended partnering with industry on talent exchange programs and making a wholesale investment in training Federal workers in AI skills. He said the effort should prioritize reskilling current employees over hiring new talent.

“Adaptable training programs allow you to tailor the curriculum and leverage existing knowledge,” Holmes said. He suggested that agencies focus training on both AI foundations and practical applications – and make training widely available.

In measuring the EO’s ultimate success, he concluded, “widespread internal AI expertise will be critical to driving accountability and responsibility in deployment. Training benchmarking and outcomes should be part of a broader set of success metrics tracked over time. Without an AI-literate workforce, policies on paper may have little practical impact.”
