2023 was the year of artificial intelligence (AI), and MeriTalk can say without a shadow of a doubt that 2024 will be just as action-packed with Federal AI goodies.
With the emergence of ChatGPT, AI came to the forefront of the Federal government’s policy decisions. From the Biden-Harris administration’s long-awaited executive order (EO) on AI, to Capitol Hill’s race to regulate the emerging technology, here are some of the top 2023 Federal AI moments – in no particular order – that you should care about as we head into 2024.
NIST AI RMF
The year of AI started out strong as the Department of Commerce’s (DoC) National Institute of Standards and Technology (NIST) finally unveiled the first version of its AI Risk Management Framework (RMF) in January, after spending 18 months developing the policy.
“The framework is intended for voluntary use,” said NIST Director Laurie Locascio. “It provides a flexible but structured and measurable approach to understand and measure and manage AI risks.”
DoC Deputy Secretary Don Graves said that the AI RMF should help to accelerate AI innovation and growth while advancing – rather than restricting or damaging – civil rights, civil liberties, and equity.
NIST worked closely with the White House’s Office of Science and Technology Policy to create the 48-page document. The AI RMF is intended to be complementary to the White House’s Blueprint for an AI Bill of Rights – which was released at the end of 2022.
NIST released a playbook, roadmap, and a Trustworthy and Responsible AI Resource Center alongside the AI RMF to help organizations identify key activities for advancing the framework.
President Biden’s AI EO
In October, President Biden signed his long-awaited AI EO. The more-than-150-page document sent agencies immediately springing into action, with marching orders that will roll over well into 2024 and beyond.
The sweeping AI EO had eight main focus points: new standards for AI safety and security; protecting Americans’ privacy; supporting workers; ensuring responsible and effective government use of AI; promoting innovation and competition; standing up for consumers, patients, and students; advancing equity and civil rights; and promoting American leadership abroad.
Biden’s AI policy document prompted immediate action from agencies over the past two months, from the National Science Foundation establishing four new National AI Research Institutes to the Cybersecurity and Infrastructure Security Agency’s new Roadmap for AI.
Other agencies kicked off their implementation of the policy by appointing leads on AI work. For example, the Department of Energy launched a new Office of Critical and Emerging Technology and named Helena Fu as its director. Other agencies, like the General Services Administration and the Education Department, named their chief data officer and chief technology officer, respectively, to serve as chief AI officers.
OMB AI Policy
On Nov. 1, the Office of Management and Budget (OMB) followed through on its marching orders in the AI EO, releasing its draft policy on Advancing Governance, Innovation, and Risk Management for Agency Use of AI.
The guidance aims to establish AI governance structures in Federal agencies, advance responsible AI innovation, and manage risks from government uses of AI.
“By prioritizing safeguards for AI systems that pose risks to the rights and safety of the public – safeguards like AI impact assessments, real-world testing, independent evaluations, and public notification and consultation – the guidance would focus resources and attention on concrete harms, without imposing undue barriers to AI innovation,” OMB Director Shalanda Young wrote in the 26-page draft policy memo.
The public comment period on OMB’s draft policy for the use of AI in the Federal government closed on Dec. 5.
Inaugural Federal Report on AI in Education
For all of the teachers and academics out there, the Education Department’s first-ever Federal report on AI in education was monumental.
Ahead of the Biden administration’s AI EO, the Department of Education’s Office of Educational Technology released “Artificial Intelligence (AI) and the Future of Teaching and Learning: Insights and Recommendations” in May, summarizing the opportunities and risks of AI in teaching, learning, research, and assessment based on public input.
The 71-page report addresses the clear need for sharing knowledge, engaging educators and communities, and refining technology plans and policies for AI use in education.
It recognizes AI as a rapidly advancing set of technologies that can enable new forms of interaction between educators and students, help address variability in learning, tighten feedback loops, adapt to individual students, and support educators in their work.
However, it also outlines risks associated with AI – including algorithmic bias – and the importance of trust, safety, and appropriate guardrails to protect educators and students.
Race to AI Regs
This summer, Senate Majority Leader Chuck Schumer, D-N.Y., announced first-of-their-kind AI Insight Forums, which kicked off on Sept. 13 and concluded after nine sessions discussing the biggest issues in AI – from workforce to national security to privacy and everything in between.
Sen. Amy Klobuchar, D-Minn. – who has been active in all nine of the closed-door Senate AI meetings – said the different AI bills that have been introduced over the past year will be “gathered together in a package” and “the plan would be to work on them early in [2024].”
Sen. Klobuchar has been a leader in AI legislation, with a focus on generative AI and its effects on the upcoming 2024 election.
Her Protect Elections from Deceptive AI Act would ban the use of AI to generate deceptive content to influence Federal elections. The bill aims to identify and ban “deep fakes” depicting Federal candidates in political ads. Sen. Klobuchar said the bill would work hand in hand with a watermarking tool that can label whether images have been generated with AI.
Happy Holidays from MeriTalk! We wish you a cyber-safe and secure holiday season and can’t wait to see what 2024 brings.