Implementing AI Assurance Safeguards Before OMB’s December Deadline
By Gaurav (GP) Pal, stackArmor Founder and CEO
In March 2024, in accordance with President Biden's Executive Order on AI, OMB released groundbreaking guidance on the government's safe use of artificial intelligence: the first government-wide policy of its kind on AI.
Under this new policy, government agencies must implement mandatory AI safeguards that provide for greater reliability, transparency, and testing of AI systems. Agencies must have these safeguards in place by December 1, 2024.
The new mandates are designed to drive a thoughtful, considered approach to implementing AI assurance safeguards and to focus agencies on the steps needed for long-lasting AI safety and development in their operations.
To meet this deadline and create long-lasting change, agencies should leverage and augment existing practices, such as the Authority To Operate (ATO) process, by adding AI assurance guardrails that check for safety, bias, and explainability in addition to confidentiality, integrity, and availability. Drawing on new and emerging AI risk management guidance from NIST, ATOs with AI risk management overlays can be applied to IT systems that use AI, so agencies can continue implementing safe solutions by assessing and managing risk.
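As a minimal sketch of what such an overlay might look like in practice, the hypothetical Python below extends a traditional control checklist (confidentiality, integrity, availability) with AI assurance controls (safety, bias, explainability). The control identifiers and pass criteria are illustrative assumptions, not items from an official NIST overlay or OMB catalog.

```python
# Illustrative sketch only: control names and criteria are hypothetical,
# not drawn from an official NIST overlay or OMB control catalog.
from dataclasses import dataclass

@dataclass
class Control:
    control_id: str
    objective: str
    satisfied: bool

# Traditional ATO baseline controls (confidentiality, integrity, availability).
baseline = [
    Control("SC-1", "Confidentiality: data encrypted at rest and in transit", True),
    Control("SI-1", "Integrity: inputs validated and outputs logged", True),
    Control("CP-1", "Availability: failover and recovery plan tested", True),
]

# Hypothetical AI assurance overlay layered on top of the baseline.
ai_overlay = [
    Control("AI-SAF-1", "Safety: harmful-output red-teaming completed", True),
    Control("AI-BIA-1", "Bias: disparity metrics within documented tolerance", False),
    Control("AI-EXP-1", "Explainability: model decisions traceable to inputs", True),
]

def assess(controls):
    """Return the controls that would block an authorization decision."""
    return [c for c in controls if not c.satisfied]

for finding in assess(baseline + ai_overlay):
    print(f"Open finding: {finding.control_id} - {finding.objective}")
```

In this framing, an open AI-specific finding blocks authorization the same way a traditional security finding would.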
New Guidance Will Lead to Safe AI Development
Over the last two years, generative AI has driven a rapid evolution in technology, making it imperative that the public sector catch up to this advancement to ensure its successful and safe use.
The Biden administration and federal agencies have been making a significant effort to get ahead of advancing innovation by focusing on AI safety, development, and research. We have seen this through NIST's AI Safety Institute Consortium (AISIC), announced in February, which brings together over 200 stakeholders to help prepare the U.S. for AI implementation by developing responsible standards and safety evaluations.
NIST recently released guidance designed to help manage the risks of generative AI, serving as a companion resource to NIST's AI Risk Management Framework (AI RMF) and Secure Software Development Framework (SSDF).
What Agencies Need to Do Ahead of the Deadline
Agencies should use documents like the NIST AI RMF to develop a risk classification methodology and establish a risk baseline for conducting AI risk assessments ahead of OMB's December 2024 deadline.
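As a rough illustration of what such a classification methodology could look like, the sketch below tiers inventoried use cases by their context of use. The tier names and scoring rules are assumptions for illustration, echoing OMB's distinction between rights-impacting and safety-impacting AI rather than reproducing any official scheme.

```python
# Hypothetical risk-tiering sketch; tier names and rules are illustrative,
# not an official OMB or NIST classification scheme.

def classify_ai_use_case(rights_impacting: bool,
                         safety_impacting: bool,
                         public_facing: bool) -> str:
    """Assign a risk tier based on the context in which the AI system is used."""
    if rights_impacting or safety_impacting:
        return "HIGH"      # would trigger the full set of assurance safeguards
    if public_facing:
        return "MODERATE"  # would require transparency and monitoring controls
    return "LOW"           # internal, non-consequential use

# Example baseline: tier each inventoried use case before assessment.
inventory = {
    "benefits-eligibility screener": classify_ai_use_case(True, False, True),
    "internal document summarizer":  classify_ai_use_case(False, False, False),
}
print(inventory)
```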
To meet the ambitious deadline set forth in the new OMB guidance, agencies must take advantage of the methodologies and frameworks already in place, including NIST's RMF and SSDF, and look to implement robust test and evaluation techniques on training data and models. Both frameworks are a good starting point for agencies seeking a high-level roadmap for AI security management.
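For instance, a basic test-and-evaluation check might compare outcome rates across groups in a model's predictions. The sketch below uses plain Python with made-up data; the four-fifths threshold is a common rule of thumb borrowed from employment-selection analysis, used here only as an example tolerance, not an OMB requirement.

```python
# Illustrative bias check on model outputs; the data and the 0.8 threshold
# are hypothetical assumptions, not an OMB or NIST requirement.
from collections import defaultdict

def selection_rates(predictions):
    """Compute the positive-outcome rate per group from (group, outcome) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in predictions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

predictions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(predictions)
ratio = min(rates.values()) / max(rates.values())
print(f"Selection rates: {rates}, disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb, used here only as an example
    print("Flag for review: disparity exceeds example tolerance")
```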
By using the well-known RMF process to discover, classify, POAM (plan of action and milestones), and monitor risks, leaders can quickly leverage what is already available to them and drive long-lasting, sustainable change.
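To make that discover, classify, POAM, and monitor cycle concrete, the sketch below models POA&M entries for AI-specific findings and a simple monitoring pass. The field names, statuses, and dates are illustrative, not a mandated schema.

```python
# Minimal POA&M (plan of action and milestones) sketch for AI findings;
# fields, statuses, and dates are illustrative, not a mandated schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class PoamItem:
    finding: str
    risk_tier: str
    milestone: str
    due: date
    status: str = "OPEN"

    def close(self):
        """Mark the finding remediated after re-testing."""
        self.status = "CLOSED"

poam = [
    PoamItem("Bias disparity above tolerance in eligibility model",
             "HIGH", "Retrain with rebalanced data and re-test", date(2024, 11, 1)),
    PoamItem("No human-review fallback for public-facing chatbot",
             "MODERATE", "Stand up manual escalation path", date(2024, 10, 15)),
]

# Continuous monitoring: surface overdue, still-open items.
today = date(2024, 11, 15)
for item in poam:
    if item.status == "OPEN" and item.due < today:
        print(f"OVERDUE [{item.risk_tier}] {item.finding} -> {item.milestone}")
```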
However, current frameworks lack the specific guidance and actions agency leaders need to implement the safeguards under the OMB policy. Leaders, including Chief AI Officers and Chief Information Officers, must leverage additional tools, frameworks, and guidance to achieve these safeguards for the secure and responsible use of AI, adding to the complexities and challenges agencies already face.
Agencies should look to augment and leverage existing mechanisms to manage AI risk and enable mission success, allowing them to reap the benefits of generative AI and AI/ML technologies.
With OMB's new guidance and the subsequent deadline looming, agencies have a great opportunity to enable the mission while integrating a safe and rights-respecting approach into their day-to-day operations.
Over the past two years, many new frameworks have emerged that agencies can use; the challenge will be integrating different systems and frameworks to meet the demands of the OMB guidance by December.
The December 2024 deadline for implementing AI safeguards presents a significant challenge for government agencies. However, by leveraging existing frameworks such as NIST's RMF and SSDF and by extending the ATO process to cover AI, agencies can work toward meeting the requirements outlined by OMB. The focus on AI safety and development is crucial, and by taking proactive measures, agencies can ensure the responsible and secure use of AI systems in their operations.