A top official with the Department of Homeland Security (DHS) said Thursday that generative AI (GenAI) tools used within the Federal government should be risk-evaluated, but that there would be limitations to a program similar to the Federal Risk and Authorization Management Program (FedRAMP) for GenAI.
FedRAMP is a program administered by the General Services Administration (GSA) that provides a standardized, government-wide approach to security assessment, authorization, and continuous monitoring for cloud products and services used by Federal government agencies.
“There’s a desire to have almost like a FedRAMP-like process for the AI specific risks of generative AI, and I think the OMB memo doesn’t envision that part,” Michael Boyce, director of DHS’s newly-established AI Corps, said during the July 11 ATO and Cloud Security Summit in Washington, D.C.
In March of this year, the Office of Management and Budget (OMB) released its finalized policy document for the use of AI within Federal agencies, delivering on a core component of the administration’s October 2023 AI executive order.
“I think the OMB memo envisions that it will be more pushed to the individual agencies for those AI-specific risks,” Boyce said. “And the reason is, is because we don’t have a centralized mechanism for managing all operational risks across the government.”
“We need to integrate the standard mechanisms of operational risk that we do for any of our programs in thinking about how we deliver these solutions, and not see this as just an IT problem or something that can be centralized with some model evaluation, and then we can move on,” Boyce added.
DHS has said that it aims to be a leader in the Federal government when it comes to AI. Earlier this year, the department announced a new initiative focused on hiring 50 AI experts.
Late last month, DHS announced it had made its first 10 hires under the initiative.
“Folks are extremely interested in coming into these Federal positions,” Boyce touted. “We’re over 10,000 applications, and we have 15 people on board.”
“We’re one of the first, if not the first, digital service teams to be focused actually on a technology as opposed to a program area. We also are focusing on some of those key skill sets here – AI/ML engineers, your data scientists, we’re hoping we can bring on some sort of AI security experts as well,” he said. “We actually just posted a new opening that really goes into detail of those exact roles. I made a point of not calling it IT specialist – it’s AI technology expert.”
“The idea here is we’ll bring on these diverse teams, and then it really will be a range of projects,” Boyce explained. “We’ve talked to over 100 people across the department. We’ve started to engage both at a governance level and on an implementation level with a number of agencies … We’re hoping to put out some more information about that soon.”