As artificial intelligence (AI) adoption continues to grow across the Federal government, officials said on July 15 that it's important to share lessons learned between agencies, and spoke about the role of operational and organizational efficiencies in the AI adoption process.

At a virtual event organized by AI in Government, experts from the General Services Administration (GSA), National Science Foundation (NSF), and NASA talked about how their agencies got started with the AI journey.


Bryan Lane, Director of Data and AI at GSA, said the agency made sure to take an honest account of its own organizational and operational maturity, and along those lines said it’s also important to determine whether a strategy or roadmap and requirements to drive the AI journey are in place.

“Do you have the right development programs to train and educate people on artificial intelligence?” posited Lane. “As we move through the layers of operational maturity for AI, we get into things like DevSecOps, cloud and infrastructure, data operations, machine learning operations, and you can have different levels of maturity across those operational areas. You can have a very mature cloud DevSecOps environment, but you may have data scientists that are still operating on local machines and testing one-shot models.”

Lane added that GSA looked at things specifically related to capacity building, along with acquisition and product development capabilities.

Ed McLarney, Digital Transformation Lead for AI and ML at NASA, said the space agency reached out through its data governance board to ask for a greater diversity of skillsets. That led to the effort gaining a legal representative and somebody from NASA’s engineering and safety center to dig into how it did things scientifically and technically.

“We also had a few representatives from our research science, engineering, and just kind of business professional community, so overall it was a relatively small team – about 20 people doing the core research and ideating,” said McLarney.

“Also, we found while we were doing the work, we would get incoming rounds of new information or new policies, all the time,” added McLarney.

“One of my old bosses, when I was in the military, always talked about how ‘there’s a good idea cut-off date for any given something,’” he said. “So we had to choose: if we want to get something on the street, we’ve got to decide how much input we’re going to take, create our initial document, get that out there, let people begin discussing it and using it, and thinking about how they would adapt or adopt it for their own uses, and then … document as we go.”

A key part of the AI journey is data, and accounting for the data the agency will be using. Elanchezhain Sivagnanam, Chief Enterprise Architect at NSF, said that a mature deployment of AI requires establishing trustworthy training data and building an efficient feedback loop.

“I think the bottom-most layer – the foundational layer – would be the trustworthy training data,” said Sivagnanam. “How do we develop a trustworthy training data that the parent automation needs to publish good documentation … and needs to be automated and all that stuff? It needs to have good inventory definitions, lineage, and also associated legal policies, and that’s very important to how we use the data.”

Jordan Smith
Jordan Smith is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.