Senior technology officials at Red Hat said today that Federal government organizations face highly consequential decisions over the next year about how they deploy artificial intelligence (AI) technologies, and emphasized that much of the innovation in AI is taking place in open source software, the area in which the company specializes.

Speaking at the Red Hat Government Symposium in Washington, D.C., produced by MeriTalk, Chris Smith, VP of government sales at Red Hat, said the rapid pace of generative AI adoption globally is raising the stakes of the AI paths that government organizations choose to take.

“The decisions you make as government leaders over the next six to 12 months will likely be some of the most critical technology choices in our country’s history,” Smith said.

“Why is that? It’s because of the accelerated pace of generative AI adoption,” he said, citing research that shows faster adoption rates of GenAI compared to early-stage adoption rates of personal computers and internet services.

“Generative AI is spreading exponentially and faster than anybody could have predicted,” he said. “Just think about it – just two years after the public release of ChatGPT, 20 percent of Americans aged 18 to 64 reported using generative AI, with 28 percent using it at work already.”

“To put that in perspective, it took three years for PCs [personal computers] to get [to] 20 percent adoption rate and another 10 years to hit 40 percent,” he said. “AI is here and it’s rapidly impacting.”

He said the major factors that government tech leaders should weigh in AI deployments include: how to handle lifecycle challenges associated with AI models, such as updating and retraining models in a production environment; how to tackle interoperability challenges between different AI systems and platforms; technical barriers to integrating across edge devices and systems; and emerging trends in cloud security and cyber resiliency.
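One of those lifecycle questions, deciding when a model already running in production needs to be retrained, can be illustrated with a short sketch. The Python example below is a hypothetical illustration only; the threshold, synthetic data, and function names are assumptions for the sake of the example, not part of any Red Hat offering.

```python
# Minimal sketch of one lifecycle concern noted above: deciding when a
# production model needs retraining. All names and thresholds here are
# hypothetical illustrations.
from dataclasses import dataclass

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score


@dataclass
class ModelRecord:
    """Tracks the deployed model and the accuracy it had at deployment time."""
    model: LogisticRegression
    baseline_accuracy: float


def train(X, y) -> ModelRecord:
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return ModelRecord(model, accuracy_score(y, model.predict(X)))


def needs_retraining(record: ModelRecord, X_recent, y_recent, tolerance=0.05) -> bool:
    """Flag the model for retraining if accuracy on recent production data
    has dropped more than `tolerance` below its deployment baseline."""
    current = accuracy_score(y_recent, record.model.predict(X_recent))
    return (record.baseline_accuracy - current) > tolerance


if __name__ == "__main__":
    # Synthetic stand-ins for historical training data and newer production data.
    X_train, y_train = make_classification(n_samples=500, random_state=0)
    X_recent, y_recent = make_classification(n_samples=200, random_state=7)

    record = train(X_train, y_train)
    if needs_retraining(record, X_recent, y_recent):
        record = train(X_recent, y_recent)  # retrain on fresher data
    print(f"baseline accuracy: {record.baseline_accuracy:.2f}")
```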

During a keynote address at today’s symposium, Steven Huels, VP of AI engineering at Red Hat, reminded attendees that the company has been helping government with AI tech for nearly a decade, but said that “with the moment generative AI is having … we’re seeing explosive growth in adoption across the board.”

“When you consider the decisions you’re making around your AI platform, it used to be these things were experimental,” he said. “They would sit in a corner of your organization with a specialized set of analysts that would play with that system, but they weren’t really considered mission critical.”

Going forward, however, Huels emphasized that “the consideration that you put into choosing your core AI platform needs to be taken with the same level of care that you did for your core platforms for applications, databases, and security in the past.”

“These are going to be investments that need to stand the test of time,” he said. “They’re going to have to be life-cycled, they’re going to have to be maintained, updated, and advance with technology over time. So when you look at this, don’t go into it lightly.”

“There’s a lot of experimentation happening in the generative AI world, there’s a new startup every other week, there’s a new model every other week. You’re going to have to make a lot of risky, speculative bets when it comes to AI, but you don’t want your core platform to be one of those bets,” he said. “You want your platform to enable you to make those bets without compromising your long-term success.”

As for Red Hat’s ability to help government agencies with AI adoption, he reminded the audience that in many cases, “we’ve been working in your data center for years.”

“Our core DNA is in helping automate data center operations,” he said. “We’ve taken that same general approach forward to how we help you deploy and automate AI. At the end of the day, the thing that’s ultimately going to determine your success when it comes to AI is your ability to operationalize AI alongside the rest of your applications.”

“If it’s always a snowflake that sits out there on the side, it’s never going to be treated as mainstream, it’s never going to get the same amount of attention, and ultimately, those things are the things that end up being ignored and fall by the wayside and fail,” he said.
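In practice, operationalizing AI “alongside the rest of your applications” often means exposing a model behind the same kind of service interface, health checks, and deployment pipeline as any other workload. The sketch below is a hypothetical illustration using Flask and joblib, not a Red Hat or OpenShift API; the file name and endpoints are assumptions.

```python
# Hypothetical sketch: serving a model the same way any other application
# service is served, so it can share the same deployment, monitoring,
# and lifecycle tooling. Flask and joblib are assumed choices, not a
# prescribed stack.
from flask import Flask, jsonify, request
import joblib

app = Flask(__name__)
model = joblib.load("model.joblib")  # artifact produced by the training pipeline


@app.route("/healthz")
def healthz():
    # Same liveness convention the platform's other services use.
    return jsonify(status="ok")


@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]
    prediction = model.predict([features]).tolist()
    return jsonify(prediction=prediction)


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```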

Huels walked through a list of challenges to AI model development – including concerns about data, model selection, training, and life-cycling – and said that “the good news is that … all of this new innovation for AI is really happening in open source.”
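Two of those challenges, model selection and training, can be sketched with common open source tooling. The example below uses scikit-learn on synthetic data purely as an illustration; the candidate models and scoring choice are assumptions, not a recommendation from the symposium.

```python
# Illustrative only: comparing candidate open source models with
# cross-validation before committing one to a longer lifecycle.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic classification data as a stand-in for an agency dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=42),
}

# Pick the candidate with the best mean cross-validated accuracy.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best_name = max(scores, key=scores.get)
print(f"selected model: {best_name} (accuracy {scores[best_name]:.3f})")
```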

“This has been a huge shift,” he said. “When I started doing this 20-some-odd years ago … it was just two or three, maybe four companies that were doing things around predictive analytics, and they were all proprietary data or proprietary software vendors, so you got no ability to influence or engage in the actual software development life cycle.”

“That has all changed,” he said. “The bulk of the innovation for AI is now happening in open source, which means, from Red Hat’s perspective, everything you know and love about what we do for security and trust and life cycling for open source, for all of your core mission critical platforms today, we’re doing all of that for AI as well, and this is what has informed our overall portfolio strategy.”
