
Artificial intelligence (AI) is moving quickly from experimentation to production across the federal government, and that shift is bringing new security pressures into already complex environments. As agencies connect generative AI tools and AI-enabled services to sensitive data and mission workflows, they need protection that matches the speed, scale, and ambiguity of AI-driven threats. MeriTalk recently sat down with Anish Patel, director of federal sales at Cloudflare, to discuss why shadow AI is often a workforce signal rather than rebellion; what an AI-ready security posture looks like across human, prompt, and agentic layers; and more.
MeriTalk: As you work with federal agencies, how have you seen AI changing the attack surface? What new risks are you seeing as agencies move from AI pilots into production systems that touch mission and citizen services?
Patel: AI isn’t necessarily changing the attack surface as much as it’s making that surface more accessible to attack. The vulnerabilities and exposure points have always been there, but what used to take an attacker many coordinated steps can now be accelerated. That speed matters because it frees bad actors to focus on more creative and harder-to-detect tactics.
In the pilot phase, agencies often treat AI like a sandbox – people are testing and experimenting, and critical data may not always be in play. But as agencies move to production, that sandbox gets connected to critical systems, and the threat perimeter becomes more contextual. The analogy I use is the difference between protecting a child at home versus protecting a child in an airport. At home, the environment is more controlled. In an airport, there’s constant movement and unfamiliar actors. In the AI world, the challenges become model poisoning and prompt injection. The challenge is less about new categories of attacks and more about how much harder reality is to decipher.
MeriTalk: When you talk with federal security and AI teams, what blind spots around shadow AI and GenAI use worry you the most?
Patel: Shadow AI is getting a lot of attention, but it’s not fundamentally different from shadow IT. And I think it’s important to reframe what it usually means: People operating outside provisioned processes typically aren’t doing it out of rebellion. Shadow IT is often a cry for help. People are trying to meet mission deadlines, but they don’t have the right tools or they don’t have access to them quickly enough, so they look elsewhere.
The difference with shadow AI is that a small mistake can be amplified much faster. Decades ago, an error might have stayed local and faded quickly. Today, mistakes can spread broadly and persist. An innocuous copy-and-paste moment can become a major problem for an organization.
The challenge is less about whether shadow use will happen, because it will – it’s more about whether agencies have visibility and control. In the federal government, there’s already a strong focus on security controls, so agencies can redirect users toward authorized tools and make sanctioned paths easier.
MeriTalk: What does a robust AI security posture look like in a federal environment? How should agencies think about securing employee interactions with GenAI tools, as well as AI agents accessing application programming interfaces and data stores, in the context of current guidance from the federal government?
Patel: The Office of Management and Budget guidance around accelerating federal use of AI through innovation, governance, and public trust leans toward enabling innovation, not just regulating it. The real challenge is balancing both: maintaining strong security controls while ensuring innovation is not slowed down. Historically, security tools improved monitoring and control, but manual reviews for compliance, security, and policy took months. In the AI innovation space, those reviews have to speed up drastically.
A robust posture follows an OODA loop approach: observe, orient, decide, and act. I think about managing AI risk in layers. First is the human layer: ensuring only authorized staff can reach sanctioned AI applications inside the boundary.
Second is the prompt layer: defining inline guardrails so users don’t go out of bounds as they interact with models. I compare those guardrails to a digital TSA agent: People may not enjoy the friction, but there’s real value in making sure everyone understands what’s not allowed.
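The prompt-layer guardrails described above can be pictured as an inline check that runs before a prompt ever reaches a model. The sketch below is a hypothetical illustration, not Cloudflare's product API; the policy names and blocked patterns are invented examples of the kinds of rules an agency might configure.

```python
import re

# Hypothetical prompt-layer guardrail: an inline check run before a
# prompt is forwarded to a model. Pattern names and regexes are
# illustrative stand-ins, not a real agency ruleset.
BLOCKED_PATTERNS = {
    "credentials": re.compile(r"(?i)\b(password|api[_-]?key|secret)\s*[:=]"),
    "classified_marking": re.compile(r"(?i)\btop secret\b"),
}

def check_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, violated_policies) for an outbound prompt."""
    violations = [name for name, pattern in BLOCKED_PATTERNS.items()
                  if pattern.search(prompt)]
    return (not violations, violations)

allowed, violations = check_prompt("my api_key = abc123, summarize this log")
print(allowed, violations)  # False ['credentials']
```

Like the TSA analogy, the point is that the check is visible and predictable: a blocked prompt comes back with the specific policy it violated, so users learn where the boundary is.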
Third is the agentic layer, where systems talk to other systems on a person’s behalf. The risk is exponentially greater because there’s no human in the loop, which makes it harder to catch a bad action before it goes too far. That’s why agencies need both readiness for this posture and improved processes that don’t drag legacy baggage forward. Technology is rarely the hardest part; integration and process change are. The goal is confident risk reduction that enables agencies to simplify processes that weren’t designed for the AI era.
MeriTalk: Cloudflare’s platform protects both workforce use of AI tools and AI-enabled applications. Can you walk us through the core capabilities of Cloudflare’s AI Security Suite and how those capabilities address the needs of federal agencies?
Patel: I’ll start with a use case. We worked with a highly regulated entity that was concerned about data seeping into public large language models. If private data makes its way into a public model, someone querying for related information could receive that private data. Cloudflare can turn on visibility tools, and because we sit in front of so much internet-bound traffic, agencies can discover employees using AI tools. In this case, users were pushing data to more than 150 different public AI services that were unsanctioned.
Blocking everything doesn’t work because people will find another way to get what they need to serve the mission. Instead of a whack-a-mole strategy, we focus on visibility and a confidence scorecard approach. Agencies can see the tools that employees are accessing, define which AI tools are safe and allowed, and set zero trust policies to permit those tools. Then, agencies can apply data loss prevention policies to automatically redact sensitive project names or personally identifiable information that might get pasted into a prompt. The work continues, but the risk of accidental exposure is reduced and quantified.
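The redaction step Patel describes can be sketched as a simple rule-driven filter applied to prompt text before it leaves the boundary. This is a minimal illustration only; the patterns (a Social Security number format and a project codename) are invented examples of DLP rules, not Cloudflare's actual implementation.

```python
import re

# Hypothetical data-loss-prevention redaction pass for outbound prompts.
# Each rule pairs a pattern with a placeholder; rules here are invented
# examples of what an agency might configure.
DLP_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\bPROJECT[ -][A-Z]+\b"), "[REDACTED-PROJECT]"),
]

def redact(prompt: str) -> str:
    """Replace sensitive spans with placeholders before the prompt is sent."""
    for pattern, placeholder in DLP_RULES:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Status of PROJECT FALCON for employee 123-45-6789"))
# Status of [REDACTED-PROJECT] for employee [REDACTED-SSN]
```

Because the redaction is automatic and inline, an accidental copy-and-paste is neutralized without blocking the user's workflow, which is the "reduce and quantify risk" posture rather than the whack-a-mole one.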
At a higher level, we address four major AI security themes. First is protecting employee use of GenAI tools. Second is securing interactions between AI agents and corporate resources in a machine-to-machine world. Third is protecting AI-powered apps, like agency chatbots and AI-enabled services, from data loss and attacks. And fourth is helping developers use AI to build faster without compromising on security. The goal is to harmonize security policies across public access, employee access, and developer needs, whether traffic is flowing inside-out or outside-in.
MeriTalk: Without naming names, can you share an example that illustrates how a public sector or highly regulated customer used Cloudflare and what kind of visibility and risk reduction they gained?
Patel: One of the government’s challenges is that some of its data needs to be public. AI can help analyze that information if people ask the right questions, but you still need a filter between users and applications and data.
Cloudflare is that filter in some agencies. It enables accurate logging of every application being accessed, visibility into which services are heavily utilized, and clear identification of accesses that shouldn’t occur – like internal systems being hit from the outside. That visibility creates real security value. With Cloudflare, agencies set a standard for access, and if it’s met, the user passes through without friction. That filtering helps agencies innovate without compromising security.
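The "set a standard for access" pattern can be sketched as a policy check where every decision is logged, allowed or not. The fields checked below (MFA, managed device) and the application name are hypothetical examples of zero trust policy attributes, not a real agency configuration.

```python
from dataclasses import dataclass

# Hypothetical zero trust access filter: requests are evaluated against a
# per-application standard, and every decision is recorded for visibility.
@dataclass
class Request:
    user: str
    app: str
    mfa: bool
    managed_device: bool

# Invented example policy: the standard a request must meet for each app.
POLICY = {"internal-dashboard": {"mfa": True, "managed_device": True}}
access_log: list[tuple[str, str, str]] = []

def evaluate(req: Request) -> bool:
    rules = POLICY.get(req.app, {})
    allowed = all(getattr(req, field) == required
                  for field, required in rules.items())
    access_log.append((req.user, req.app, "allow" if allowed else "deny"))
    return allowed

evaluate(Request("alice", "internal-dashboard", mfa=True, managed_device=True))
evaluate(Request("bob", "internal-dashboard", mfa=False, managed_device=True))
print(access_log)  # both the allow and the deny are logged
```

The frictionless pass-through Patel describes corresponds to the "allow" path: a user who meets the standard never notices the check, while the log still captures every access, including the ones that shouldn't occur.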
MeriTalk: How do you expect AI security challenges to evolve for federal agencies over the next two to three years, and how will industry respond?
Patel: Attacks that might have taken years to develop and propagate can now be accelerated, and as more people get savvy, the volume and speed of attacks will increase dramatically. The question becomes: How do you filter through the noise when the level of attack activity gets so high? As AI accelerates more complex attacks, agencies will need systems that can handle scale and complexity without drowning teams in noise.
At the same time, AI applications are changing. Developers are moving toward applications that are extensible and customizable not just by developers, but by end users. That means the security problem is no longer one application accessed by millions of people – it could become hundreds, thousands, or even millions of iterations of an application, all accessed at massive scale. Traditional review cycles won’t work; humans won’t be able to keep up. We will need AI to protect against AI. Modern approaches that enable just-in-time authorization and auditing – and maintain the boundary between what’s private and what’s public without compromising user experience – will become essential.
And user expectations will change quickly. Right now, people tolerate AI interactions that take a minute. Over the next year or two, people will begin to expect instant responses. That makes programmability, real-time guardrails, and policy enforcement even more critical, because speed can’t come at the expense of trust.