Critical Questions to Ask When Considering Explainable AI (XAI) for Your Federal Agency

By now, Federal IT decision-makers are very familiar with machine learning (ML) and artificial intelligence (AI).

They know that – especially when augmented by artificial intelligence for IT operations (AIOps) to automate IT functions – ML and AI can expand key capabilities across virtually every mission area.

Examples of these capabilities include helping agencies better design weapon systems; anticipating mission-hindering weather conditions; responding to disasters; predicting equipment maintenance needs; managing supply chains and inventory; distributing vaccines; and much more. Seeing the vast potential for improved efficiency and effectiveness, 91 percent of agencies are either piloting or adopting AI in some form, compared with just 73 percent of organizations globally.

And while the government continues to explore what ML/AI/AIOps can do, it is missing an essential piece of knowledge: how and why these intelligent machines do what they do.

This leads to trust gaps that hinder progress. It is often difficult even for IT practitioners to understand an ML/AI/AIOps module’s decision-making and resulting actions, let alone for business and operations-side users. The inner workings of these systems are so complex that they are commonly called “black boxes” in tech circles. Given their impact on agencies’ daily tasks and long-term strategies, IT teams must drive toward a greater awareness of how these innovations work.

This is where explainable AI (XAI) enters the equation, as a means of enabling humans to better comprehend these technologies. It provides the so-far elusive “how” and “why” answers in a way that users both understand and, more importantly, trust. As a result, XAI is poised for major global market growth, increasing from $3.55 billion two years ago to nearly $21.8 billion by 2030.

Within the government, however, less than one-fifth of agency executives say they are preparing their workforces for explainable AI systems.

To illustrate XAI’s value, let’s say a Navy officer on a ship is at her workstation, and her computer screen indicates that an AI system has come up with a fix for a critical application that is not performing as needed. Without XAI, the officer struggles to figure out how the machine arrived at its root-cause and remediation recommendation. She sorts through an onslaught of data and, after a half hour, concludes (albeit with a hint of doubt) that the machine’s read on the situation is likely correct, and she puts its remediation steps in play.

But that half hour represents lost time. If the application is down while supporting key mission functions such as intelligence, surveillance, and reconnaissance (ISR), the outcome could prove crippling.

With XAI, however, the officer gets an immediate sense of how the machine “thinks” and why it is recommending a specific remediation. She can see all of the relevant data upfront – presented in an end-to-end, fault-free structure that allows her to review and replay intermediate results – gain the required trust in the machine, and authorize remediation without losing any time. A simple sketch of what such an explanation might look like follows below.
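To make this concrete, here is a minimal sketch of how such a “why” might be surfaced. Everything in it is hypothetical – the telemetry features, the data, and the model are invented for illustration, not drawn from any Navy system or vendor product – and the attribution technique shown (a linear model’s coefficient times each feature’s deviation from a healthy baseline) is just one simple way to rank the factors behind a recommendation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["cpu_pct", "error_rate", "queue_depth", "gc_pause_ms"]

# Synthetic telemetry: 500 healthy samples and 500 degraded samples.
healthy = rng.normal([40, 0.5, 10, 50], [10, 0.2, 5, 15], (500, 4))
degraded = rng.normal([70, 4.0, 60, 200], [10, 1.0, 15, 40], (500, 4))
X = np.vstack([healthy, degraded])
y = np.array([0] * 500 + [1] * 500)

# Standardize so coefficients are comparable across features.
mu, sd = X.mean(axis=0), X.std(axis=0)
model = LogisticRegression(max_iter=1000).fit((X - mu) / sd, y)

# A new reading that the model flags as degraded.
reading = np.array([72.0, 5.1, 55.0, 210.0])
p = model.predict_proba(((reading - mu) / sd).reshape(1, -1))[0, 1]
print(f"P(degraded) = {p:.3f}")

# The "why": each feature's contribution to the log-odds, i.e. its
# coefficient times its standardized deviation from the healthy mean.
baseline = (healthy.mean(axis=0) - mu) / sd
contrib = model.coef_[0] * ((reading - mu) / sd - baseline)
for name, c in sorted(zip(features, contrib), key=lambda t: -abs(t[1])):
    print(f"{name:>12}: {c:+6.2f}")
```

An XAI-enabled console would render a ranked list like this next to the recommendation, so the officer can see at a glance which signals drove the call and judge whether they square with what she knows about the system.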

These and other scenarios make a compelling case for XAI adoption. In pursuing this course and seeking buy-in, agency IT leaders need to carefully consider the following critical questions and responses:

What exactly can AI do for us?

AI is a central driver of digital transformation, empowering government teams with commercial-grade efficiency, reliability, productivity, security, and speed. It applies predictive analytics to mission-related tasks and objectives, so users can make informed, data-backed decisions about future initiatives. Automation plays a lead role here, delivering these benefits while reducing errors.

The federal government is committed to this transformation: 82 percent of agency officials believe their organization needs to become more technologically advanced, and three in five say that COVID-19 – which ushered in the work-from-home (WFH) era and created demand for new technology arrangements and collaboration and communications tools – has accelerated their digital transformation.

How do I illustrate XAI’s value to gain budget support for it?

It begins with trust. In healthcare, for example, doctors at the Department of Veterans Affairs (VA) are using XAI to make better decisions about treatment. Standard AI may tell them that a patient is at high risk and recommend a treatment plan; XAI will also tell them why the patient is at high risk and why it is recommending that plan (a simple sketch of one such “why” follows below). Establishing this trust leads to additional buy-in for future XAI initiatives.
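As an illustration of what that “why” can look like, here is a minimal sketch built on an inherently interpretable model. The patient features, cohort, and thresholds are entirely synthetic – this is not VA data or a clinical model – but it shows how a decision path turns a risk flag into rules a clinician can actually read:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
features = ["age", "a1c", "systolic_bp", "bmi"]

# Synthetic cohort: risk rises with age, A1C, and blood pressure.
X = np.column_stack([
    rng.integers(30, 90, 1000),    # age
    rng.normal(6.0, 1.2, 1000),    # a1c
    rng.normal(125, 15, 1000),     # systolic_bp
    rng.normal(27, 4, 1000),       # bmi
])
risk_score = 0.03 * X[:, 0] + 0.6 * X[:, 1] + 0.02 * X[:, 2]
y = (risk_score > np.percentile(risk_score, 80)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# One patient the model flags as high risk.
patient = np.array([[78, 8.9, 150, 31]])
print("high risk?", bool(tree.predict(patient)[0]))

# Walk the decision path and print each rule the patient satisfied:
# the human-readable "why" behind the prediction.
node_ids = tree.decision_path(patient).indices
feat, thresh = tree.tree_.feature, tree.tree_.threshold
for node in node_ids:
    if feat[node] >= 0:  # skip leaf nodes
        name, t = features[feat[node]], thresh[node]
        op = "<=" if patient[0, feat[node]] <= t else ">"
        print(f"  {name} = {patient[0, feat[node]]:.1f} {op} {t:.1f}")
```

Instead of an opaque score, the clinician sees the handful of thresholds the patient tripped and can judge whether that reasoning matches clinical experience.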

There are also compliance considerations. Regulatory policies often mandate capturing “explanations” for auditing purposes – a requirement that XAI directly satisfies. A sketch of what such an audit record might contain follows below.
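What must be captured will vary by regulation, but as a purely hypothetical illustration, an audit record might pair each prediction with its explanation and provenance. Every field name and value below is invented for the sketch:

```python
import json
from datetime import datetime, timezone

# Hypothetical audit record pairing a prediction with its explanation.
# Field names, the model identifier, and values are illustrative only.
audit_record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "model_version": "risk-model-1.4.2",  # hypothetical identifier
    "input_id": "patient-00042",          # reference, not raw PII
    "prediction": "high_risk",
    "probability": 0.91,
    "explanation": {                      # per-feature attributions
        "a1c": 2.1,
        "age": 1.4,
        "systolic_bp": 0.8,
        "bmi": 0.1,
    },
    "human_review": None,                 # filled in when audited
}
print(json.dumps(audit_record, indent=2))
```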

Do I need to deploy XAI for all ML/AI/AIOps functions, or should I proceed more selectively?

XAI adds value to all AI/ML/AIOps functions. As noted, it directly addresses trust. It also enhances the predictive power of AI: if you’re working strictly with a black box, you won’t understand its answers, and you’ll be at a loss as to how to act on them in ways that could improve future outcomes. Nor can you improve an existing AI program’s algorithms if you can’t comprehend the program. XAI fills in these clarity gaps – the sketch below shows one way explanations expose a model flaw worth fixing.
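As a minimal, hypothetical sketch of that improvement loop, consider permutation importance – one common explanation technique. In the invented data below, a “host_id” field accidentally encodes the label (a pipeline flaw), and the explanation exposes the model’s reliance on it, telling the team exactly what to fix before retraining:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
features = ["latency_ms", "error_rate", "host_id", "noise"]

# Synthetic data: only latency and error rate genuinely matter, but
# "host_id" accidentally encodes the label (a data-pipeline flaw).
n = 2000
latency = rng.normal(100, 30, n)
errors = rng.normal(1.0, 0.5, n)
y = ((latency > 120) | (errors > 1.5)).astype(int)
host_id = y * 10 + rng.integers(0, 10, n)  # leaks the label
noise = rng.normal(size=n)
X = np.column_stack([latency, errors, host_id, noise])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much accuracy drops when each feature
# is shuffled. A large drop for "host_id" exposes the hidden flaw.
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
for name, imp in sorted(zip(features, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:>10}: {imp:.3f}")
```

Armed with that insight, the team can repair the leaking field and retrain – the kind of targeted improvement a black box never invites.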

It’s perfectly reasonable to feel skeptical about yet another “new” technology. But XAI is not an entirely new concept; it is a logical extension of existing AI. After all, everything else in business and life – R&D, sales and marketing initiatives, entertainment platforms, automobiles, appliances – perpetually drives toward the next improvement.

AI is no exception. It has proven its value in providing the “what” in the answers we seek. XAI supplies the essential “how” and “why” that let us fully leverage the power of this innovation. As a result, agencies achieve mission goals more readily and effectively, because XAI gives them confidence in their decisions.

About Willie Hicks
Willie Hicks is the public sector CTO for Dynatrace.