As the Department of the Air Force deploys AI to maintain a competitive advantage and equip warfighters with the most effective tools to do their jobs, CIO Lauren Knausenberger said her agency is taking a “human-in-the-loop” approach to maintain strong ethical guidelines.

At the NVIDIA GTC Conference on March 22, Knausenberger explained that it is “critically important” for the entire Department of Defense to ensure AI is used ethically.

“We do have very strong ethical guidelines for AI in the department,” Knausenberger said. “First, we have to be using AI responsibly so humans remain responsible for the decisions, we can leverage the data from AI, we have to minimize unintended bias, make AI as equitable as we possibly can, we have to ensure that it is traceable, we have to ensure it is reliable, and also that it is governable. And none of these things are easy.”

To meet those ethical guidelines, Knausenberger said the Air Force intends to “always” keep a human in the loop, even if the AI might have a better understanding than the human.

“Our intent at this point is to ensure that the human is always in the loop and that the human can very clearly understand the level of confidence with a particular set of solutions or recommendations,” she said. “And that that human is the one that we’re holding accountable, that they understand how to use the tools in front of them and the recommendations.”
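As a rough illustration of the kind of workflow Knausenberger describes – a recommendation that carries an explicit confidence score and is always routed to an accountable human – the following Python sketch shows one possible pattern. The names (`Recommendation`, `present_to_operator`, `CONFIDENCE_FLOOR`) and the threshold value are illustrative assumptions, not drawn from any Air Force system.

```python
# A minimal human-in-the-loop sketch, assuming a model that returns a
# recommendation plus a confidence score. All names and values here are
# illustrative; they do not describe any actual Department of the Air Force tool.
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.80  # below this, the output is flagged prominently to the operator


@dataclass
class Recommendation:
    action: str
    confidence: float  # 0.0 to 1.0, surfaced directly to the operator


def present_to_operator(rec: Recommendation, operator_id: str) -> dict:
    """Route every recommendation to a named human who approves or rejects it;
    the machine never acts on its own output."""
    return {
        "recommendation": rec.action,
        "confidence": rec.confidence,
        "low_confidence_flag": rec.confidence < CONFIDENCE_FLOOR,
        "accountable_operator": operator_id,  # traceability: a person owns the call
        "approved": None,                     # set only by the human, never defaulted
    }


if __name__ == "__main__":
    rec = Recommendation(action="reroute maintenance crew to aircraft 42", confidence=0.72)
    record = present_to_operator(rec, operator_id="op-1138")
    print(record)  # confidence and flag are shown; the approval decision stays human
```

The point of the pattern is simply that the model's confidence is shown rather than hidden, and the approval field belongs to the operator, which is one way to keep the human both informed and accountable.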

Although it can sometimes be difficult for a human to gain a deep understanding of an AI system – especially because a machine learning (ML) algorithm effectively teaches itself – Knausenberger said the Air Force is working with the Massachusetts Institute of Technology (MIT) to “understand exactly what is within the black box” of AI.

Knausenberger explained the MIT partnership, saying the Air Force is “working through explainable machine learning, and that combines ML and human-computer interaction methodologies, to identify criteria and develop models that enable humans to understand that black box.”

Much as elementary school teachers tell their students to show their work in math class, Knausenberger said that by having the algorithm show its work, the Air Force hopes “to better understand what is inside that black box.”
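One generic way a model can “show its work” – not the Air Force/MIT methodology, but a common post-hoc explainability technique – is permutation importance, which measures how much a trained model's accuracy drops when each input feature is shuffled. The sketch below uses scikit-learn on synthetic data; the feature names are purely illustrative assumptions.

```python
# A generic explainability sketch, assuming scikit-learn is available.
# Permutation importance shuffles one feature at a time on held-out data and
# records the resulting drop in accuracy, giving a simple per-feature account
# of what the model is relying on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data stands in for mission data; feature names are illustrative only.
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3, random_state=0)
feature_names = ["sensor_a", "sensor_b", "sensor_c", "noise_1", "noise_2"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on the test set and measure how much accuracy falls.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
    print(f"{name:>8}: importance {mean:.3f} +/- {std:.3f}")
```

Techniques like this give a human reviewer a readable summary of which inputs drove a prediction, which is the spirit of asking the algorithm to show its work.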

As the Air Force continues to educate its workforce on AI, Knausenberger said it can keep its employees doing what they love.

“People don’t like to be used for things that machines should be doing. People want to be focused on high-level thought and decision making,” Knausenberger said.

“Anytime that we have someone not focused on mission and doing manual process or stuck in administration or stuck in things that don’t work, we’re keeping them away from that mission that they love,” she added. “So, part of what we’re doing here is maintaining that focus. People want to be doing things that they’re passionate about and they want to see the effect of their work.”

“I do truly believe that this is a critical investment area and that the digital foundation that we build today will be the foundation for our future competitive advantage,” Knausenberger said.

Grace Dille is MeriTalk's Assistant Managing Editor covering the intersection of government and technology.