An important step in advancing artificial intelligence (AI) initiatives is fortifying AI algorithms, which are often brittle and “not good,” said Dr. John Beieler, program manager at the Intelligence Advanced Research Projects Activity (IARPA). Strengthening those algorithms will also have ripple effects for automation, panelists at the Defense One Genius Machines event said today.

“We have bad algorithms,” Dr. Beieler said when asked how close the Federal government is to making larger advancements in AI.

Panelists including Dr. Beieler and Dr. Adam Cardinal-Stakenas, Chief of the Data Science Research Division for the National Security Agency’s (NSA) Research Directorate, said that because of their fragility, algorithms can break when real-world testing conditions change. Even slight physical changes in a test can confuse an algorithm and leave it vulnerable.

Data scientists will play an important role in broadening the capabilities of AI, and analysts may be better equipped to answer the “why” questions about the causality of events, the panelists said.

Dr. Beieler offered the example of an AI system that can detect the outbreak of a riot but cannot interpret how it started or accurately identify the signs of a potential disturbance. The best AI can do at this point, he said, is give analysts information that would otherwise take hundreds of analysts countless hours to organize.

While the fear that automation will take away blue-collar jobs is valid, technology professionals don’t need to worry yet about automation taking over their positions. Dr. Cardinal-Stakenas said the day-to-day need for analysts in the AI process won’t change much, because humans and machines still depend heavily on each other.
