The Office of Management and Budget (OMB) is taking care not to introduce unintended bias through its use of machine learning and artificial intelligence (AI) technologies, according to Margie Graves, Deputy CIO at OMB, who said today that Federal agencies should be very judicious about the technologies’ “fit for use.”

Getting the wrong answers and violating civil liberties and privacy isn’t an option when it comes to innovation, Graves said at AFCEA’s Homeland Security event.

“Am I incorporating unintended bias? Because machines can learn by us just like anything else,” Graves offered. She went on to describe the challenge of distinguishing feedback that represents a legitimate, implementation-oriented tweak to an algorithm from feedback that is less well founded and may introduce unintentional bias.

“It’s almost like when you have a medical breakthrough, you want to make sure that the ethics catch up, the rules of the road, the constant conversation of ‘you know what? We didn’t think about this – we need to address it,’ and we need to give people, you know, a solid platform to where they don’t vary from intent,” she said.

Input from private citizens and privacy groups will be essential to avoiding unintended bias and giving ethics a chance to catch up, she said. Crowdsourcing input is at the center of the data strategy that OMB is about to launch, Graves said.

Jordan Smith
Jordan Smith is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.