The Government Accountability Office (GAO) recently released a report on an artificial intelligence forum it held in Washington, D.C. last summer. The report offers a glimpse of the government's thinking about the ups and downs of thinking machines. Here are two highlights to make you think.

Cyber Insider Threat
“Adversaries would be highly motivated to exploit cyber defense systems that are based on machine learning algorithms,” GAO determined. “Adversaries could attempt to pollute data used to re-train machine learning algorithms. Other attacks might attempt to trick machine learning algorithms by repeatedly testing for and then exploiting blind spots in the learning module of an algorithm.”
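The report stays at a high level, but the data-poisoning attack it describes is easy to sketch. The toy example below is purely illustrative and an assumption on our part, not something drawn from the GAO report: an adversary flips labels on a fraction of the data used to re-train a simple scikit-learn classifier, and accuracy on clean test data drops accordingly.

```python
# Toy illustration of the data-poisoning attack GAO describes: an adversary
# corrupts the labels in the data used to re-train a defensive model.
# Illustrative sketch only -- not taken from the GAO report.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in for network traffic labeled benign (0) or malicious (1).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

def retrain_and_score(labels):
    """Re-train the detector on the given labels, score on clean test data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

print("clean retraining data:   ", retrain_and_score(y_train))

# The adversary quietly flips 30% of the labels before retraining.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

print("poisoned retraining data:", retrain_and_score(poisoned))
```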

With the odds stacked against a Federal enterprise that responds to billions of cyber incidents each day, GAO contends that developing ethical, intelligent systems that responsibly manage data will be key to leveraging AI for cybersecurity as the threat landscape continues to shift.

Add the alarming admission that even data scientists don't fully understand how these machines learn, and that need becomes even more apparent. The Department of Defense is already testing autonomous capabilities in its war machines. "Teaching" morality to a machine sounds like the stuff of science fiction, but that's the world we've entered, and finding a way to do it will be imperative when lives hang in the balance.

Legal

“The widespread adoption of AI raises questions about the adequacy of current laws and regulations,” GAO said. The report promotes greater interaction between legislators and innovators.

Access to enterprise AI data could be essential for the government to address public safety and privacy concerns. Businesses, meanwhile, will look to leverage that same data for competitive advantage.

“Establishing a ‘safe space’ to protect sensitive information such as intellectual property, privacy and brand information,” GAO concluded, will encourage businesses to more readily contribute their data. Forum participants shared optimism that government could get the data it needed to properly protect the public while maintaining proprietary data protections. Doing so may require legislative retooling.

For example, the implications of decision making become murky when machines, rather than people, are making the decisions, particularly under laws where intent plays a key role. "If someone programs AI to learn to make money, and it does so in a nefarious way, it is not clear how current laws could be used to prosecute the creator of the AI," one panelist asserted.

If you want to dig deeper, read the full report here.

Joe Franco is a Program Manager, covering IT modernization, cyber, and government IT policy for MeriTalk.com.