As government leaders work to develop artificial intelligence policy, AI experts today urged lawmakers to ensure that any AI policy framework increases transparency and public trust in the technology.
At the Dell Technologies Forum in Washington, D.C., today, AI experts said during a roundtable discussion that policymakers now have an opening to build that kind of trust and transparency into AI policy.
“We should start thinking about transparency and disclosure,” said John Roese, the global chief technology officer at Dell Technologies. “If you walked into a supermarket, picked up something and it didn’t have a food label on it, would you eat it? Of course not. Because after a few decades, we’ve been conditioned to say that disclosure is important.”
“The measure of trust is not based on necessarily the provider, but are they following the conventions that we’ve created to provide a known informed consumer,” he added.
The goal of AI policy, Roese said, should be “to increase transparency and trust between the AI systems and humanity that’s using them.”
Greg Myers, operating partner at Cota Capital and former vice president of Microsoft Federal, agreed with Roese, noting that AI policy should be viewed in a positive light.
“I think great policy that encourages trust and clarity is kerosene for innovation. It’s not a hindrance. It’s not a headwind,” Myers said. “Clarity would really help spark this thing.”
Bobbie Stempfley, vice president and business unit security officer at Dell Technologies, added that AI policy will only help to improve the technology.
“When the results of the technology are bad, it tends to be because the process or the culture is bad,” Stempfley said. “Digital systems are generally just a manifestation of organizational realities, and if you can’t tackle that organizational reality, you aren’t going to improve it just because you digitize it.”
Myers said that industry has a responsibility to provide government executives and policymakers with the right AI solutions, and “to sit down with clients and talk about outcomes.”
“One of the things that kind of underlies good policymaking is a good understanding of the technology, a good understanding of what problems we should even try to be solving,” said Hodan Omaar, a senior policy analyst at the Information Technology and Innovation Foundation’s (ITIF) Center for Data Innovation.
The Biden administration is expected to release its AI executive order (EO) any day now, and Omaar said she hopes the EO will focus on AI regulation, especially as it relates to AI innovation standards.
“You can’t do regulation well if you don’t have good standards. If we don’t have a good definition of AI, what are we even regulating? … So, I’d like to see more of the [EO] conversation tilt on that side of things,” Omaar said.