The Organization for Economic Cooperation and Development (OECD) plans to open an artificial intelligence (AI) policy “observatory” later this year, following the organization’s approval in May of an intergovernmental standard on AI agreed to by 36 member countries, including the U.S.
That was the word on July 25 from Adam Murray, an international relations officer in the State Department’s Office of International Communications and Information Policy, at an event organized by AI in Government. Murray represents the U.S. at the OECD, and in his remarks he explained the “human-centered” AI principles the organization adopted.
The principles include five pillars for countries to prioritize: inclusive growth, sustainable development, and well-being; human-centered values and fairness; transparency and “explainability”; robustness, security, and safety; and accountability. Policy practices that can further those principles include: investing in AI research and development; fostering a digital ecosystem for AI; shaping “enabling policies” for AI; building human capacity and preparing for labor market transformation; and international cooperation to build “trustworthy” AI.
Murray said the coming AI Policy Observatory will be an online hub for AI public policy and metrics.
Speaking about the OECD AI standard, Murray said it aims to promote trust in the technology and “sustainable growth for all.”
He said the standard also lays the groundwork for a “regulatory outlook that goes from the laboratory to the market.”
With the standard in place, “going forward our focus is on implementation,” Murray said, adding that the policy observatory should help in that effort.