Google urged governments to avoid excessive regulation of artificial intelligence (AI), instead suggesting “international standards and norms” in key areas in a white paper released last week.

“Overall, Google believes the optimal governance regime is one that is flexible and able to keep pace with developments, while respecting cultural differences. We believe that self and co-regulatory approaches will remain the most effective practical way to address and prevent AI related problems in the vast majority of instances, within the boundaries already set by sector-specific regulation,” the paper notes.

However, Google offered five areas where government “has a crucial role to play in clarifying expectations about AI’s application on a context-specific basis.” They are:

  • Explainability standards;
  • Fairness appraisal;
  • Safety considerations;
  • Human-AI collaboration; and
  • Liability frameworks.

On explainability, the paper notes that offering high-level explanations for why AI systems behave a certain way would help improve public trust and ensure accountability. Google suggests that governments assemble a collection of best practices and create a scale with different levels of explanation.

On fairness appraisal, the paper acknowledges that AI is increasingly involved in decision-making. Google suggests that governments clarify how competing fairness factors should be prioritized, and assess how privacy and discrimination laws affect the measurement of AI decisions.

Regarding safety considerations, the paper notes that precautions are needed to ensure human safety, but within reason and in proportion to the potential harm. The company suggests that governments work with companies to develop safety certification marks.

When addressing human-AI collaboration, the paper notes that while some processes should include a human in the loop, others may benefit from not having human involvement. The paper suggests that governments establish red-line areas where human involvement is required, and offer guidance on when humans can turn off AI systems that may pose consequences for life and property.

On liability frameworks, Google made clear that it does not support legal personhood for AI, and suggested that governments take a cautious approach to liability laws.

MeriTalk Staff