Microsoft Corp. this week offered up five key points for governments to consider as they weigh whether and how to regulate artificial intelligence technologies.

Brad Smith, who has served in top legal and regulatory roles at Microsoft over the past 20 years and is now the company’s president and vice chair, laid out the company’s AI regulatory vision in a May 25 blog post.

In discussing regulation scenarios, Smith reprised a statement from a 2019 book he co-wrote: “Don’t ask what computers can do, ask what they should do.” He added, “Four years later, the question has seized center stage not just in the world’s capitals, but around many dinner tables.”

“Another conclusion is equally important: It’s not enough to focus only on the many opportunities to use AI to improve people’s lives,” Smith said.

“This is perhaps one of the most important lessons from the role of social media,” he continued. “Little more than a decade ago, technologists and political commentators alike gushed about the role of social media in spreading democracy during the Arab Spring. Yet, five years after that, we learned that social media, like so many other technologies before it, would become both a weapon and a tool – in this case, aimed at democracy itself.”

“Today we are 10 years older and wiser, and we need to put that wisdom to work,” Smith said. “We need to think early on and in a clear-eyed way about the problems that could lie ahead. As technology moves forward, it’s just as important to ensure proper control over AI as it is to pursue its benefits.”

AI “guardrails,” as Smith described regulation of the technology, “require a broadly shared sense of responsibility and should not be left to technology companies alone,” he said.

Smith urged governments to consider the following five areas from his “blueprint” for addressing current and emerging AI issues through public policy, law, and regulation:

  • Build on new government-led AI safety frameworks. “The best way to succeed is often to build on the successes and good ideas of others … Especially when one wants to move quickly,” Smith said. “In this instance, there is an important opportunity to build on work completed just four months ago by the U.S. National Institute of Standards and Technology, or NIST. Part of the Department of Commerce, NIST has completed and launched a new AI Risk Management Framework.”
  • Require effective “safety brakes” for AI systems that control critical infrastructure. “This is the right time to discuss this question,” Smith said. “This blueprint proposes new safety requirements that, in effect, would create safety brakes for AI systems that control the operation of designated critical infrastructure. These fail-safe systems would be part of a comprehensive approach to system safety that would keep effective human oversight, resilience, and robustness top of mind. In spirit, they would be similar to the braking systems engineers have long built into other technologies such as elevators, school buses, and high-speed trains, to safely manage not just everyday scenarios, but emergencies as well.”
  • Develop a broad legal and regulatory framework based on the technology architecture for AI. “We believe there will need to be a legal and regulatory architecture for AI that reflects the technology architecture for AI itself,” Smith said. “In short, the law will need to place various regulatory responsibilities upon different actors based upon their role in managing different aspects of AI technology. For this reason, this blueprint includes information about some of the critical pieces that go into building and using new generative AI models.”
  • Promote transparency and ensure academic and nonprofit access to AI. “While there are some important tensions between transparency and the need for security, there exist many opportunities to make AI systems more transparent in a responsible way,” Smith said. “That’s why Microsoft is committing to an annual AI transparency report and other steps to expand transparency for our AI services. We also believe it is critical to expand access to AI resources for academic research and the nonprofit community.”
  • Pursue new public-private partnerships to use AI to address “inevitable societal challenges” that the technology will create. “One lesson from recent years is what democratic societies can accomplish when they harness the power of technology and bring the public and private sectors together,” Smith said. “It’s a lesson we need to build upon to address the impact of AI on society.” He continued, “Important work is needed now to use AI to protect democracy and fundamental rights, provide broad access to the AI skills that will promote inclusive growth, and use the power of AI to advance the planet’s sustainability needs.”
John Curran is MeriTalk's Managing Editor covering the intersection of government and technology.