Federal Chief Technology Officer Michael Kratsios addressed the similarities and differences between the European Union (EU) and United States’ artificial intelligence (AI) strategies.

At a Hudson Institute event today, Kratsios discussed the recently released EU AI strategy and compared it to the United States’ strategy released last month.

As part of its AI strategy, the European Commission (EC) urges the EU to adopt a “human-centric” approach to AI systems. The EC calls its approach a “twin objective”: promoting the use of AI while addressing its risks. “Harnessing the capacity of the EU to invest in next-generation technologies and infrastructures, as well as in digital competencies like data literacy, will increase Europe’s technological sovereignty in key enabling technologies and infrastructures for the data economy,” the strategy reads.


Kratsios opened his comments with praise for the EC’s plan. “I think we’re very encouraged to see a lot of focus in their document on the importance of fostering an innovation ecosystem that is friendly to artificial intelligence technologies,” he said. He also highlighted that both the EU and the United States take a “values-based approach” to regulating and encouraging AI innovation.

However, he did identify areas where there “is some room for improvement” in the EC’s strategy, namely its proposal to split AI uses into high-risk and low-risk cases. He said that under the EC model, low-risk use cases face very few requirements, while all high-risk use cases must go through “fairly extensive” regulation. “We believe this all or nothing approach isn’t the best approach to regulating AI technologies,” he explained. “We think that AI regulation is best served on a spectrum of sorts.” Kratsios acknowledged that some AI use cases will require heavy Federal regulation, and said the United States is “prepared to do so.” However, he said other use cases that warrant regulation may only need a light touch. This approach aligns closely with the U.S. strategy’s light hand in Federal regulation, which tells Federal regulators to “limit regulatory overreach” of AI technologies and applications by the private sector.

As for the United States’ approach, Kratsios explained that the United States believed it needed to create a model that is “use-based, risk-based, and sector-specific.” He further criticized what he sees as an overly rigid EC model for regulation. “There has to be a bit of spectrum and flexibility in the model so you can regulate appropriately,” he explained.

As part of the need for flexibility, Kratsios highlighted the United States’ focus on public engagement. “The most important thing in the United States’ model is public engagement,” Kratsios said. He explained that while the Federal government has many technology experts on payroll, it also wants to harness the private sector’s knowledge. Additionally, he said that focusing on public engagement increases trust in AI technologies. “We need to engender trust between the American public and the technologies we are using,” he said.

Kate Polit
Kate Polit is MeriTalk's Assistant Copy & Production Editor covering the intersection of government and technology.