The Department of State is working to engage the technology sector in countering violent extremism online.

“Over the last year, tech companies have removed more and more terrorist content that violates their terms of service agreements,” Michael R. Ortiz, deputy coordinator of countering violent extremism in the Bureau of Counterterrorism and Countering Violent Extremism at the State Department, wrote in a blog post.

The Global Coalition to Counter-ISIL’s Communications Working Group, which is led by the United States, the United Arab Emirates, and the United Kingdom, regularly meets with technology companies from 30 countries to share methods to counter extremist online posts and promote positive narratives instead.

“When some governments see violent extremist messaging and recruitment material proliferating online, their first reaction is to take down accounts or sites–or in more extreme cases, shut down the Internet altogether,” Ortiz said. “While this may temporarily limit terrorists’ online activities, it also can have a significant negative impact on positive uses of the Internet, and may result in individuals becoming more frustrated with government institutions. We need to take great care as we proceed.”

The State Department sponsored a Hacking for Diplomacy class at Stanford University in the fall. Applicants to the course submitted a proposed solution, combining technical expertise and social science, for countering extremist messages online or another specified problem.

“We want the private sector, the tech sector specifically, to figure out how we can leverage the power of the crowd to take on two groups of trolls,” said Shaarik Zafar, special representative to Muslim communities at the State Department, in a video to interested students.

The first group of negative online commenters includes people who, after a terrorist attack, post hateful messages broadly targeting an entire group of people.

“We want to figure out what ways we can promote resilience online after these types of attacks by empowering and highlighting positive voices,” Zafar said.

The second group includes people who pull others into committing violent acts.

“We don’t want to censor voices,” Zafar said. “We believe in free speech, but we do believe there’s a role in highlighting, at a time of great suffering and tragedy, positive voices.”

Stanford students who were interested in taking the class could choose this problem to tackle when they submitted their applications. Students worked in teams in a startup atmosphere alongside mentors from the State Department and private sector, including Google.

In December, the European Commission told Silicon Valley companies to do more to curb hate speech online. A commission report found that of roughly 600 posts flagged as perceived hate speech, only 40 percent were reviewed within 24 hours, and about a quarter were ultimately removed from the sites.

One day before the commission’s report was publicly released, Facebook, Microsoft, Twitter, and YouTube announced that they would create a joint database of digital fingerprints for violent terrorist imagery they have removed from their platforms. The companies promised to share the fingerprints with one another with the goals of identifying potential terrorist content more efficiently and curbing its spread online. Companies that aren’t participating in the project can also use the information to identify similar content on their platforms, review it against their own policies, and decide whether to remove the material.
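The mechanics of such a shared database can be sketched roughly: each participating platform computes a digital fingerprint (a hash) of imagery it has removed, contributes the fingerprint to a common set, and checks new uploads against that set. Below is a minimal, hypothetical illustration in Python; the class and method names are invented for this sketch, and a plain SHA-256 hash stands in for the perceptual hashes such systems actually use (SHA-256 only matches byte-identical copies).

```python
import hashlib


def fingerprint(data: bytes) -> str:
    """Compute a digital fingerprint for a piece of content.

    Real systems use perceptual hashes that tolerate re-encoding and
    cropping; SHA-256 here only matches byte-identical copies.
    """
    return hashlib.sha256(data).hexdigest()


class SharedHashDatabase:
    """Hypothetical joint database of fingerprints of removed content."""

    def __init__(self) -> None:
        self._hashes: set[str] = set()

    def contribute(self, data: bytes) -> str:
        """A participating platform adds the fingerprint of removed content."""
        h = fingerprint(data)
        self._hashes.add(h)
        return h

    def matches(self, data: bytes) -> bool:
        """Check an upload against the shared set.

        A match only flags the content for review; each platform still
        decides removal under its own policies.
        """
        return fingerprint(data) in self._hashes


db = SharedHashDatabase()
db.contribute(b"previously-removed-image-bytes")
print(db.matches(b"previously-removed-image-bytes"))  # True: known content
print(db.matches(b"unrelated-image-bytes"))           # False: no flag raised
```

Note that in this design a match triggers review rather than automatic removal, which mirrors the article’s point that each company reviews flagged material against its own policies.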

“It is clear: countering violent extremism online remains a complex challenge,” said Ortiz. “Addressing it will require governments, private companies, civil society organizations, schools, and communities to work together to make progress on making the Internet safer for all.”

Morgan Lynch is a Staff Reporter for MeriTalk covering Federal IT and K-12 Education.