Google has vowed to intensify its fight against online extremism, acknowledging that “more needs to be done.” The tech giant pledged to take four additional steps which it believes will help combat terrorist content.
The company announced its plans in a blog post on Sunday, stressing that “there should be no place for terrorist content on our services.”
“While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now.”
Google went on to share four steps it plans to take to combat online terrorism.
First, it will be increasing its use of technology “to help identify extremist and terrorism-related videos.” This includes applying its “most advanced machine learning research” to train new “content classifiers” that will be able to determine if a violent video is, for instance, news footage or an Islamic State (IS, formerly ISIS/ISIL) propaganda clip.
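Google did not disclose how these content classifiers work. As a purely illustrative sketch of the general idea, the toy example below trains a bag-of-words Naive Bayes classifier on hypothetical video descriptions to separate news footage from propaganda; the training data, labels, and function names are all invented for illustration and bear no relation to Google's actual systems.

```python
from collections import Counter
import math

def train(examples):
    """Train a toy multinomial Naive Bayes model.

    examples: list of (text, label) pairs. Hypothetical data only --
    Google's real classifiers and features are undisclosed.
    """
    word_counts = {}          # label -> Counter of word frequencies
    label_counts = Counter()  # label -> number of training documents
    for text, label in examples:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(text.lower().split())
    return word_counts, label_counts

def classify(model, text):
    """Return the label with the highest posterior log-probability."""
    word_counts, label_counts = model
    total_docs = sum(label_counts.values())
    vocab = {w for counts in word_counts.values() for w in counts}
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior plus summed log likelihoods with add-one smoothing
        score = math.log(label_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) /
                              (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

A classifier like this would score a new description against each class, e.g. `classify(model, "live news report from our correspondent")`. Real systems operate on video and audio signals at vastly larger scale, which is why, as Google notes below, such tooling is no silver bullet.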
The company noted, however, that technology alone “is not a silver bullet,” and vowed to increase the number of independent experts in YouTube’s Trusted Flagger program, noting that reports from “trusted flaggers” are accurate over 90 percent of the time.
“Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or noteworthy speech,” Google wrote.
The company also said it will take a “tougher stance” on videos that do not explicitly violate Google’s policies, including those containing “inflammatory religious or supremacist content.”
“In future these will appear behind an interstitial warning and they will not be monetized, recommended or eligible for comments or user endorsements. That means these videos will have less engagement and be harder to find.”
Lastly, Google plans to expand its role in counter-radicalization efforts by building on its existing Creators for Change program, in which “ambassadors” use their online voices to combat hate speech, xenophobia, and extremism. The new addition, called the ‘Redirect Method,’ will redirect potential Islamic State recruits to anti-terrorist content which can “change their minds about joining.”
Google said it has also recently committed to working with industry colleagues including Facebook, Microsoft, and Twitter as part of a joint effort to tackle terrorism online.
“Extremists and terrorists seek to attack and erode not just our security, but also our values; the very things that make our societies open and free. We must not let them,” Google wrote in conclusion.
Google’s blog post came just days after Facebook released its own open letter on how the social network counters terrorism, stating that the company wants Facebook “to be a hostile place for terrorists.”