When calls for murder do not violate social media community standards, we have a problem

The terrorists were on many occasions known to police and had already hinted at their intentions online.

A short while ago I received an email about a Facebook post containing a poem that called for the stabbing of Jews. Unfortunately, the Internet, and social media especially, appears to be full of hatred of and incitement against Jews. What shocked me most, however, was that once this post was reported to Facebook through its internal reporting system, the answer came back within a few short hours that the post “doesn’t violate its community standards.”
During the months of regular stabbing attacks against Jews in Israel, attacks which cost dozens of lives and eventually spread to Europe and other places around the world, the “#stab” hashtag was exceedingly popular on Twitter. Social networks were used not only to legitimize the attacks, but also to urge further ones.
The “megaphone effect” of social media has been demonstrably used to recruit terrorists, incite violence and preach hatred. Many of those who recently committed terrorist attacks across the globe left strong social media footprints that make their ideology clear, and sometimes even their bloody intentions.
In June of this year, Larossi Abballa filmed himself murdering a couple in France, and in July two youths pledged allegiance to Islamic State before filming themselves slitting a priest’s throat near Rouen. Both videos appeared on social media networks.
In many cases the terrorists were already known to police and had hinted at their intentions online.
Two years ago, Robert Hannigan, then head of the UK’s Government Communications Headquarters (GCHQ), said that Internet giants such as Twitter, Facebook and WhatsApp had become “command-and-control networks... for terrorists and criminals.” He suggested that the executives who ran these organizations were “in denial” about the way their platforms were being used for nefarious purposes.
It should go without saying that the bar for restricting freedom of expression must be set extremely high, but when one person’s freedom of expression infringes on another’s right to safety and security, it is clear which should take precedence.
As legal philosopher Zechariah Chafee, Jr. once wrote, “Your right to swing your arms ends just where the other man’s nose begins.”
This principle should apply to any means of communication, including social media. Social networks must be held accountable for hate speech spread on their platforms. They are not passive observers: they already have the capability to monitor billions of accounts for commercial purposes, so they can just as easily monitor, delete and ban the purveyors of hate speech and incitement to violence.
Some academics at the University of Miami have even created an algorithm that could help us predict future terrorist events.
Earlier in the year, Germany made a deal with Facebook, Twitter and Google to get tougher on offensive content, especially content directed against migrants, with the companies agreeing to apply domestic laws, rather than their own corporate policies, when reviewing posts. Subsequently, the networks hired hundreds of people to filter their German content.
Additionally, in May, Facebook, Twitter, Microsoft and YouTube agreed to the European Commission’s regulations that require them to review “the majority of” hateful online content within 24 hours of being notified, and to remove it if necessary, as part of a new “code of conduct” aimed at combating online hate speech and terrorist propaganda across the EU.
These are good examples of how national governments can and should apply laws against incitement more vigorously, in tandem with the social networks.
Hate speech has a far more powerful and multiplying effect on social networks than anywhere else, and should be treated accordingly.
Social media networks should be more vigorous in suspending those who incite hatred and recruit for terrorism. If suspensions are not carried out at a consistent pace and with consistent criteria, the targeted networks will regenerate.
As J.M. Berger, a fellow with George Washington University’s Program on Extremism, testified before the US House of Representatives Committee on Foreign Affairs in 2015: “The suspension process is akin to weeding a garden. You don’t ‘defeat’ weeds, you manage them, and if you stop weeding, they will grow back.”
To defeat extremism, hate and terrorism online, a new international authority should be created to monitor and recommend best practices to both governments and those running social networks in an open and transparent manner, so all stakeholders are aware of the issue and its parameters.
Social media has tremendous positive aspects and has been used to break down barriers and borders in a way impossible just a generation ago.
It is literally building communities of people who might otherwise never interact, and it is right that these communities have codes and standards of practice, like any other.
Nonetheless, when calls to kill and to spread hatred are ignored, those standards are deeply lacking and should be reformulated in a way that makes all people feel safe online, and in particular on social networks.
The author is the president of the European Jewish Congress and the European Council on Tolerance and Reconciliation.