Israel calls on world nations to regulate social media anti-Semitism

Ministry official states that while the issue is certainly controversial for Americans, it is important to discern the nature of the Internet and to act accordingly.

Social media apps Twitter and Facebook [Illustrative] (photo credit: REUTERS)
The Foreign Ministry on Monday called on governments around the world to regulate social media in order to combat anti-Semitism and violent incitement, reiterating the support for Internet censorship that the government voiced last year at an anti-racism conference.
Speaking at the annual gathering of the Conference of Presidents of Major American Jewish Organizations in Jerusalem, Akiva Tor, the director of the Foreign Ministry’s Department for Jewish Communities, stated that while the issue is certainly controversial for Americans, it is important to discern the nature of the Internet and to act accordingly.
“What is YouTube? What is Facebook? What is Twitter? And what is Google?” he asked. “Are they a free speech corner like [London’s] Hyde Park or are they more similar to a radio station in the public domain?” Referring to cartoons of Palestinians killing Jews and other such material circulating online, Tor asked why platforms such as Google search, YouTube, Facebook and Twitter are “tolerating” violent incitement and “saying they are protected in a holy way by free speech.”
“How is it possible that the government of France and the European Union all feel that incitement in Arabic on social media in Europe calling for physical attacks on Jews is permitted and that there is no requirement from industry to do something about it?” he continued, adding that Israel is working with European partners to push the technology sector to adopt a definition of anti-Semitism so its constituent companies can “take responsibility for what they host.”
Tor also took issue with Facebook for its position that it will take down material that violates its terms of service following a complaint, asking why the social-networking giant cannot self-regulate and use the technology at its disposal to identify and take down offending content automatically.
“If they know how to deliver a specific ad to your Facebook page, they know how to detect speech in Arabic calling to stab someone in the neck. It is outrageous [that technology] companies hide behind the First Amendment. Industry won’t correct itself without regulatory requirements by governments,” he asserted.
Following the Foreign Ministry’s biennial Global Forum for Combating Anti-Semitism last year, a similar statement was issued calling for the scrubbing of Holocaust denial websites from the Internet and the omission of “hate websites and content” from web searches.
Citing the “pervasive, expansive and transnational” nature of the Internet and the viral nature of hate materials, that conference’s final document called upon Internet service providers, web hosting companies and social media platforms to adopt a “clear industry standard for defining hate speech and anti-Semitism” as well as terms of service that prohibit their posting.
Such moves, the document asserted, must be implemented while preserving the Internet’s “essential freedom.”
The GFCA document called upon national governments to establish legal units focused on combating cyberhate and to utilize existing legislation to prosecute those engaging in such prejudices online.
The document likewise recommended that governments require the adoption of “global terms of service prohibiting the posting of hate speech and anti-Semitic materials.”
In the United States, content-hosting companies are generally exempt from liability for illegal material as long as they take steps to take it down when notified.
According to Harvard’s Digital Media Law Project, online publishers who passively host third-party content are considered fully protected from liability for acts such as defamation under the Communications Decency Act.
Despite the broad immunities given to online publishers, both under the First Amendment and the Communications Decency Act, there are many in Israel who believe that social networks bear significant responsibility for hosted content.
Last October, 20,000 Israelis sued Facebook, alleging that the social media platform was disregarding incitement and calls to murder Jews posted by Palestinians.
The civil complaint sought an injunction to require Facebook to block all racist incitement and calls for violence against Jews in Israel, but no damages.
It acknowledged that Facebook has taken some steps (such as implementing rules concerning content it will prohibit) and that it has taken down some extreme calls for murder, but only after Israelis complained.
The plaintiffs argue that Facebook is “far from a neutral or passive social media platform and cannot claim it is a mere bulletin board for other parties’ postings.”
They say Facebook “utilizes sophisticated algorithms to serve personalized ads, monitor users’ activities and connect them to potential friends” and claim it “has the ability to monitor and block postings by extremists and terrorists urging violence, just as it restricts pornography.”
In a December op-ed in The New York Times, Google executive chairman Eric Schmidt wrote that the technology industry “should build tools to help deescalate tensions on social media – sort of like spell-checkers, but for hate and harassment.”
“We should target social accounts for terrorist groups like the Islamic State and remove videos before they spread, or help those countering terrorist messages to find their voice.
Without this type of leadership from government, from citizens, from tech companies, the Internet could become a vehicle for further disaggregation of poorly built societies, and the empowerment of the wrong people and the wrong voices,” he wrote.
Several days later, Germany announced that Facebook, Google and Twitter had agreed to delete hate speech from their websites within 24 hours.
Berlin has been trying to get social platforms to crack down on the rise in anti-foreigner comments in German on the web as the country struggles to cope with an influx of more than 1 million refugees last year.
Despite these efforts, however, Twitter recently posted on its company blog that “there is no ‘magic algorithm’ for identifying terrorist content on the Internet, so global online platforms are forced to make challenging judgment calls based on very limited information and guidance.”
“In spite of these challenges, we will continue to aggressively enforce our rules in this area, and engage with authorities and other relevant organizations to find solutions to this critical issue and promote powerful counter-speech narratives.”
Asked about Tor’s policy recommendations Monday, Simon Wiesenthal Center associate dean Rabbi Abraham Cooper replied that based on recent meetings he believes that both private industry and European governments have been taking the issue much more seriously since November’s terrorist attacks in Paris.
In the case of Twitter, Cooper said that while work remains to be done, the micro-blogging company is “now taking significant steps on the terrorism issue and… [now] there is a whole different mentality and attitude when it comes to terrorism.”
Addressing the issue requires a sustained effort by interested parties to lobby companies for more transparent rules on hate speech, Cooper added, saying Tor is “right to raise the alarm” but that he is unsure that passing legislation should be the first priority.
“I don’t know if you have to go there,” he said.
Yonah Jeremy Bob and Reuters contributed to this report.