Twitter updates its rules against hateful conduct to include religion

“Our primary focus is on addressing the risks of offline harm,” Twitter explained, “and research shows that dehumanizing language increases that risk."

July 11, 2019 19:10

A 3D-printed logo for Twitter is seen in this illustrative picture. (photo credit: DADO RUVIC/REUTERS)




The microblogging site Twitter has updated its rules against hateful conduct to include language that dehumanizes others on the basis of religion, according to a “Twitter Safety” blog post.

“Our primary focus is on addressing the risks of offline harm,” the post, published Tuesday, explained, “and research shows that dehumanizing language increases that risk.”

An example of dehumanizing religious language that will be removed: “We need to exterminate the rats. The [members of x religious group] are disgusting.” Another example: “We don’t want more [members of x] in this country. Enough is enough with those MAGGOTS!”

Twitter said that if tweets posted before the rule was established are reported and found to violate the new rules, it will delete those tweets but will not suspend the users’ accounts.

The social media giant defines dehumanization as: “Language that treats others as less than human. Dehumanization can occur when others are denied of human qualities (animalistic dehumanization) or when others are denied of their human nature (mechanistic dehumanization).”

According to the post, Twitter asked for feedback on its dehumanization policy last year and received more than 8,000 responses from people in more than 30 countries. This update incorporates that feedback.

In addition, Twitter said it has tried to make its rules easier to understand and has provided longer, more in-depth training to its staff to ensure they are well informed when reviewing reports.

Twitter is looking into adding a ban on language directed at other protected groups, but said this will require some additional time and research.

In recent years, researchers and organizations have studied possible causal links between online hate speech and hate crime. For example, in 2018, German researchers published a report called “Fanning the Flames of Hate: Social Media and Hate Crime,” which found that social media has not only become fertile ground for the spread of hateful ideas but can also motivate real-life action.

The Pittsburgh synagogue gunman used the social network Gab to threaten Jews.

“We will continue to build Twitter for the global community it serves,” said the blog post, “and ensure your voices help shape our rules, product and how we work.”

