Twitter updates its rules against hateful conduct to include religion

“Our primary focus is on addressing the risks of offline harm,” Twitter explained, “and research shows that dehumanizing language increases that risk.”

A 3D-printed logo for Twitter is seen in this illustrative picture (photo credit: DADO RUVIC/REUTERS)
The microblogging site Twitter has updated its rules against hateful conduct to include language that dehumanizes others on the basis of religion, according to a “Twitter Safety” blog post.
“Our primary focus is on addressing the risks of offline harm,” the post, published Tuesday, explained, “and research shows that dehumanizing language increases that risk.”
An example of such dehumanizing religious language to be removed is: “We need to exterminate the rats. The [members of x religious group] are disgusting.” Another example: “We don’t want more [members of x] in this country. Enough is enough with those MAGGOTS!”
Twitter said that if tweets posted before the rule took effect are reported and found to violate it, the company will delete those posts but will not suspend the users’ accounts.
The social media giant defines dehumanization as: “Language that treats others as less than human. Dehumanization can occur when others are denied of human qualities (animalistic dehumanization) or when others are denied of their human nature (mechanistic dehumanization).”
According to the post, Twitter asked for feedback on its dehumanization policy last year and received more than 8,000 responses from people in more than 30 countries. This update reflects that feedback.

In addition, Twitter said it has tried to make its rules easier to understand and has given its staff longer, more in-depth training so they are well informed when reviewing reports.
Twitter is considering extending the ban to dehumanizing language directed at other protected groups, but said this will require additional time and research.
In recent years, researchers and organizations have studied possible causal links between online hate speech and real-world hate crime. For example, in 2018, German researchers published a report called “Fanning the Flames of Hate: Social Media and Hate Crime,” which found that social media has not only become fertile ground for the spread of hateful ideas but also motivates real-life action.
The Pittsburgh synagogue gunman used the social network Gab to threaten Jews.
“We will continue to build Twitter for the global community it serves,” said the blog post, “and ensure your voices help shape our rules, product and how we work.”