Meta should get stronger, not weaker, on terror-related content moderation - opinion

Meta’s Oversight Board is considering discarding one of its most important language analysis tools in fighting terrorism on social media.

Facebook Chairman and CEO Mark Zuckerberg testifies at a House Financial Services Committee hearing in Washington, U.S., October 23, 2019 (photo credit: REUTERS/ERIN SCOTT/POOL/FILE PHOTO)

As the Supreme Court considers whether Google and Twitter do enough to remove terrorist material on their platforms, Meta is considering loosening its content moderation standards. 

Meta’s Oversight Board, a 23-member independent panel that makes policy recommendations, has solicited public comment on whether to loosen Meta’s Dangerous Individuals and Organizations policy. That policy mandates the removal of content that uses the Arabic word shaheed to praise someone who participated in a violent act and is affiliated with a known terror organization. 

Shaheed most nearly translates to martyr and is frequently used as an accolade or endorsement of those killed while perpetrating acts of violence and terror. In its briefing on the issue, Meta expresses concern “that its current approach may result in significant over-enforcement, particularly in Arabic-speaking countries.” 

Among other words, shaheed currently serves as a signifier of content that promotes or praises terrorism. It has proven to be a useful indicator of dangerous activity in the digital space. While, on its own, it can certainly be used innocuously when referring to people who pass away young or who are well respected, it “accounts for more content removals under the Community Standards than any other single word or phrase [in Arabic] on Meta’s platforms.” 

Meta is clear on the consequences. In its request, the company acknowledges that loosening this policy will result in more content praising acts of violence being left online. Poor timing aside, discontinuing the use of shaheed as an indicator of harmful content would be a terrible policy decision, one that could hamstring Meta’s capacity to remove content promoting terror on Instagram and Facebook. 

Despite the Board’s concerns about over-enforcement, Meta actually under-utilizes shaheed as a tool for identifying terrorist content. 

In a new analysis, digital antisemitism tracker CyberWell identified and reviewed 300 pieces of content on Facebook that our technology flagged as highly likely to be antisemitic and which contained the word shaheed and its variations. Despite Meta’s current policy, posts using the word shaheed to praise individuals who commit acts of violence are still readily available on the platform. Furthermore, the current policy does not go far enough.

CyberWell thus recommends three powerful ways that shaheed can be further used as a screening mechanism to find and eliminate pro-terror content.

The first is by filtering posts that use shaheed in conjunction with dangerous individuals and organizations that are not yet classified as such by the United States government. 

Meta says it uses the United States government’s official lists of Foreign Terrorist Organizations (FTO), alongside Specially Designated Narcotics Trafficking Kingpins and Specially Designated Global Terrorists, to identify those entities. And while such lists are useful, it takes time and bureaucracy for the State Department to classify a group as an official FTO, making the list a lagging indicator of terror activity—especially in a region like the Middle East. 

Burgeoning terrorist groups like the Lion’s Den, which was recently founded in the West Bank, have yet to be officially named as an FTO. That hasn’t stopped them from carrying out numerous terror attacks, using social media as a recruitment and propaganda tool all the while. In CyberWell’s opinion to the Oversight Board, we outline specific organizations, names, and hashtags that should be flagged and ask Meta to dedicate more resources to identifying similar campaigns praising terror. 
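A co-occurrence screen of this kind can be sketched in a few lines. This is a minimal illustration, not CyberWell’s or Meta’s actual tooling, and every watchlist entry below is a placeholder:

```python
# Minimal sketch of a keyword co-occurrence screen: flag posts where a form
# of "shaheed" appears alongside a watchlisted emerging group or hashtag.
# All set entries are illustrative placeholders, not real moderation data.

SHAHEED_FORMS = {"shaheed", "shahid", "شهيد"}  # simplified variant set
EMERGING_GROUP_TERMS = {"lions_den_example", "#example_hashtag"}  # placeholder watchlist

def flag_post(text: str) -> bool:
    """Return True if the post pairs a shaheed form with a watchlisted term."""
    raw = text.lower().split()
    tokens = {t.strip("#.,!?") for t in raw}      # punctuation-stripped words
    hashtags = {t for t in raw if t.startswith("#")}  # keep raw hashtags too
    has_shaheed = bool(tokens & SHAHEED_FORMS)
    has_group = bool((tokens | hashtags) & EMERGING_GROUP_TERMS)
    return has_shaheed and has_group
```

In practice a screen like this would route matches to a human review queue rather than auto-remove them, since shaheed on its own is often used benignly.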

The second is by targeting posts where shaheed appears alongside a reference to Jews. CyberWell found that a quarter of such content flagged by our technology contained clear Jew-hatred. While that may not seem like a high percentage, the correlation is strong enough to warrant flagging the combination of “Jew” and shaheed in Arabic. One particularly ironic piece of hatred that CyberWell reviewed featuring the word shaheed accused Jews of secretly running the Qatari news network Al-Jazeera. 

Finally, Meta’s efforts to flag shaheed should reflect an understanding of all its linguistic permutations in Arabic. CyberWell’s analysis operationalizes not only the singular but also the feminine, plural, misspelled, and verb forms of the word to more accurately capture the full range of its use. Meta could also flag content more efficiently by pairing shaheed with other keywords commonly linked to praise of acts of terror. For example, CyberWell increased the precision of its identification of violent pro-terror content by searching specifically for posts that contained the Arabic word for “hero” alongside shaheed.
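Variant expansion of this sort can be sketched with a small hand-built table; the table below covers a few common forms for illustration only and is not an exhaustive linguistic resource:

```python
# Minimal sketch of variant expansion: match the feminine, plural, definite,
# and possessive forms of شهيد (shaheed), and optionally require
# co-occurrence with بطل ("hero"). Illustrative only.

VARIANTS = {
    "شهيد": [
        "شهيدة",   # feminine singular
        "شهداء",   # plural (shuhada)
        "الشهيد",  # with definite article
        "شهيدنا",  # possessive ("our shaheed")
    ],
}
HERO = "بطل"  # Arabic for "hero"

def expand(base: str) -> set[str]:
    """Return the base keyword plus its known variant forms."""
    return {base, *VARIANTS.get(base, [])}

def matches(text: str, base: str = "شهيد", require_hero: bool = False) -> bool:
    """Substring-match any variant form; optionally also require HERO."""
    hit = any(form in text for form in expand(base))
    if require_hero:
        return hit and HERO in text
    return hit
```

Substring matching keeps the sketch short; a production system would use proper Arabic tokenization and normalization to handle diacritics and spelling variation.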

Meta’s willingness to loosen its policy on shaheed isn’t just tone-deaf, it is dangerous. Meta itself acknowledges that a relaxed approach could enable “content on its platforms that intends to legitimize terrorism” and “could be perceived as promoting voice over the value of safety.” 

Instead of weakening enforcement, Meta should be refining its use of shaheed to better capture the full range of its implementation in violent pro-terror contexts. CyberWell has submitted our opinion to the Oversight Board, and we encourage users to do the same. Please submit your comments to take an active role in holding social media platforms accountable. 

Tal-Or Cohen Montemayor is the Founder and Executive Director of CyberWell, the world’s first live database of online antisemitism. CyberWell’s platform is designed to drive the enforcement and improvement of community standards and hate speech policies across the digital space.