TikTok rife with racist, antisemitic content aimed at children – study

Though TikTok's Terms of Service say that users cannot upload hateful content, the app is very lax about enforcing these rules.

An example of antisemitic content spread through TikTok. (photo credit: screenshot)
Far-right extremists have found a new social media platform to spread hateful content online: the relatively new TikTok platform, which is primarily used by young children, a new study revealed.
The study was written by Gabriel Weimann, professor of communication and senior researcher at the Institute for Counter-Terrorism (ICT), professor emeritus at the University of Haifa and guest professor at the University of Maryland, and Natalie Masri, a research assistant and graduate student at ICT. It is set to be published in the academic journal Studies in Conflict & Terrorism.
Prof. Gabriel Weimann. (Photo credit: Courtesy)
Weimann is an expert in the online activities of terrorists, and has been studying them since 1998. While he previously focused on other groups, such as Islamist terrorists, he recently shifted to studying the far Right. This, he told The Jerusalem Post, is because of how widespread it has become in recent years.
“The far Right is now the most dominant source of terrorism in the world,” Weimann explained. “More people are killed by the far Right today than by Islamist terrorists.”
In addition, the far Right has also gone unchecked for too long, he added, as it has only recently begun to be considered terrorism.
“The far Right has traditionally been protected from being defined as terrorism because they were protected by free speech and politics, among other things,” Weimann said. “But recently they started going beyond hate speech and have started committing actual, public attacks.”
The far Right's use of online platforms to spread hate is nothing new; it has been going on for at least three decades. However, most people think only of its activities on mainstream social media platforms such as Facebook, Twitter and Instagram, or on sites rife with such content, like 4chan.
TikTok, however, has been largely overlooked.
Weimann discovered the presence of the far Right on TikTok by accident, after research assistant and co-author Masri came across it while doing research online.
“I was shocked when she told me, because I thought it was only for kids. Even my granddaughter uses TikTok,” Weimann told the Post. “But after I saw what she found and we looked into it, I decided to spend several months surveying the app.”
Developed by Chinese company ByteDance, TikTok is a relatively new social media app, having launched in 2016 in China and in 2017 abroad. The app allows users to upload and view short lip-synced videos with an easy-to-use interface.
While many users seek only to upload humorous and entertaining memes, others use the app to spread hateful messages, with many of them sharing neo-Nazi and white supremacist propaganda or calling for violence against Jews and people of color.
TikTok is the fastest-growing app and social media platform in the world. It’s the seventh most downloaded app of the decade, with over two billion downloads, and it boasts a user base in the hundreds of millions.
This, Weimann explained, is one of the reasons why the far Right’s presence on TikTok is especially dangerous when compared to Facebook and Twitter.
ANOTHER REASON is the age of its users.
“The app is marketed towards young children, from the age of 4-5 to 15-16: a very innocent and gullible audience,” he told the Post.
Though the app's Terms of Service specify that all users must be aged 13 or older, many are still clearly younger than that. In addition, 41% of its users are aged 16-24.
This exposure to hateful content at such a young age is especially dangerous, because it has the potential to glorify hate crimes and seduce impressionable youths into developing extremist views, or even committing violent acts themselves.
“There is a direct correlation between the rise of far-right extremism and the far Right’s presence online,” Weimann explained. “All of the far-right individuals who committed terrorist attacks against synagogues, like in Halle, Pittsburgh and Poway, and mosques, like in Christchurch, were active online, uploading extremist content and being exposed to the extremist content of others. Some of these terrorists became heroes, too. Brenton Tarrant, who was behind the 2019 Christchurch mosque shooting in New Zealand, videoed and livestreamed his attack, and he’s now celebrated.”
While there are unlikely to be adults streaming their attacks on TikTok in the manner Tarrant did on Facebook Live, Weimann explained that this isn’t their goal in the first place.
“It’s a different audience. That means they [the far Right] are trying to seduce those that are the future fighters,” he told the Post. “This is not about adults who will commit attacks within a day or two of seeing such content, but more about recruiting and seducing the next generation of far-right extremists. The seduction is a gradual and sophisticated process, and all those extremists we find today as adults were exposed to extremist content online.”
SEARCHING THROUGH TikTok was no easy task.
“[The study] was one of the most complicated and difficult in my 22 years of studying online content, even more than studying the Dark Web,” Weimann told the Post.
“This is because of how well concealed it is on TikTok. They [the users] know that keywords like ‘Hitler’ or ‘Nazi’ are too easily detected, so they use a combination of symbols and hashtags – SSHitler, for instance. In addition, we couldn’t just stick to videos and texts: We had to analyze symbols, too. And some of them are more obscure.
“They won’t just stick to using a swastika. They could use the ‘sonnenrad,’ the black sun, as well,” Weimann said. “There could be someone wearing a sonnenrad T-shirt in the video, rather than an easily identifiable symbol in a thumbnail. This is not easy to track, because no computerized system will detect it. And there were thousands of videos to go through.”
 
The study found that a majority of the far-right hate speech on the app was related to antisemitism and Holocaust denial. This included videos of Nazi rallies with a variety of antisemitic messages.
 
Other videos were centered on racism and white supremacy, including a video of a young boy saying: “White people need to start owning some black people, so take your black friends and sell them.” Another video turns Tarrant’s attack into the intro for a fictional video game called Brenton Tarrant’s Extreme Mosque Shooter, with “start” and “load game” options alongside a picture of the Christchurch shooter.
Another trend the study found was that many accounts were named after far-right attackers and slogans. This includes accounts named after individuals such as Tarrant or organizations such as the Ku Klux Klan – and accounts using codes, such as the number 88, a white supremacist numerical code for “Heil Hitler.”
One way to trace the spread of hate on the app is by looking at the number of results and views associated with a particular hashtag. For example, searching “AdolfHitler” or “KKK” brings up the message “no results found – this phrase may be associated with hateful behaviour” – but a person can still use hashtags of the same phrases.
Among the most popular of these hateful hashtags are #hailhitler, which has over 3,300 views; #boogaloo – referencing a future civil war – which has 12,800 views; and #whitegenocide – a white supremacist conspiracy theory – which has 61,500 views.
ALL OF THIS is made worse by the fact that such content on the app is almost entirely unregulated.
“Unlike most other online platforms, which are, in a way, Western and owned by Western companies, TikTok is a Chinese company, so it’s harder to hold it responsible for regulating its content,” Weimann explained. “They aren’t pressured the same way Americans can pressure Facebook to moderate content.”
Though TikTok's Terms of Service state that users may not upload any content that is inflammatory, offensive, hateful, discriminatory, racist, sexist, antagonistic or criminal, the app has yet to enforce this.
And yet, it isn’t as if TikTok is incapable of regulating the content on the platform.
“The company is very careful about any content concerning China,” he said. “If you upload any content whatsoever that is against the Chinese government, it will be taken down – they are very careful about this. But they aren’t careful about anything else.”
In order to pressure the platform into regulating itself, Weimann suggests a combination of political and economic pressure. Though political pressure against China is difficult, economic pressure has precedent: in 2019, TikTok was forced to pay a $5.7 million fine to the US government for illegally collecting personal information from children under the age of 13.
This is especially notable given the app’s security flaws. The study cites a 2020 research report by Israeli cybersecurity firm Check Point, which revealed that hackers could exploit flaws in the app to send malicious messages, take over accounts and access private videos and information.
This combination of political and economic pressure has worked before. One example is from 2018, when the social media platform Tumblr banned adult content because of a combination of factors, including being unappealing for advertisers; a US federal law making websites liable for knowingly assisting or facilitating illegal sex trafficking; and restrictions on content put in place by Apple, according to a Wired report at the time.
Weimann agrees that something like this could happen again. But the first and most important step in getting TikTok to self-regulate is to raise awareness.
“We need to create awareness about the dangers of TikTok, but that isn’t easy because most people think it’s just a clean app for kids,” he told the Post. “So the first step is to create global awareness. TikTok is not that clean.
“After that, solutions will come by themselves. But one measure will not be enough: Multiple measures will be needed. And if we combine these measures, we can make TikTok improve its online content.”