Concerns are rising over lone-wolf attackers and terrorists using livestreaming and social media to post antisemitic manifestos, messages and even footage of their attacks. But are social media and technology to blame?
Following Wednesday’s attack outside a synagogue in Halle, Germany, which left two people dead, the Simon Wiesenthal Center called on “Internet giants Amazon and Facebook to immediately eliminate the livestreaming” option on both social media platforms.
The 27-year-old German attacker used Amazon’s livestreaming platform Twitch to film his shooting rampage, which included an antisemitic message to the public, his attempts to enter the Halle synagogue, and the shooting of his victims. The video lasted about 35 minutes.
According to the BBC, the video remained online for 30 minutes after it was livestreamed and garnered some 2,200 views.
Twitch said that “Any act of violence is taken extremely seriously,” and that it has a “zero tolerance policy” for such content. “We worked with urgency to remove this content and will permanently suspend any accounts found to be posting or reposting content of this abhorrent act,” Twitch said, adding that the account used by the shooter to livestream his attack had been created two months prior to the incident.
The Guardian reported that Twitch quickly removed the video, but copies had already been downloaded and shared elsewhere on the Internet.
The Halle attack echoed a mass shooting earlier this year at a mosque in Christchurch, New Zealand, which the assailant also broadcast live using Facebook.
“Terrorism and hate attacks are now featuring ‘live streaming’ as a key part of their strategies to spread fear and recruit,” said Rabbi Abraham Cooper, associate dean of the Wiesenthal Center and director of its Global Social Action Agenda. “That the world’s most powerful social media platforms, along with Twitter and others allow for the hijacking of services by terrorists and bigots is intolerable. What else needs to happen before these powerful marketing services institute time-delay and other technical means to stop this burgeoning deadly social media activity?
“Clearly whatever protocols – if any – are in place, these giants, who are an integral part of daily life of billions of people, must address this challenge, or Washington and other governments will,” he warned.
The Global Internet Forum to Counter Terrorism (GIFCT) activated what it calls a Content Incident Protocol (CIP) to enable member companies to respond more quickly to active events.
“We are actively removing perpetrator-created content related to the attack, hashing it and placing the hashes into a shared database for Hash Sharing Consortium members, to prevent its viral spread across our services,” GIFCT said in a statement. “We are in close contact with each other and remain committed to disrupting the online spread of violent and extremist content.”
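In outline, the hash-sharing scheme GIFCT describes works like a shared blocklist of content fingerprints: one member hashes the perpetrator-created content, and every member can then block re-uploads whose fingerprint matches. The sketch below is a simplification under stated assumptions; the consortium actually uses perceptual hashes that survive re-encoding and minor edits, whereas an exact SHA-256 hash stands in here, and all function and variable names are illustrative, not GIFCT's real API.

```python
# Illustrative sketch only: GIFCT's Hash Sharing Consortium uses perceptual
# hashes (which match re-encoded or slightly altered copies), not exact
# cryptographic hashes. SHA-256 stands in here purely to show the workflow.
import hashlib

shared_hash_db = set()  # stands in for the consortium's shared database


def hash_content(data: bytes) -> str:
    """Fingerprint a piece of content (simplified: exact SHA-256)."""
    return hashlib.sha256(data).hexdigest()


def flag_known_content(data: bytes, db: set) -> None:
    """A member platform adds perpetrator-created content to the shared DB."""
    db.add(hash_content(data))


def should_block_upload(data: bytes, db: set) -> bool:
    """At upload time, each member checks incoming content against the DB."""
    return hash_content(data) in db


# One platform flags the attack video; every member can now block re-uploads.
attack_video = b"<bytes of perpetrator-created video>"
flag_known_content(attack_video, shared_hash_db)
print(should_block_upload(attack_video, shared_hash_db))            # True
print(should_block_upload(b"unrelated video", shared_hash_db))      # False
```

The design choice the statement alludes to is that sharing only hashes, not the content itself, lets companies cooperate without redistributing the violent material.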
However, Dr. Tehilla Shwartz Altshuler, who heads up the Media Reform Program and Democracy in the Information Age at the Israel Democracy Institute, told The Jerusalem Post: “It’s a mistake to blame technology as the source of our problems.”
Social media platforms cannot vet every piece of content as it is being uploaded; if they did, the platforms would not look the way they do today, she said.
“At every single moment, hundreds of millions of moments of video are being uploaded,” she said. “The thing is that this is Facebook live, and people usually upload uninteresting or non-newsworthy content in terms of public interest.”
Shwartz Altshuler said that on occasion such videos, such as those of terrorist attacks, can be highly damaging to public safety, and when this happens the content also becomes of interest to the public.
“I wouldn’t want social media platforms to censor any piece of content before it’s uploaded to the network, but once something like that is posted, is becoming a news item and is going viral,” it is the social media platform’s responsibility to “put the foot on the pedal of virality,” said Shwartz Altshuler, meaning “they need to stop the content from being spread, then take it off the platform, and to do that very fast.”
She stressed that 24 hours, which is the amount of time that it sometimes takes for such content to be removed, is too long.
“It needs to be done within an hour,” Shwartz Altshuler said. “You can tell me that an hour is an eternity on social media, but what we know is that the main thing is to clean all the mainstream” Internet and platforms as fast as possible.
“We can’t clean the whole Internet,” she emphasized. “The expectation on the big platforms is that with their power, comes their responsibility.”
Shwartz Altshuler explained that these networks are monitored all the time.
“You can see the outbreak of a [viral] video, they can see it, they know in real time from regular media that something bad is taking place, so they can take responsibility and wake up much, much faster than they do today.”
For Shwartz Altshuler, there is another issue that needs to be taken into consideration, beyond the virality of the video itself.
“Facebook knows a lot about us, they sell our psychographic information to advertisers – they know if we have a lack of self-esteem, when we are tired, when we are sad, or sometimes our sexual habits – they know a lot about us,” she said. “My question is whether Facebook has become the biggest tool for crime prediction in the world and what do they do with it. Because if they can know that someone has the tendency to commit suicide, how come they don’t know that someone has the tendency to commit a terrorist attack? Or maybe they do?”