Can new regulations in the EU, US help combat cyber terrorism?

Around 700 new pieces of ISIS propaganda were disseminated online in January 2018 alone.

A man holds a laptop computer as cyber code is projected on him (photo credit: KACPER PEMPEL/REUTERS)
There were two major developments this week in the EU's and US's ongoing battle against terrorists' use of social media, one of which, unexpectedly, also touches on weapons of mass destruction.
The first was the European Commission’s release on Wednesday of proposed new binding rules that would require social media platforms to remove illegal terrorist content within an hour of national authorities flagging it.
This comes after Germany successfully implemented a similar one-hour rule earlier in 2018.
Platforms that fail to comply could be fined up to 4% of global annual revenue, which for the largest companies would amount to billions of dollars.
That is the same big-stick penalty the EU unsheathed in May to enforce its new privacy rules, the GDPR, on social media companies.
Twitter, Google and Facebook have been working voluntarily with the EU on the issue in recent years.
However, the commission's move toward binding rules, after multiple rounds of escalating non-binding guidelines, signals that it believes more needs to be done and that a blunt stick is required.
Around 700 new pieces of ISIS propaganda were disseminated online in January 2018 alone, said the commission.
Moreover, online platforms will need to keep direct lines open around the clock between law enforcement and their decision-makers so that they can respond quickly.
Social media platforms are also being encouraged to deploy more automated artificial-intelligence algorithms to identify and delete suspicious users.
Of course, the real test will be whether the fines will be enforced.
To date, Germany has not issued a major fine, and the EU has not finalized major fines under its new privacy rules – though those rules allow for private lawsuits, which have been filed.
Still, the move continues the sea change in the EU's attitude toward terrorism on social media, after years of criticizing Israel for cracking down on online platforms.
Until recently, social media platforms could argue that they were not responsible for content posted by third parties.
The new rules make them responsible, and back that responsibility with a loaded (economically speaking) gun.
The second development was the publishing of a ground-breaking report by the James Martin Center for Nonproliferation Studies about “weaponizing social media to muddy the WMD waters.”
Discussion of Russia's cyber influence operation against the 2016 US election is common.
But the idea that states, most likely Russia, now regularly use social media on a grand scale to influence US political debate on issues like intervention in Syria following chemical weapons attacks is eye-opening.
The report said that "synthetic actors" (bots, trolls, and cyborgs, which masquerade under false pretenses to accomplish a political goal) on social media are "likely the main driving force behind shaping the character of the counter-narrative discussion surrounding the use of chemical weapons in Syria."
Analyzing the tradecraft and possible effects of disinformation produced by suspected synthetic actors on Twitter concerning chemical weapons use in Syria, the report found that a staggering 16% to 20% of all Twitter counter-narrative messaging is likely disseminated by Russia.
A network of highly message-disciplined synthetic actors was activated following the April 7, 2018, chemical attack in Douma, Syria.
After the messaging attempt "failed" and the Trump administration intervened anyway, many counter-narrative accounts went inactive, which bolstered the report's ability to identify them as synthetic actors.
Fascinatingly, the most common procedural tactic employed by these users was threatening not to vote for Trump again.
The idea was that this tactic would disarm Trump supporters, making them open to a voice that seemed like a conflicted supporter much like themselves.
The report listed four other main tactics: 1) defaming Western institutions to discredit their claims about Syrian use of chemical weapons; 2) blaming jihadists for the attacks; 3) hinting that a destructive (often nuclear) escalation would result from a Western retaliatory strike; and 4) preying on Western religious and cultural sympathies for supposedly besieged Syrian Christians and the secular Bashar Assad regime.
Probably the most important recommendation from the report is that “social networks… take care to scrutinize accounts that were created immediately after controversial events if the accounts only engage in discussion about that event,” such as Syria’s use of chemical weapons.
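For readers curious how such a screening rule might look in practice, here is a minimal sketch in Python. The account fields, the 48-hour window and the single-topic test are all illustrative assumptions for this sketch, not anything specified in the report or used by any platform.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative account record; a real platform would pull these
# fields from its own metadata stores (assumed schema).
@dataclass
class Account:
    handle: str
    created_at: datetime
    topics: set          # topics the account has posted about

def flag_event_born_accounts(accounts, event_topic, event_time,
                             window_hours=48):
    """Flag accounts created shortly after a controversial event
    that discuss only that event - the pattern the report says
    deserves extra scrutiny. The window is an assumption."""
    window = timedelta(hours=window_hours)
    flagged = []
    for acct in accounts:
        created_after_event = (
            event_time <= acct.created_at <= event_time + window
        )
        single_topic = acct.topics == {event_topic}
        if created_after_event and single_topic:
            flagged.append(acct.handle)
    return flagged

# Hypothetical example: accounts appearing right after the
# April 7, 2018 Douma attack and posting only about it.
event_time = datetime(2018, 4, 7)
accounts = [
    Account("new_voice_01", datetime(2018, 4, 8), {"douma"}),
    Account("long_timer", datetime(2015, 2, 1), {"douma", "sports"}),
]
print(flag_event_born_accounts(accounts, "douma", event_time))
# -> ['new_voice_01']
```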
The simplest step social networks can take, said the report, is to ban scripted bot accounts, which can be distinguished from organic accounts "in that their fully automated content typically consists of large, abnormal degrees of repetition." Twitter can likely detect this using metadata analysis.
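Again as a hedged illustration only: the "abnormal degrees of repetition" signal could in principle be approximated by measuring what fraction of an account's posts are near-duplicates. The crude normalization and the 0.5 threshold below are assumptions for the sketch, not Twitter's actual method.

```python
from collections import Counter

def repetition_score(posts):
    """Fraction of posts that duplicate an earlier post, after a
    crude normalization (lowercase, collapsed whitespace).
    Scripted bots tend to score far higher than organic accounts."""
    normalized = [" ".join(p.lower().split()) for p in posts]
    counts = Counter(normalized)
    duplicates = sum(c - 1 for c in counts.values())
    return duplicates / len(posts) if posts else 0.0

def looks_scripted(posts, threshold=0.5):
    # Threshold is an illustrative assumption; a real system would
    # calibrate it against labeled bot and organic accounts.
    return repetition_score(posts) > threshold

# Hypothetical data: a scripted account repeats itself heavily.
bot_posts = ["No strikes on Syria!"] * 8 + ["Assad is innocent"] * 2
human_posts = ["Watching the news", "Dinner was great",
               "The Syria debate is wild"]
print(looks_scripted(bot_posts))    # True  (score 0.8)
print(looks_scripted(human_posts))  # False (score 0.0)
```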
The report said that Twitter has already started this process – the company claims to have banned 70 million accounts in the first half of 2018 – but that it has missed active bots.
The report also encouraged Twitter to institute a verification system, which could alert other users to questionable accounts.
Unlike the EU’s rules, however, these are just suggestions from a think tank.
Social media giants have shown that, at the end of the day, they are ready to tackle terrorist and state manipulation of their platforms only up to a point.
Getting beyond that point, as in most areas of business, requires a stick.
Maybe curtailing influence operations will be included in a later round of rules.