The increasing use of social media for incitement requires Internet companies to build conflict-mitigation features into their platforms.
The current surge in violence in the Israeli-Palestinian conflict has been characterized by massive use of social media. Facebook, YouTube, WhatsApp and Twitter all serve as platforms to deliver messages of hate, call for violent action and share gruesome pictures or videos of recent attacks. Smartphones capture pictures and videos that are instantly shared in WhatsApp groups and Facebook posts, and a few hours later reach the headlines of mainstream media. The development of technology, the human thirst for reality TV and media greed help perpetuate an intractable conflict and keep it in a constantly growing vicious circle.
The central role that social media plays in the current wave of violence is demonstrated by political statements from MKs and even Prime Minister Benjamin Netanyahu, who said that “what we see here is a linkage between fundamentalist Islam and [the] Internet, between [former al-Qaida leader Osama] bin-Laden and [Facebook CEO and co-founder Mark] Zuckerberg.” Moreover, the Knesset held a special day of discussion on “Fighting cyber-bullying and online violence in the virtual space and social media.” Simon Milner – Facebook’s policy director for the UK and MENA region – attended the discussion and presented Facebook’s policies and views on the situation: “Nothing is more important than the safety of people using Facebook...We also want to encourage you to use our reporting tools if you see content that doesn’t belong on Facebook...When content is reported to us we investigate to see if it breaches our standards and if it does, we take it down.”
It is important to note that social media is not one of the causes of the Israeli-Palestinian conflict, but its features make it an effective tactical tool that can be used in any conflict. Its applications expose more people to more information in almost real time. There is no censorship or professional interpretation, leaving the “truth” to be decided according to one’s nationalistic bent and emotional intelligence. While each side believes that using social media will help its narrative win out, the net effect on the conflict is negative: the conflict is exacerbated as hate, prejudice and violence increase.
Theoretically, the same social media applications can be used to spread messages of peace and reconciliation, thus promoting security and stability.
In reality, however, conflict-mitigating content on social media is scarcer and weaker. In times of harsh conflict, group solidarity dominates. Those who promote peaceful messages are usually dismissed as traitors or as unrealistic, since the rosy picture they portray irritates the majority that is “living the true struggle.”
Nevertheless, social media can be an effective tool in conflict resolution when content is monitored and users take part in a structured, facilitated process involving smaller groups. Since it is impossible to monitor all the content uploaded to social media, and the only measures available for dealing with destructive content are users “unfriending,” reporting or deleting it, it is up to the social media companies themselves to develop and install conflict-mitigation features in their applications.
Such features should be developed by technology and conflict professionals and be based on dispute-resolution principles adapted to the technology. Users should first be made to understand the damage that posting or sharing violent content causes, and then be prevented from doing so via warnings, blocking of the content and limits on access to their accounts.
Since “reporting” dangerous content as a monitoring tool has many flaws, such as its lack of objectivity, there is a need for a tool that identifies content immediately and blocks it from being posted and shared. The Internet giants should invest in developing algorithms that identify certain keywords or violent footage and manage their exposure – in the same way they control pornographic content.
Following the attacks in France and California, further pressure has been put on Facebook, Twitter and Google by the European Commission, which demands stricter action on what it defines as “online terrorism incitement and hate speech.” The Internet giants have already shortened response times between when harmful content is reported and when it is removed, and increased the number of “trusted flaggers” who report it. Nevertheless, these actions only marginally reduce the potential damage. A real solution must be technologically embedded within the platforms themselves.
A good example of built-in conflict-mitigation software is eBay’s Resolution Center, which was created to manage purchase and transaction disputes. eBay’s decision to develop this feature is rooted in its own interests: recognizing that buying and selling goods online will sometimes end in disputes, it is in the company’s interest to have a mechanism to resolve them. Since Facebook, Twitter and other social media applications still do not have similarly effective mechanisms, the implication is that they do not yet see a business interest in creating them.
If moral considerations do not drive them, it will be up to governments to use legal measures to ensure that social media is not used to enhance and perpetuate conflicts.

The author is a conflict management expert and consultant. His focus is on using technology as a tool in conflict management.