Spread of disinformation on social media should concern democratic world

Likud’s digital campaign, which included intensive use of Facebook Live, a Facebook Messenger chatbot and aggressive tweeting, failed to bring it victory.

With the second round of the elections now finally behind us, we can breathe a sigh of relief that the campaign did not feature any major disinformation efforts that harmed the electoral process or skewed the results. None of the conspiracy theories aired – ranging from what the Iranians did or did not manage to acquire from Benny Gantz’s phone, whether Ehud Barak did or did not fly on Jeffrey Epstein’s private plane in the company of underage girls, or whether Avigdor Liberman is a KGB agent, to accusations about the elections being “stolen by the Arabs” – really gained any viral success on social networks.
Moreover, the Likud’s digital campaign, which included intensive use of Facebook Live, a Facebook Messenger chatbot and aggressive tweeting, failed to bring it victory.
Despite this, the spread of disinformation over social networks should now be an issue of concern to the entire democratic world, Israel included. While this phenomenon would seem to no longer need any introduction, we will define disinformation here as the presentation of false or semi-false claims in a format that resembles content produced by established media entities.
Disinformation is produced and distributed by a broad range of organizations, some of which do so deliberately and for a variety of reasons: for example, to influence election campaigns or social processes, to create entertainment, or to gain revenues via clickbait provided by such content. We should also mention that the term “fake news” has become blurry, in that it is now used to describe not only the deliberate production of disinformation, but also any information that appears in the mainstream media with which someone disagrees (for example, for some people this would include anything not broadcast by Channel 20 in Israel or Fox News in the United States).
THERE ARE four main features of disinformation that should concern us: its scope; the damage it inflicts to our thinking; its financial aspect; and its political aspect.
The first cause for concern is the extent of our use of social networks as a source for political information, and the enormous volume of unfounded information to which they expose us. Studies have shown that fake news spreads faster and more effectively across social networks than real news. Indeed, we are familiar with many examples of fake news making the rounds on social media: from the stories of the Washington pizzeria before the 2016 US elections to climate change denial and anti-vaxxer campaigns.
 If this were a form of media with only marginal use, there would be no issue. But in a survey we conducted in August at the Israel Democracy Institute among a representative sample of the population – Jews and Arabs of all socioeconomic strata, and from all sides of the political map – we asked respondents what were the sources of most of their information on political issues. As might be expected, news websites were the leading source of news, followed by television. The surprising finding was that social networks came in next, well ahead of radio and print newspapers: 76% reported that they receive news content over social networks at least once a day – and 40% of Jews and 66% of Arabs cited this as one of their preferred media for accessing news.
The second source of concern is the cognitive context: disinformation is realistic, inasmuch as it portrays real people acting in line with their genuine motivations in circumstances that closely resemble the real world. It is also designed to be emotionally arousing, making it more memorable and more likely to be transmitted and repeated. If it is also absorbing and engaging, then it is especially likely to lead us to acquire false beliefs. In terms of our thinking processes, it is much easier to believe what we read than to disbelieve it, and mere exposure to something can make it seem familiar and correct. Moreover, people believe family and friends more implicitly than strangers, so misinformation that is spread through social media has a better chance of working its way into our beliefs.
Remarkably, while 58% of the Israeli public says it believes that the traditional media (press, radio, and television) contain more real news than fake news, only about a quarter (22% of Jews and 29% of Arabs) think the same is true of social media. In other words, there is a huge gap between public trust in social networks and in traditional media, yet the former is the preferred source of news for a large segment of the population. In the history of media, there has never been a situation in which such a large portion of the public declares that it uses a news source in which it has so little faith.

ON THE face of it, this is an encouraging finding, in that at least the public is aware that social media is awash with fake news. But the real story is that so many of us continue to rely on a news source that we know is unreliable. Why is this happening? The main reason is that we believe that we are able to discern between real news and fake news, using our own judgment and intelligence. We tell ourselves that we are skilled enough to identify fake news, that we are able to correctly analyze media by the content of stories and by comparisons with our existing knowledge.
When we asked respondents, “How do you distinguish between real news and fake news?” 48% of Jews (with almost no difference by political affiliation) responded, “according to the references, sources, explanations and arguments given in the story”; 39% chose “according to the extent to which the story is aligned with previous knowledge I have”; 28% selected “according to the number of experts who took the story seriously after it was published”; and only around 4% said that they decide based on the number of shares and likes the story has garnered on social networks.
This flattering self-assessment, which can to some extent be classified as wishful thinking, contradicts various studies showing that people are in fact unable to identify fake news and are unable to avoid passing it on. Our survey found a similar pattern when we asked respondents if they can distinguish between paid-for promotional content and news content online. Most responded that they can, despite the findings of studies conducted in Israel which reveal that people are unable to identify promotional content in journalistic articles.
In 2017, the psychologist Neil Levy published an article titled “The Bad News about Fake News,” in which he explained that “fake news is more pernicious than most of us realize, leaving long-lasting traces on our beliefs and our behavior, even if we know it is fake when we consume it, or after the information it contains is corrected.” He objects to what he calls a naïve attitude, in which people assume that careful consumption and fact-checking can do away with the problem of fake news for responsible individuals. He notes that on the contrary, sophisticated consumers are very much at risk.
DUE TO the psychological phenomenon known as belief perseverance, even when people remember that a claim has subsequently been retracted, they may continue to cite the original retracted claim in explaining events. This is why, Levy claims, fake news that has been corrected by fact-checking sites has not been disarmed; rather, it continues to have pernicious effects. For example, a research study conducted by Nyhan, Reifler, Richey & Freed in 2014 found that correcting the myth that vaccines cause autism was effective in terms of altering beliefs, but in practice, among those parents who were initially in favor of vaccines, it actually resulted in a drop in their intention to have their children vaccinated.
That is, achieving change on the cognitive level does not necessarily achieve the desired change on the behavioral level, and may even be counterproductive.
Overall, 40% of the public use social networks as a news source, even while expressing a strong lack of trust in their reliability; that is, they understand the limitations that arise from having content that is not edited or reviewed. On the other hand, the public mostly trusts itself to be able to identify misleading information, an assumption that research studies have shown to be unfounded. This situation means that the public is exceedingly vulnerable, and the possibilities for misleading and manipulating them are greater than anything previously known.
Of course, we also need to talk about the financial aspect. Disinformation does not exist in a vacuum; it is driven by the business model of social media platforms (Facebook, Twitter, YouTube, and so on), and their desire to generate traffic and user engagement to increase revenues. Their algorithms are entirely designed for this purpose, as is their practice of collecting users’ private information and targeting them with messages. It is this business model that also drives their relationship with government, particularly the pressure they exert on governments to continue to exempt them from responsibility for content distributed via their networks.
Should they desire to do so, these platforms could outlaw or prevent the dissemination of fake content, without needing any particularly sophisticated software. Certainly they could put the brakes on such content going viral. A small example from the Israeli elections: it took Facebook 24 hours to suspend the Likud’s Messenger chatbot, after it posted that Israeli Arabs “want to destroy us all – men, women and children,” and even then, it enforced only a temporary shutdown. This is a perfect illustration of the larger and more troubling state of affairs, in which the platforms are not truly interested in preventing the distribution of fake news, since they profit from it.
On the face of it, if this is such a burning social issue, we might have expected to see the emergence of a commercial technological ecosystem offering solutions. This is what Inbal Orpaz of the INSS set out to examine in a recent survey of the field. There are some companies that provide ranking mechanisms for content as a way of assessing its trustworthiness; applications to ensure exposure to diverse perspectives and information sources; fact-checking initiatives based on either human oversight (such as the “Mashrokit” [“Whistler”] section in Israel’s Globes financial newspaper) or automatic review (such as VineSight); and artificial-intelligence-based enterprises (such as AdVerif.ai’s FakeRank) that allow advertisers to identify fake-news sites so as to avoid advertising on them. The newest and perhaps most challenging incarnation of fake news – deep-fake videos – is also beginning to attract technological countermeasures such as Israel’s Serelay, which enables anyone who has taken a picture or video with their phone to prove to the recipient that it is real.
HOWEVER, THE truth is that products such as these are for the most part not designed to deal with disinformation over social networks. Instead, they will serve commercial organizations that fear that their operations will be damaged by fake news (consider the dangers of reputational damage to corporations, the possibility of bringing down a bank by fostering hysteria among its clients, the need for identification on money transfer applications, and so on), as well as state security and intelligence organizations.
Could they be adopted by the traditional media, to explain to readers and viewers that what they have seen on social media is incorrect? Unfortunately, the traditional media is already collapsing due to the lack of a viable business model, and perhaps they benefit from fake news in exactly the same way that the social networks do – gaining maximum attention with minimum responsibility. Looking again at the election campaign, it is reasonable to assume that the influence of any fake news on public consciousness was due in equal measure to coverage in the traditional media and in digital media.
At this point, we need to address the behavior and influence of the traditional media, its social contract with the public, and the business models on which it is founded. We also need to discuss the links between traditional media and social networks: when traditional media reports what is happening on social networks as a substitute for creating a meaningful and professional agenda of its own, then it is merely undermining its own viability. And when traditional media cooperates in spreading rumors, sharing intrusive private information, or other digital efforts at manipulation, it is accelerating the disintegration of liberal democracy, which is the media’s primary source of protection. Is there any chance of all this being addressed? It seems unlikely.
So far, we have seen that it is not certain that technology can help; it is also fairly certain that public education cannot, since cognitive sophistication may not offer protection against fake news – and neither is the traditional media a source of much hope. Is our only recourse to turn to politicians and decision-makers, and ask them to force through change by means of regulation?
For example, they could revoke Section 230 of the US Communications Decency Act, which grants social media platforms freedom from liability for content that they distribute; break up the giant social media companies, which have amassed unprecedented power and become too large to be restrained; impose better protections of the right to privacy, to prevent fake news from being targeted specifically at those who are most susceptible to it; enable the courts to issue speedy writs requiring the removal of unlawful content, particularly during elections; provide subsidies and benefits for companies that develop scalable technologies for identifying fake news and verifying information; and help the free press transition to new and more viable economic operating models.
But is any of this realistic? Will politicians really agree to deny themselves the opportunity to lie over social networks, or at least to use fake news as a weapon against their opponents? Do they even have the power to stand up to the giant social media platforms? Should we not be deterred by concerns about the government becoming too involved in the question of what content may and may not be distributed?
Disinformation is not just one of the greatest challenges of our generation: It is an entire ecosystem that influences our opinions and attitudes over the long term, and damages our self-confidence about being able to establish what is true. But more than anything, it tells the story of our era – in which a technological revolution promised to bring greater openness, increase diversity of opinions, and flatten hierarchies – and yet within half a generation, became a powerful tool wielded against the public by corporations and other entities driven by greed and a lust for power.
Thus, we need to take a fresh look at our constitutional beliefs, which have been around since the time of John Stuart Mill and his concept of the marketplace of ideas. The first of these is that the proliferation of ideas and opinions will ultimately lead to a greater understanding of the truth. Is this indeed the case in a world in which the economy of technology and algorithms results in false information gaining greater and greater dissemination? The second is our assumption as a society that we should believe everything unless it is proven false. Do we really want to live in a society in which nothing is considered true unless it has been proven to be so? Only when we can honestly answer these questions will we be able to get our act together and address the challenge of our times.

The writer is a senior fellow at the Israel Democracy Institute.