Facebook’s design makes it unable to control vaccine misinformation

The world’s largest social media platform failed to eliminate falsehoods during the COVID-19 pandemic.

A woman holds a smartphone with the Meta logo in front of a display of Facebook’s new rebrand logo, Meta, in this illustration picture taken October 28, 2021.
(photo credit: REUTERS/DADO RUVIC/ILLUSTRATION/FILE PHOTO)

Mark Zuckerberg – beware! Researchers who have examined the spread of misinformation on your social media platform, Facebook, during the COVID-19 pandemic have found that its design makes it unable to control falsehoods.

New research published today in the prestigious journal Science Advances suggests that the COVID-19 vaccine misinformation policies of Facebook – still the world’s largest social media platform – were not effective in combating misinformation. The study, led by researchers at the George Washington University (GW) in Washington, D.C., found that Facebook’s efforts were undermined by the core design features of the platform itself.

The study was entitled “The efficacy of Facebook’s vaccine misinformation policies and architecture during the COVID-19 pandemic.”

Online misinformation is dangerous because it erodes trust in scientific evidence and medical recommendations, undermines public health, and may even trigger civil unrest.

“There is significant attention given to social media platforms and artificial intelligence governance today. However, this discussion largely focuses on either content or algorithms. To effectively tackle misinformation and other online harms, we need to move beyond content and algorithms to also focus on design and architecture,” said Prof. David Broniatowski of GW’s Department of Engineering Management and Systems Engineering. “Our results show that removing content or changing algorithms can be ineffective if it doesn’t change what the platform is designed to do – enabling community members to connect over common interests – in this case, vaccine hesitancy – and find information that they are motivated to seek out.”

Vials with Pfizer-BioNTech and Moderna coronavirus disease (COVID-19) vaccine labels are seen in this illustration picture taken March 19, 2021. (credit: REUTERS/DADO RUVIC/ILLUSTRATION/FILE PHOTO)

Facebook is designed to build communities around the things people care about. To do so, it uses several architectural features, including fan pages that promote brands and community celebrities, enabling a relatively small group of influencers to reach large audiences. These influencers can then form groups explicitly designed to build communities in which members can exchange information, including how to access misinformation or other compelling content off the platform. These group members, and especially group administrators (who are often page content creators), can then make use of Facebook’s news feed algorithms to make sure that this information is available to those who care to see it.

Efforts to remove misinformation do little to stop it

The researchers found that while Facebook made a significant effort to remove a lot of anti-vaccine content during the COVID-19 pandemic, overall engagement with anti-vaccine content did not decrease beyond prior trends; in some cases, it even increased. 

This finding – that people were equally likely to spread vaccine misinformation before and after Facebook’s extensive removal efforts – is incredibly worrisome, said public health Prof. Lorien Abroms of GW’s Milken Institute School of Public Health, who contributed to the article. “It shows the difficulty that we face as a society in removing health misinformation from public spaces.”

In the content that was not removed, there was an increase in links to off-platform, low-credibility sites and links to misinformation on “alternative” social media platforms like Gab and Rumble, especially in anti-vaccine groups, she added. “In addition, remaining anti-vaccine content on Facebook became more – not less – misinformative, containing sensationalist false claims about vaccine side effects that were often too new to be fact-checked in real-time.”

There was also “collateral damage,” stressed the researchers, as pro-vaccine content may have also been removed as a result of the platform’s policies and, overall, vaccine-related content became more politically polarized. In addition, anti-vaccine content producers used the platform more effectively than pro-vaccine content producers. Although both had large page networks, anti-vaccine content producers more effectively coordinated content delivery across pages, groups, and users' news feeds. 

Even when Facebook tweaked its algorithms and removed content and accounts to combat vaccine misinformation, the researchers say the architecture of the platform pushed back. 

“Think of Facebook’s architecture like a building. An airport is designed to get people to and from gates easily and securely, and a stadium is designed to safely gather a group of people for a show,” Broniatowski said. “If these buildings were not designed to balance travel and entertainment against safety and security, people might routinely get hurt. Now think of Facebook’s structure in the same way – it is designed to allow motivated people to build communities and easily exchange information around any topic.”

He noted that Facebook’s architecture sets the policy up for failure. “Individuals who are highly motivated to find and share anti-vaccine content are just using the system the way it's designed to be used, which makes it hard to balance those behaviors against public health or other public safety concerns. You have to change the architecture if you want to create that balance.” 

The study suggested that social-media platform designers could promote public health and safety by working collaboratively to develop a set of “building codes” for their platforms that are informed by scientific evidence to reduce online harms. 

“Building architects have to balance a building’s design goals against compliance with rules to protect the people who use them,” he explained. “They have to be designed to promote public safety by complying with fire, flood, pollution, and earthquake codes. They need to promote security by incorporating design features such as well-marked, accessible entrances, exits, and turnstiles to prevent vandalism, panic, terrorism, and riots. They have to promote public health by complying with codes for plumbing, sanitation, and ventilation, and they have to be designed to respect municipal codes such as noise and other nuisance ordinances. These rules are usually developed by a partnership between industry, government, and community organizations, and they are informed by solid science and practice. Governance strategies could facilitate these partnerships and support the science.”

According to the researchers, their study is the first and only scientific evaluation of the efficacy of the world’s largest social media platform’s attempt to systematically remove misinformation and misinformative accounts.