The pandemic has given rise to social media accounts operated by malicious actors who aim to sow misinformation about COVID-19. As vaccination campaigns get underway, these accounts threaten to stymie the push toward herd immunity around the world. Misinformation about masks and vaccines can contribute, and already has contributed, to low adoption rates and increased disease transmission, making it harder to prevent future outbreaks.
While several studies have been published on the role disinformation campaigns have played in shaping narratives during the pandemic, new research this month from collaborators at Indiana University and the Politecnico di Milano in Italy, as well as a German team from the University of Duisburg-Essen and the University of Bremen, specifically investigates the scope of automated bots’ influence. The studies identified dozens of bots on Twitter and Facebook, particularly within communities where “low-credibility” sources and “suspicious” videos proliferate. But counterintuitively, neither study found evidence that bots were a stronger driver of misinformation on social media than manual, human-guided efforts.
The Indiana University- and Politecnico-affiliated coauthors of the first study, titled “The COVID-19 Infodemic: Twitter versus Facebook,” analyzed the prevalence and spread of links to conspiracy theories, falsehoods, and general disinformation. To do so, they extracted links from social media posts that included COVID-19-related keywords like “coronavirus,” “covid,” and “sars,” noting links with low-credibility content by matching them to Media Bias/Fact Check’s database of low-credibility websites and flagging YouTube videos as suspicious if they’d been banned by the site. Media Bias/Fact Check, which was founded in 2015, is a crowdsourced effort to rate sources based on accuracy and perceived bias.
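To make that filtering step concrete, here’s a minimal sketch of such a pipeline in Python. It’s an illustration under assumptions, not the papers’ actual code: the keyword list follows the article, while the domain list, banned-video IDs, and function names are invented placeholders.

```python
import re
from urllib.parse import urlparse

# Hypothetical stand-ins: real low-credibility domains would come from
# Media Bias/Fact Check's listings; banned-video IDs from YouTube itself.
COVID_KEYWORDS = {"coronavirus", "covid", "sars"}
LOW_CREDIBILITY_DOMAINS = {"example-lowcred.com"}
BANNED_YOUTUBE_IDS = {"abc123xyz_Q"}

URL_PATTERN = re.compile(r"https?://\S+")
YOUTUBE_ID_PATTERN = re.compile(r"(?:youtu\.be/|[?&]v=)([\w-]{11})")

def is_covid_related(text: str) -> bool:
    """Mirror the studies' keyword filter on the post text."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in COVID_KEYWORDS)

def classify_links(post_text: str) -> list:
    """Return (url, label) pairs for links found in a COVID-related post."""
    if not is_covid_related(post_text):
        return []
    labeled = []
    for url in URL_PATTERN.findall(post_text):
        domain = urlparse(url).netloc.lower().removeprefix("www.")
        video = YOUTUBE_ID_PATTERN.search(url)
        if video and video.group(1) in BANNED_YOUTUBE_IDS:
            labeled.append((url, "suspicious_video"))
        elif domain in LOW_CREDIBILITY_DOMAINS:
            labeled.append((url, "low_credibility"))
        else:
            labeled.append((url, "unlabeled"))
    return labeled

print(classify_links("New covid claim: https://example-lowcred.com/story"))
# [('https://example-lowcred.com/story', 'low_credibility')]
```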
Between January 1 and October 31, the researchers canvassed over 53 million tweets and more than 37 million Facebook posts across 140,000 pages and groups. They identified close to a million low-credibility links shared on both Facebook and Twitter, but bots alone weren’t responsible for spreading misinformation. Rather, aside from the first few months of the pandemic, the primary sources of low-credibility information tended to be high-profile, official, and verified accounts, according to the coauthors. Verified accounts generated almost 40% of retweets on Twitter and almost 70% of reshares on Facebook.
“We … find coordination among accounts spreading [misinformation] content on both platforms, including many controlled by influential organizations,” the researchers wrote. “Since automated accounts do not appear to play a strong role in amplifying content, these results indicate that the COVID-19 ‘infodemic’ is an overt, rather than a covert, phenomenon.”
In the second paper, titled “‘Conspiracy Machines’ — The Role of Social Bots during the COVID-19 ‘Infodemic,’” researchers affiliated with the University of Duisburg-Essen sought to determine the extent to which bots interfered with pandemic discussions on Twitter. In a sample of over 3 million tweets from more than 500,000 users, selected using hashtags and terms such as “coronavirus,” “wuhanvirus,” and “coronapocalypse,” the coauthors spotted 78 likely bot accounts that published 19,117 tweets over a 12-week period. But while many of those tweets contained misinformation or conspiracy content, they also included retweets of factual news and updates about the virus.
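For illustration, the bot-flagging step might look like the sketch below, assuming accounts have already received bot-likelihood scores from a classifier such as Botometer. The scores, tweets, and threshold here are invented, not the paper’s actual data or cutoff.

```python
from collections import Counter

# Invented bot-likelihood scores in [0, 1]; a real study would obtain
# these from a bot-detection classifier, not hardcode them.
account_scores = {"user_a": 0.92, "user_b": 0.15, "user_c": 0.88}

# Invented (account, tweet) records from the keyword-filtered sample.
tweets = [
    ("user_a", "5G towers cause covid"),
    ("user_b", "Official case counts updated"),
    ("user_c", "RT factual vaccine trial news"),
]

BOT_THRESHOLD = 0.8  # assumed cutoff; the paper's threshold may differ

likely_bots = {user for user, score in account_scores.items()
               if score >= BOT_THRESHOLD}
bot_tweet_counts = Counter(user for user, _ in tweets if user in likely_bots)

print(f"{len(likely_bots)} likely bot accounts")  # 2 likely bot accounts
print(bot_tweet_counts)  # Counter({'user_a': 1, 'user_c': 1})
```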
The studies’ results would appear to conflict with findings published in July by Indiana University’s Observatory on Social Media, which suggested that 20% to 30% of links to low-credibility domains on Twitter were being shared by bots. The coauthors of that work claimed that a portion of the accounts were sharing links drawn from the same set of websites, suggesting coordination behind the scenes.
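That coordination signal, many accounts drawing links from the same set of websites, can be illustrated with a simple set-overlap measure. The sketch below is hypothetical: the Observatory’s actual method isn’t detailed here, and the accounts, domains, and threshold are invented.

```python
from itertools import combinations

# Invented example: the set of domains each account has shared links to.
shared_domains = {
    "acct_1": {"siteA.com", "siteB.com", "siteC.com"},
    "acct_2": {"siteA.com", "siteB.com", "siteC.com"},
    "acct_3": {"news.example.org"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap between two domain sets (1.0 means identical sets)."""
    return len(a & b) / len(a | b)

SIMILARITY_THRESHOLD = 0.9  # assumed cutoff for "same set of websites"

for u, v in combinations(shared_domains, 2):
    score = jaccard(shared_domains[u], shared_domains[v])
    if score >= SIMILARITY_THRESHOLD:
        print(f"{u} and {v} share a near-identical domain set ({score:.2f})")
# acct_1 and acct_2 share a near-identical domain set (1.00)
```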
Researchers at Carnegie Mellon University also published evidence of misinformation-spreading bots on social media, supporting the Observatory on Social Media’s preliminary report. In May, they said that of over 200 million tweets discussing the virus since January, 45% were sent by likely bot accounts, many of which tweeted conspiracy theories about hospitals being filled with mannequins and links between 5G wireless towers and infections.
It’s possible the steps Twitter and Facebook took to stem COVID-19 misinformation tamped down on bot-originated spread between early this year and the fall. Twitter now applies warning labels to misleading, disputed, or unverified tweets about the coronavirus, and the company recently said it might require users to remove tweets that “advance harmful false or misleading narratives about COVID-19 vaccinations.” For its part, Facebook attaches similar labels to COVID-19 falsehoods and has pledged to remove vaccine misinformation that could cause “imminent physical harm.”
Twitter also recently announced that it will relaunch its verified accounts program, which it paused in 2017, in 2021 with changes meant to ensure greater transparency and clarity. The network also plans to create a new account type to identify accounts that are likely bots.
Between March and October, Facebook took down 12 million pieces of content on Facebook and Instagram and added fact-checking labels to another 167 million posts, the company said. In July alone, Twitter claimed it removed 14,900 tweets for COVID-19 misinformation.
There are signs that social media platforms continue to struggle to combat COVID-19 misinformation and disinformation. But the research so far paints a mixed picture of bots’ role in the spread on Twitter and Facebook. Indeed, the major drivers appear to be high-profile conspiracy theorists, conservative groups, and fringe outlets, at least according to the Indiana University- and Politecnico-affiliated coauthors.
“Our study raises a number of questions about how social media platforms are handling the flow of information and are allowing likely dangerous content to spread,” they wrote. “Regrettably, since we find that high-status accounts play an important role, addressing this problem will probably prove difficult.”