Faced with the false content circulating on Facebook and Twitter, the two internet giants have rolled out tools to fight fake news around the pandemic. But have these remedies been effective?
“We are not just fighting an epidemic. We are fighting an ‘infodemic’.” On February 15, with the Covid-19 pandemic looming, the Director-General of the World Health Organization (WHO) sounded the alarm. At the Munich Security Conference, Tedros Adhanom Ghebreyesus insisted: “Fake news spreads faster and more easily than this virus, and it is just as dangerous.”
From false accusations targeting billionaire Bill Gates to conspiracy theories about the rollout of 5G, by way of supposed “state scandals”, fake news about the coronavirus has proliferated online and on social networks. Reporting of posts, dissemination of verified information, algorithmic limits… The platforms sought to position themselves as bulwarks against disinformation. But has it worked?
Imran Ahmed was struck by the proliferation of false information about the coronavirus. “For the first time, we realized that disinformation costs lives, and not just far away from us,” laments the director of the Center for Countering Digital Hate in London (United Kingdom). “Because of some fake news, people refused to wash their hands, for example.” An observation the BBC itself made in May, aggregating accounts of poisonings, assaults and other fatal consequences of this “infodemic”.
An “infodemic” that is nevertheless difficult to measure. The production of fact-checking articles, written in response to fake news, can however serve as a relevant indicator. According to the Reuters Institute for the Study of Journalism, the number of English-language fact-checking articles increased by 900% between January and March. An increase also noted by Cristina Tardáguila, deputy director of the International Fact-Checking Network (IFCN). Since the start of the epidemic, the 88 organizations in its “Coronavirus Facts Alliance” – to which franceinfo belongs – have written no fewer than 7,115 articles.
“All the organizations that are part of it know full well that many lies still need to be fact-checked. But we have no idea of the extent of the fake news universe.”
Cristina Tardáguila, to franceinfo
Enough to allow her to identify “eight waves of disinformation”, or rather eight recurring themes around the coronavirus: the origin of the virus; images of supposed faintings of infected people; false remedies and false preventive measures; accusations against China; the alleged supremacy of a race or religion over Covid-19; phishing during lockdown (online attacks have increased by 30,000% worldwide since January, according to France Inter); fake news spread by politicians, or attributed to them; and, finally, the manipulation of statistics, particularly as lockdowns were lifted.
“These waves crossed the planet with the virus. Some may have been definitively debunked, like bat soup, but others, like 5G, are persistent.”
Cristina Tardáguila, to franceinfo
Faced with the spread of fake news, several online platforms decided to better arm their users by innovating. Twitter and Facebook, for example, set up pages aggregating official content on Covid-19 from recognized media and authorities such as the WHO and the French Ministry of Health.
According to Facebook, the initiative has reached “two billion users” at least once. The WHO has also broadcast messages via chatbots (automated chat systems) on WhatsApp and Messenger, both owned by Mark Zuckerberg's company. Twitter is not to be outdone: on June 10, the social network began testing a new feature whereby any user who wants to share an article they have not opened on the platform must first confirm their choice. A way to fight the spread of articles shared on the strength of a catchy headline alone.
In the same vein, WhatsApp has limited large-scale sharing since April: beyond a certain number of shares, a message can only be forwarded to one conversation at a time. A restriction introduced in 2018, tightened in 2019 and reinforced against the coronavirus, which has reportedly already reduced the forwarding of highly shared messages by 70%, according to information obtained by franceinfo.
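The forwarding rule described above can be sketched in a few lines. This is a minimal illustration, not WhatsApp's actual implementation: the threshold value, and the names `FREQUENTLY_FORWARDED_THRESHOLD` and `allowed_recipients`, are assumptions for the sake of the example, since WhatsApp has not published its exact internals.

```python
# Illustrative sketch of a "frequently forwarded" rule: once a message
# has been forwarded many times, it may only be sent on to one chat
# per forwarding action. Threshold value is assumed, not official.
FREQUENTLY_FORWARDED_THRESHOLD = 5

def allowed_recipients(forward_count: int, requested_chats: int) -> int:
    """How many chats a single forwarding action may reach."""
    if forward_count >= FREQUENTLY_FORWARDED_THRESHOLD:
        # Message is "frequently forwarded": cap at one chat at a time.
        return min(requested_chats, 1)
    # Ordinary message: no extra cap from this rule.
    return requested_chats

print(allowed_recipients(2, 5))   # ordinary message: all 5 chats allowed
print(allowed_recipients(8, 5))   # frequently forwarded: only 1 chat
```

The point of such a cap is not to block any individual share, but to add friction: a viral chain that previously fanned out to many groups per click now spreads one conversation at a time.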
The fight against false information is not new: in November 2016, Google and Facebook were accused of having helped elect US President Donald Trump by letting fake news circulate on their platforms. Since then, under pressure from the European Union, the platforms have developed policies to combat disinformation.
On Facebook's side, false information is tracked in three ways: “human moderation”, automated detection, and the work of 60 partner fact-checking organizations, including five French media outlets. Based on 7,500 assessments they produced, the social network says it has placed warnings on more than 50 million posts during the pandemic. These warnings all link to fact-checking articles on Covid-19 and trigger a drastic algorithmic limitation of the posts' distribution.
Enough, in principle, to contain the spread of false information: according to the company, when users see these warnings on screen, they ignore the original post “95% of the time”. And when posts create “an imminent danger” for people, the Silicon Valley giant goes as far as deleting them: “hundreds of thousands” of posts reportedly met this fate in March.
Twitter, for its part, says it too targets false posts about Covid-19. “Our automated systems have detected more than 4.3 million accounts targeting discussions around Covid-19 with spam or manipulation,” Twitter told franceinfo. “But unlike Facebook, Twitter does not give a verdict,” observes Cristina Tardáguila.
“A label allows users to consult a page that aggregates verified information on Covid-19. But it does not say what is true and what is false.”
Cristina Tardáguila, to franceinfo
A method introduced just before the epidemic, which attracted considerable attention when two tweets from President Donald Trump on postal voting were labeled in May – whereas Facebook, conversely, exempts politicians, except in their advertisements. And when posts appear “dangerous”, Twitter says it goes as far as deleting them – barely 1,100 in two weeks at the end of March. That was the fate of two tweets from Brazilian President Jair Bolsonaro, removed from the social network in late March for contradicting lockdown rules.
Finally, Twitter has continued its campaign to delete state-linked accounts used for manipulation. In a June 12 press release, the social network says it deleted 23,750 Chinese accounts (and 150,000 others dedicated to amplifying their content), grouped in an archive available online. As the Stanford Internet Observatory revealed, the disinformation campaign attributed to mainland China focused in particular on Covid-19, with posts “praising China's response to the virus”, sometimes compared with the measures taken by “the United States and Taiwan”.
We’re disclosing new state-linked information operations to our public archive – the only one of its kind in the industry. Originating from the People’s Republic of China (PRC), Russia, and Turkey, all associated accounts and content have been removed. https://t.co/obRqr96iYm
– Twitter Safety (@TwitterSafety) June 11, 2020
But these measures are not fully effective, according to several studies. One is research by the Reuters Institute for the Study of Journalism, published in April. Out of a sample of 225 false or partially false claims, the research team found that 59% of them carried no warning on Twitter. Facebook and YouTube fare better, at 24% and 27% respectively.
The Center for Countering Digital Hate (CCDH) also points to flaws, in a report released in June: the British organization gathered 649 posts from Facebook, Instagram and Twitter sharing false information about the coronavirus. Its finding is stark: 90.6% of them were neither labeled nor deleted – and Twitter, too, seems to lag behind, with only 3.3% of messages deleted.
▪️41 (6.3%) were removed
▪️13 (2%) of accounts posting them were taken down
▪️7 (1.1%) were flagged as false
▪️90.6% remain online pic.twitter.com/jdVvU6FPjy
– Center for Countering Digital Hate (@CCDHate) June 4, 2020
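The CCDH percentages quoted above are internally consistent if the four categories (removed, account taken down, flagged, still online) are treated as mutually exclusive, as this quick arithmetic check shows. The assumption of exclusive categories is ours, inferred from the fact that the figures then add up.

```python
# Quick check of the arithmetic behind the CCDH figures quoted above.
# Assumes the four categories are mutually exclusive (our inference).
total = 649            # posts sampled by the CCDH
removed = 41           # posts removed by the platforms
accounts_down = 13     # posts whose authors' accounts were taken down
flagged = 7            # posts flagged as false

still_online = total - removed - accounts_down - flagged  # 588 posts

def pct(n: int) -> float:
    """Share of the sample, as a percentage rounded to one decimal."""
    return round(100 * n / total, 1)

print(pct(removed), pct(accounts_down), pct(flagged), pct(still_online))
# → 6.3 2.0 1.1 90.6  (matching the percentages in the tweet)
```

In other words, the headline “90.6% remain online” is simply the remainder once the three enforcement categories are subtracted from the sample.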
“These measures are based on good intentions (…) but social networks should apply them correctly,” analyzes Imran Ahmed, director of the CCDH. “Either these rules were introduced to polish their image, or their system is not built for this. In both cases, the public loses out,” he charges. Imran Ahmed also points to the platforms' increased reliance on automated detection, which may be less effective, since moderation teams cannot carry out this work from home – as Le Monde also reported in March.
Studies by NewsGuard and Avaaz have likewise produced mixed results. But Cristina Tardáguila tempers: “These figures can vary widely depending on what you do or do not consider fake news. There is no single, universally accepted definition of the expression ‘fake news’ today. How, then, can we measure it?” wonders the deputy director of the International Fact-Checking Network.
Alerted in March by European leaders to the multiplication of false information around Covid-19, the European Union responded on June 10, calling on online platforms to “step up their efforts” despite “positive steps”, in the words of Věra Jourová, Vice-President of the European Commission responsible for values and transparency.
Established in September 2018, the European Code of Practice on Disinformation had already been signed by Facebook, Google, Twitter and Microsoft. Its objective? To support and coordinate platforms in their efforts against fake news, with reports to be submitted. And indeed, from June, these same companies will be asked to publish, under the Code, “monthly reports on their policies and actions to combat [Covid-19-related] disinformation”.
“Fact-checkers, researchers and civil society organizations have an essential role to play,” says a European Commission communication. “But some platforms have not given them sufficient resources to do so during the current public health crisis.”
“Social media platforms therefore need to step up their efforts, share more information and be more transparent and accountable.”
European Commission, June 10, 2020
Such statements are far from a first. In October 2019, the Commission was already calling on online platforms to establish “constructive cooperation with more organizations” and asking for better “data access” for independent researchers. Created in June 2020, the European Digital Media Observatory (EDMO) will therefore be tasked with working with social networks, with one objective: making their data accessible to researchers and fact-checkers.