Voice messages on Whatsapp, corona channels on Telegram: network expert Ann Cathrin Riedel has investigated how lies and propaganda spread via messengers. She considers the problem underestimated.
In March, a voice message from a woman calling herself “Elisabeth, the mother of Poldi” spread across Germany. The University of Vienna had allegedly found that ibuprofen increases the risk of developing severe Covid-19. Although the researchers denied this, the news unsettled many people. The channel on which the false claim circulated was Whatsapp. For Ann Cathrin Riedel, chair of LOAD, the Association for Liberal Network Policy, this was just the beginning: “It is foreseeable that disinformation will spread not only more quickly via messengers, but also in greater numbers in Germany,” she writes in an analysis for the Friedrich Naumann Foundation, a foundation affiliated with the FDP. In this interview, Riedel explains which messengers are particularly affected, why she avoids the term fake news, and what users can do.
SZ: When it comes to disinformation, the debate is almost always about Facebook or YouTube. Why do you devote 30 pages exclusively to messengers?
Ann Cathrin Riedel: Messengers have so far received little attention in this context. That is a mistake. In recent years, countries such as India and Brazil have shown how easily and quickly disinformation can spread via messengers. With the corona pandemic, awareness of the problem is slowly arriving in Germany. There was, for example, the voice message from “Elisabeth, Mama von Poldi,” which warned against ibuprofen and was sent via Whatsapp. Billions of people use Whatsapp, Telegram or Facebook Messenger every day, but what happens there is barely visible from the outside. Different group dynamics are at work there than in public spaces such as Facebook or YouTube. Messengers provide a dangerous breeding ground for disinformation.
How are the operators reacting? Is there a messenger that you consider particularly dangerous?
Facebook is at least trying to prevent Whatsapp from being misused for disinformation, propaganda and hate campaigns. Messages can only be forwarded to a limited extent and are labeled as forwarded. The content itself is encrypted, so Whatsapp cannot see it, and that should stay that way. Telegram shows less good will. Many right-wing extremists and conspiracy ideologues have retreated there and use groups and channels to spread their messages at scale. This became particularly clear during the corona crisis. Telegram rarely deletes such open channels, even when obviously illegal content is shared.
You avoid the term fake news in your paper, even though it is widely used. What bothers you about it?
The term is overused and applied carelessly. Often it serves only to defame political opponents. Donald Trump, for example, uses it to discredit the free press; I think that is dangerous. The term fake news does not even fit the phenomenon I am investigating. It is not about sloppy research, but about targeted manipulation. The spreaders want to sow discord and divide societies. If you want to find strategies against disinformation, you have to be precise in your language.
In what forms does disinformation appear?
When we hear about false news, we often think of text. But much of the disinformation in circulation is visual: photos, graphics, videos or memes. These formats can be consumed and shared much faster, and sometimes they do not even have to be translated. A Brazilian journalism project, Comprova, analyzed disinformation on Whatsapp in Brazil. It found that mainly images are shared, followed by videos and, increasingly, voice messages. Often not everything in them is wrong; instead, essential information is left out or taken out of context.
Who spreads disinformation and with what motives?
In principle, anyone can become a disseminator. Especially with memes, we often see that people share them because they find the montages funny. In doing so, they often spread racist and anti-Semitic stereotypes without realizing it. But states also get involved, of course. They rely on orchestrated campaigns to destabilize other societies. That has always existed, but in the digital age such operations work even more perfidiously and efficiently. Russia and China are very active here, and not just in the USA but also in Germany, for example through state-funded propaganda channels such as RT Deutsch or disguised social media channels on all platforms.
What can politics do? Are stricter laws needed? Should false claims be banned?
Bans are often demanded, but they are nonsense. Lying is not punishable per se, and that is a good thing. We do not need stricter laws, but sensible regulation. It should not be about content, but about the framework set for the operators. Technical measures can also help; Whatsapp has already made a start. I hope that a debate on messengers will increase political and social pressure so that companies become aware of their responsibility. Education and training are further starting points. I would like to see a federal center for digital education. We have to reach people of all ages with educational offerings for the digital world, not always just think about schools.
Online communication is increasingly shifting into closed spaces such as groups and messengers, where outsiders cannot contradict, set the record straight or, if in doubt, report content. What can companies like Facebook do to prevent massive disinformation?
Awareness campaigns can help. In the wake of the corona pandemic, services such as Instagram, Telegram and Spotify have started publishing information about Covid-19; you see reminders to wash your hands and wear masks. Pointing out the danger of disinformation in the same place would be a start. However, we must not simply pass responsibility on to the companies. Users themselves also have a contribution to make: disagree when nonsense is spread in the sports club's Whatsapp group; say that anti-Semitic memes are not funny but misanthropic; proactively warn family, friends and acquaintances about disinformation that is currently circulating. In closed groups in particular, members have to intervene themselves. Criminal content shared there can be reported to the authorities just like public posts. And you should do that too.