Communicating via messenger services opens many doors: keeping in close contact with international friends and family, organising club outings or political parties, and subscribing to media outlets. The possibilities are endless.
The end-to-end encryption offered by many messenger services not only provides a protected space for everyday communication; it also protects political dissidents and human rights activists. A look at Hong Kong highlights the vital importance of encrypted messenger communication.
However, messenger services have a dark side. The worldwide coronavirus pandemic has been accompanied by what the World Health Organization calls an infodemic. Disinformation and conspiracy theories are not only shared via Facebook, Twitter, YouTube and increasingly Instagram: messenger services and the protected spaces they offer are ideal channels for mis- and disinformation precisely because outsiders find the content shared there so hard to view. The voice messages from “Poldi’s mother” – Poldi is a German nickname for footballer Lukas Podolski – that were widely shared at the beginning of the pandemic, and the Telegram channel of the conspiracy theorist Attila Hildmann, show that in Germany a debate around messenger services and the role they play in mis- and disinformation is of crucial importance.
If anyone should be aware of the disastrous consequences of disinformation on messenger services, it should be WhatsApp. Brazil and India have already faced significant problems, be it the lynch mobs that led to violence and murder in India, or the political disinformation campaigns that helped to elect Jair Bolsonaro in Brazil. The spread of false information on messenger services has repercussions that differ significantly from those on public forums like Twitter or Facebook, where third parties can dissent and the platforms themselves can intervene by deleting or blocking fake content, as has become common practice for Facebook and Twitter with regard to COVID-19 information. This study highlights the issues of disinformation on messenger services.
Combatting disinformation and conspiracy theories, as well as educating people about their dangers, cannot be the responsibility of social media platforms alone. Comprehensive solutions are required, from regulation through to all-ages education – for instance through a federal agency for digital education. However, this does not absolve platforms and messengers of responsibility. Following the events in India and Brazil, WhatsApp introduced a feature that lets users see if a message has been forwarded. In response to the coronavirus infodemic, limits have been imposed on messages that have already been forwarded five times: these can only be passed on to a single chat at a time, not to five chats at once. WhatsApp says that large-scale message-sharing has decreased by 70 percent as a result. All of this works without WhatsApp ever seeing the content of the messages themselves, keeping end-to-end encryption intact.
WhatsApp has recently announced a new feature designed to combat the spread of disinformation on its service: web search. From now on, users in Brazil, Italy, Ireland, Mexico, Spain, the UK and the USA can check the veracity of a message that has been forwarded five times or more. These messages are tagged with a magnifying glass symbol, and the user is asked whether they want to perform a Google search on the message.
So far, so good. In principle anyone could perform the search themselves, but as the study above shows, this is often easier said than done. Messenger services are conducive to quickly sharing messages without reflection. Some nudging via symbols is surely a good idea, since competent technological design can discourage users from sharing disinformation and conspiracy theories.
However, this approach becomes a problem if users cannot afford a basic web search. One of the reasons why disinformation spreads especially well over messenger services in Brazil – one of the countries where the new feature is being rolled out – is so-called zero rating. This is a common feature of phone contracts in Brazil under which certain services that providers have struck deals with, often including WhatsApp, do not count against the data allowance. Visiting websites or services that are not zero-rated requires mobile data or a Wi-Fi connection. From a German perspective, the cost of a Google search or a Wikipedia visit may sound trivial – even though not everyone can afford a data plan here either. However, the scope of the problem becomes clear when comparing the monthly cost of data in Brazil and Germany: whereas the average household in Germany spends 0.5 percent of its net income on mobile data, in Brazil a broadband connection can cost as much as 15 percent of the household’s net income.
Another issue the recently announced WhatsApp feature doesn’t address is that only links can be fact-checked; the function isn’t available for pictures or videos. And yet it is audio-visual content that is most easily consumed and understood. Often, disinformation in visual form doesn’t even need translating from one language to another, since pictures speak their own language.
Platforms, messenger services and societies need stronger debates around combatting disinformation and conspiracy theories, as well as around digital education. The current infodemic surrounding the coronavirus highlights how crucial the issue is, not only with regard to public health and politics but also to elections. The problem is here to stay and affects everyone. Any approach to it must take into account that access to information must not be restricted to those who can afford it. Features and solutions from platforms and messenger services need to be judged according to the circumstances of the countries they operate in. For Brazil and other countries with zero-rating contracts, the new WhatsApp feature is unlikely to have a measurable impact.