Disinformation is threatening democracy and governments are only half listening



2024 is set to break records as the year with the most voters taking part in democratic elections in world history. Across these elections, online disinformation poses a threat that policymakers have yet to respond to effectively. An established pattern of disinformation campaigns surrounding elections has exposed the need for legislators to go further to preserve our democratic values and institutions. They can do this by placing greater responsibilities on platforms to act, supporting the education of their own populations and being bold in tackling prominent sources of disinformation head on. The stakes are high. Democracy relies on people being able to pass fair judgement on politicians and parties. Without accurate information, how can anyone fairly assess a politician's record, the most important issues facing a country or the best way to deal with a problem?

Foreign disinformation has become a prominent concern for the integrity of democratic elections over the last decade. During the 2016 US election, the realisation that election misinformation could drive traffic and generate advertising revenue sparked an industry in North Macedonia driven by opportunists seeking profit. The same election revealed the threat of state-driven foreign interference when the Russian 'troll farm' known as the Internet Research Agency created fake American accounts to successfully spread false information about Hillary Clinton. More recently, the defeat of a candidate critical of China in a Canadian constituency with a large Asian population has been attributed in part to online Chinese disinformation campaigns.

Disinformation is also driven by domestic sources, so treating it purely as a problem created by foreign actors would be misleading. During the 2016 EU membership referendum in Britain, the Leave.EU campaign bought targeted Facebook ads that used false information to persuade voters, including claims that Turkey was about to join the EU and inaccurate figures for the cost of EU membership. The high stakes of online disinformation became clear after the 2020 US election, when the spread of false information about the result mobilised people to storm the US Capitol building in an attempt to overturn it. Importantly, in both of these examples domestic political elites, Brexit-supporting British politicians in one case and Donald Trump in the other, played key roles in legitimising false information, indicating that disinformation is not a threat that comes entirely from invisible sources.

The landmark Online Safety Act in Britain, designed to revolutionise the regulation of online platforms, has fallen short in its ability to act effectively on disinformation. Despite the prominence of the threat and campaigning from groups such as Full Fact during the policy formulation process, the Act does little to deal with disinformation about elections, health or crisis situations. Policymakers have instead focused only on foreign disinformation, despite the prominence of domestically driven disinformation.

A national security framing has been key to the success of regulation targeting foreign disinformation. For instance, while the Online Safety Act did little for other types of disinformation, it created a new offence of foreign interference: disseminating disinformation on behalf of another state in a way that undermines British democracy. Another example of national security being used to justify action was the EU's decision to ban Russian state-sponsored news outlets following the invasion of Ukraine, in order to reduce the spread of Russian disinformation. One study of disinformation on social media following these bans found an immediate drop in the amount of disinformation spreading across platforms, indicating their success. This decline can largely be explained by the fact that, while many accounts spread disinformation, official accounts and accounts with large followings both appear more legitimate and have greater reach, making their posts more likely to be viewed, shared and forwarded.

Social media disinformation is difficult to respond to for three key reasons: the hundreds of millions of active users across a wide number of sites; the significant number of bots which spread disinformation across those sites; and the willingness of a range of harmful actors (governments, news sites and trolls) to spread it. Policies designed to tackle the supply of disinformation are therefore difficult to envisage because of the sheer scale of the challenge. Responses would instead benefit from focusing on the recipients of misinformation, so that false information fails to take root despite malicious attempts to plant it.

Policies designed to help users identify and respond to misinformation are the most effective and least controversial. A study by the Turing Institute on health misinformation found that people with better access to accurate information on a subject were less susceptible to believing the false information presented to them. Another study, by Soetekouw and Angelopoulos, showed that people who had received training in identifying fake news were much better at doing so when presented with a mix of accurate and inaccurate news. This research indicates that education policy could be key. Some social media sites, such as TikTok, already prompt users with vaccine information on related posts; this could be expanded across more platforms and subject matters. Certain employers and schools could also be required to provide misinformation training, equipping people with a key skill for the modern internet age. Finally, the supply side shouldn't be dismissed entirely: repeat offenders with large followings ought to be banned more routinely, as the effectiveness of the RT ban demonstrates. Ultimately, tackling disinformation requires both governments and platforms to commit to helping social media users identify inaccurate information.




