Misinformation poses a bigger threat to democracy than you might think

Around four billion people have the opportunity to cast their votes in a series of major elections this year. However, the threat to democratic integrity posed by misinformation and disinformation looms large. An effective democracy relies on evidence-based discourse and informed citizens. Concern about the expected blizzard of election-related misinformation is warranted, given the capacity of false information to boost polarization and undermine trust in electoral processes1–3.

Specifically, there is widespread worry about malign influence on voters, be it through conventional propaganda (including warmongering and xenophobic disinformation), unsubstantiated claims about candidates or AI-generated deepfakes (digitally altered visual media). Anti-democratic agents might also attack the electoral process directly, as was, for instance, seen in Spain in July last year, when malicious foreign actors set up an imitation of the Regional Government of Madrid’s website just before an election to falsely claim that terrorists planned to attack polling stations.

Several mechanisms exist to protect the public against misinformation, from general educational interventions to specific attempts to counter misleading messages with evidence-based campaigns4. But deploying these mechanisms requires scholars and practitioners to resolve three issues: recognition of the seriousness of the problem; acceptance that classifying information as false or misleading is often warranted; and an assurance that interventions against misinformation uphold democratic principles, including freedom of expression.

As misinformation researchers, we have witnessed an undermining of all of these preconditions over the past few years. With the rise of populist political movements, along with a general attitude of suspicion towards ‘experts’ in some communities, misinformation researchers — like climate scientists and public-health authorities before them — have at times been portrayed as unelected arbiters of truth and subjected to harsh criticism.

Some critics, even in the scholarly community, have claimed that concerns about the spread of misinformation reflect a type of ‘moral panic’. They argue that the threat has been overblown; that classifying information as false is generally problematic because the truth is difficult to determine; and that countermeasures might violate democratic principles because people have a right to believe and express what they want5–10. This trend must be reversed, because it is based on a selective reading of the available evidence.

We encourage researchers all over the world to redouble their efforts to combat misinformation, and we offer evidence to show that the deployment of countermeasures is valid and warranted.

Why truth matters

The Holocaust did happen. COVID-19 vaccines have saved millions of lives. There was no widespread fraud in the 2020 US presidential election. The evidence for each of these facts has been established beyond reasonable doubt, but false beliefs on each of these topics remain widespread.

For example, a survey in July 2023 found that almost 40% of the US public — and almost 70% of Republicans — denied the legitimacy of the 2020 presidential election outcome (see go.nature.com/4e43ps2). These beliefs have real-world consequences, such as hate campaigns that target poll workers and their families.

We argue that acquiescence in the face of widespread misinformation and dismissal of the prospect that information can ever be confidently classified as true or false5,9,10 are morally troubling choices. The fact that veracity is often conceptualized better as a continuum than as a dichotomy5, and that some claims cannot be unambiguously classified as true or false, must not detract from the reality that there are many incontrovertible historical and scientific facts.

The interplay between evidence-based reasoning and disinformation is nowhere better illustrated than in scientific issues, for which disinformation is often organized rather than haphazard, and for which historians (including some of us) have carefully examined how evidence is accumulated and knowledge is formed. Scientific knowledge cannot be understood as absolute, but this does not imply that scientific findings are arbitrary or unreliable, or that there are no valid standards for adjudicating scientific claims11,12.

Similar standards exist for domains outside science, in which knowledge can be accumulated through processes such as investigative journalism, legal proceedings, corporate investigations and formal public inquiries. A blanket reluctance to assign labels of credibility to information is therefore unwarranted, despite the real difficulties that can sometimes arise with classifying information as true or false, especially in real time.

Opinions that go against an expert consensus are often promoted by individuals who present themselves as heroic rebels. But in many instances in which an expert consensus is questioned, there is evidence that the opposing arguments are untenable or deny fundamental knowledge, and that they are driven by political or ideological motivations13.

How to counter falsehoods

The idea that misinformation cannot be reliably identified is often accompanied by claims that it is premature to conclude that there is a problem, and therefore premature to act5,7. This argument is familiar: it was used in the decades-long campaigns led by the tobacco and fossil-fuel industries to delay regulation and mitigative action. However, there is sufficient firmly established scientific knowledge to warrant both concern about misinformation and widespread deployment of countermeasures.

For example, researchers know that false and misleading claims are believed more when they are repeated; that such claims can have measurable impacts on beliefs, attitudes and behaviours, both directly and indirectly by shaping ideological narratives; and that corrections and fact checks are only partially effective, even when people have no motivation to cling to a false piece of information14–16.

Communicators also have access to a number of tools that are able to protect people against being misinformed or misled4. If the objective falsity of a claim is known and the misinformation has already been communicated, fact-checking and refuting a specific claim retroactively — or ‘debunking’ — is the intervention of choice. However, it requires the targeted misinformation to be identifiable and falsifiable, which limits scalability. By definition, the intervention is also reactive rather than proactive.

To be proactive — for example, if the misinformation is anticipated but not yet disseminated — psychological inoculation is a strong option that governments and public authorities can consider using. Inoculation involves a forewarning and a pre-emptive correction — or ‘prebunking’ — and it can be fact-based or logic-based.

To illustrate the former, the US administration led by President Joe Biden pre-empted Russian President Vladimir Putin’s justification for invading Ukraine in February 2022. In a public communication, citizens in several countries were forewarned that Putin would seek to make misleading claims about Ukrainian aggression in the Donbas region to rationalize his decision to invade. This forewarning might have limited the international community’s willingness to believe Putin’s claims when they were subsequently made.

Logic-based inoculation, by contrast, is useful even when false claims are not specifically known, because it aims to educate citizens more generally about misleading argumentation. The intervention focuses on whether arguments contain logical flaws (such as false dilemmas or incoherence) or misleading techniques (such as fearmongering or conspiratorial reasoning), rather than attempting to provide evidence for or against a specific claim.

As well as proving successful in the laboratory, large-scale field experiments (on YouTube, for example) have shown that brief inoculation games and videos can improve people’s ability to identify information that is likely to be of poor quality17. Although some critics think that such interventions aim to “limit public discourse ... without consent” and do so “paternalistically” and “stealthily”7, this is a misrepresentation of the interventions, which seek merely to educate and empower people to make informed judgements free from manipulation.

Further countermeasures that are compatible with democratic norms include accuracy prompts, which aim to focus users’ attention on the veracity of information to reduce the sharing of misleading material online, and friction elements, which briefly delay people when they interact with information online to discourage them from sharing content without reading it first4.
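To make these two mechanisms concrete, the sketch below shows how a sharing flow might combine them. It is a minimal, hypothetical illustration in TypeScript: the `Post` type, the `promptUser` callback and the `shareWithFriction` function are invented for this example and do not correspond to any real platform’s interface.

```typescript
// Hypothetical sketch only: 'Post', 'PromptFn' and 'shareWithFriction' are
// illustrative names, not any real platform's API.

interface Post {
  url: string;
  headline: string;
  openedByUser: boolean; // did the user actually open the article?
}

// Stand-in for whatever confirmation dialogue a platform provides.
type PromptFn = (message: string) => Promise<boolean>;

async function shareWithFriction(post: Post, promptUser: PromptFn): Promise<boolean> {
  // Friction element: briefly interrupt a share of unopened content,
  // rather than blocking it outright.
  if (!post.openedByUser) {
    const proceed = await promptUser(
      "You haven't opened this article yet. Share it anyway?"
    );
    if (!proceed) return false; // user opted to read first
  }

  // Accuracy prompt: direct attention to veracity before the share completes.
  return promptUser(
    `To the best of your knowledge, is this headline accurate?\n"${post.headline}"\nShare now?`
  );
}
```

Crucially, both steps merely pause and prompt; the user remains free to share, which is why such nudges are compatible with freedom of expression.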

The promotion of social norms (such as not making claims without evidence) and more general educational interventions, such as teaching information- and source-verification techniques, are also useful. Although some of these interventions will have only small effects, might be resisted by platforms that prefer frictionless sharing, or require strong engagement from information consumers, they all enrich the toolbox of communicators4.

Of course, there are complexities. For example, combating misinformation across multicultural and multilingual settings can be onerous. Ahead of the elections for the European Parliament that start on 6 June, a set of inoculation videos has been made available by Google Jigsaw (a unit in Google that explores and confronts threats to open societies), forewarning voters about narrative techniques that often feature in misinformation-laden messages (see go.nature.com/4bpjkbg).

Although the information provided by Jigsaw can be translated into 24 languages, users in many smaller EU countries will be able to access these videos only in English. And although one might hope that the techniques necessary to identify misleading content are universal, the relevant topics and themes, as well as the levels of trust in government and the media, will differ between countries.

Ideally, generic interventions will therefore need to be complemented with local initiatives that use locally trusted sources and relevant examples. This illustrates the inherent difficulty of countering misinformation and disinformation at a large scale. Unfounded criticism directed at misinformation researchers provides intellectual cover for disinformers and makes this work even harder.

A call to defend democracy

The deployment of any countermeasure must contend with critics who think that misinformation interventions are generally unethical and antidemocratic because they tell people what to think6,7. This thinking is fundamentally flawed for at least two reasons. First, it misrepresents what the interventions aim to achieve: seeking to influence people’s thinking in good faith through evidence-based information is not telling people what to think. Second, it misrepresents how both disinformation and interventions interact with democratic principles.

Democracy relies on authentic deliberation and open debate that transparently shape decision-making processes. Misinformation disturbs this fundamental mechanism of democracy. In general, it is unethical and antidemocratic for anyone — including scientists and those in power — to deceive and disinform the public on important matters that affect them, such as public-health issues or the risks from climate change. However, it does not follow that it is unethical or problematic to counter such disinformation.

On the contrary, identifying and countering misinformation so that a discerning public can choose to ignore it is upholding democracy. Empowering people to seek the truth — to evaluate evidence and spot manipulation — is an essential building block of this pursuit and the antithesis of deception. To illustrate, correcting election-fraud misinformation in the United States has been shown to positively affect trust in electoral processes, thereby exerting a protective effect on democracy18.

Critics have also argued that misinformation countermeasures curtail individual freedom by constraining the exchange of ideas. We think it is important to scrutinize which individual freedoms are in fact being threatened and by what or whom. Such scrutiny will recognize that disinformation impinges on a person’s right to be accurately informed about the risks they are facing (be it from tobacco, climate change or long COVID, for example). Accordingly, the public is concerned mainly about freedom from manipulation and misdirection and is therefore broadly supportive of interventions that aim to reduce misinformation spread, susceptibility or impacts19.

What can be done

Public-facing communicators at all levels — including governments, non-governmental organizations, the media and the research community — should be encouraged to distribute evidence-based information and counter misinformation when it is deemed likely to be harmful14,16.

To illustrate, false claims about climate change, the efficacy of proven public-health measures and the ‘big lie’ about the 2020 US presidential election have all had clear detrimental impacts that could have been at least partially mitigated in a healthier information environment. We specifically urge academics not to be silenced by voices that push back against evidence-informed argumentation under the guise of free speech. Because the truth can be contested, difficult to pin down and sometimes impossible to prove, many scholars have become wary of defending facts, and even of the concept of factualness.

Simply declaring that ‘facts are facts’ is not sufficient, particularly given that people’s processing of evidence and knowledge claims is to some extent shaped by social factors. It is precisely because truth is not self-evident that malicious actors can easily create confusion. Therefore, academics, intellectuals and editors need to promote evidence-based information and stand firm against false or fraudulent claims, unafraid to call them out as such. We know from first-hand experience that this can be frustrating, as climate scientists who have been actively countering climate disinformation for decades can confirm.

But given the number of elections this year and the impact they are set to have on such a large proportion of humanity, the need to fight back against mistruths has never been more urgent. Not every claim can be unambiguously classified as true or false, but many can be. Not all misleading claims are harmful, but many are. If scholarly debate ignores this body of evidence, it might inadvertently play into the hands of malicious agents with antidemocratic and antiscientific agendas. These actors will welcome academic disputes about the existence of ground truths and the ethical justification of interventions, as they pursue ideologically motivated goals.

Crucially, efforts to keep public discourse grounded in evidence will not only help to protect citizens from manipulation and the formation of false beliefs but also safeguard democracy more generally. Governments have access to a sufficient array of research-informed tools to make a difference. Policymakers and elected officials can do their part by listening more keenly to the evidence. Online platforms can also contribute to this endeavour, either voluntarily or in response to regulatory pressures, such as the EU’s Digital Services Act, which regulates online platforms to prevent the spread of disinformation. The time to act is now.

References

  1. Lewandowsky, S. et al. Technology and Democracy: Understanding the Influence of Online Technologies on Political Behaviour and Decision-Making (Publications Office of the European Union, 2020).

  2. Lewandowsky, S. et al. Curr. Opin. Psychol. 54, 101711 (2023).

  3. van der Linden, S. et al. Using Psychological Science to Understand and Fight Health Misinformation: An APA Consensus Statement (American Psychological Association, 2023).

  4. Kozyreva, A. et al. Nature Hum. Behav. https://doi.org/10.1038/s41562-024-01881-0 (2024).

  5. Adams, Z., Osman, M., Bechlivanidis, C. & Meder, B. Perspect. Psychol. Sci. 18, 1436–1463 (2023).

  6. Bretter, C. & Schulz, F. Proc. Natl Acad. Sci. USA 120, e2217716120 (2023).

  7. Freiling, I., Krause, N. M. & Scheufele, D. A. AMA J. Ethics 25, 228–237 (2023).

  8. Krause, N. M., Freiling, I. & Scheufele, D. A. Ann. Am. Acad. Polit. Soc. Sci. 700, 112–123 (2022).

  9. Uscinski, J., Littrell, S. & Klofstad, C. Curr. Opin. Psychol. 57, 101789 (2024).

  10. van Doorn, M. Inquiry https://doi.org/10.1080/0020174X.2023.2289137 (2023).

  11. Oreskes, N. Why Trust Science? (Princeton Univ. Press, 2019).

  12. Vickers, P. Identifying Future-Proof Science (Oxford Univ. Press, 2022).

  13. Lewandowsky, S. et al. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/q7vbr (2024).

  14. Ecker, U. K. H. et al. Nature Rev. Psychol. 1, 13–29 (2022).

  15. Henkel, I. Destructive Storytelling: Disinformation and the Eurosceptic Myth That Shaped Brexit (Palgrave Macmillan Cham, 2021).

  16. Tay, L. Q., Lewandowsky, S., Hurlstone, M. J., Kurz, T. & Ecker, U. K. H. Commun. Psychol. 2, 4 (2024).

  17. Roozenbeek, J., van der Linden, S., Goldberg, B., Rathje, S. & Lewandowsky, S. Sci. Adv. 8, eabo6254 (2022).

  18. Bailard, C. S., Porter, E. & Gross, K. Harv. Kennedy Sch. Misinformation Rev. https://doi.org/10.37016/mr-2020-109 (2022).

  19. Kozyreva, A. et al. Proc. Natl Acad. Sci. USA 120, e2210666120 (2023).
