Online Misinformation in The Year of Elections: Between Lies & Truth

2024 will see more voters than ever in history heading to the polls, with at least 64 countries, plus the European Union, holding elections.

Meanwhile, AI-generated disinformation is on the rise, with the volume of election-related deepfakes increasing by an average of 130% per month on X. [source]

Put advanced technology and the scale of upcoming elections together, and we have fertile ground for dangerously misleading information to spread and cause real harm.

As we wrap up our series on misinformation in the year of elections on the Click to Trust Podcast, Tom Siegel shares his insights on the evolution and impact of misinformation, as well as potential measures to counteract its spread. Tom spent 14 years at Google, where he founded its Global Trust & Safety team, bringing invaluable insight to the topic.

Today I’ll share some of the insights from that conversation (which you can stream here). 

We’ll start with understanding the differences between misinformation and disinformation – two very different beasts.

Misinformation vs Disinformation

"Misinformation may very well be the most used term in elections this year and going forward."

As we navigate the current political and social landscape, understanding the distinction between misinformation and disinformation is key.

Misinformation is the spread of false information without malicious intent, often leading to public confusion or manipulation. Think of your mom sharing an outdated video of a natural disaster in your town, believing it happened recently. 

Disinformation refers to false or misleading information spread deliberately to deceive or gain an advantage. During the COVID-19 pandemic, multiple disinformation campaigns spread misleading claims about the virus, its origins, preventive measures, and vaccines – many of which led to actual harm. [source]

By understanding the motives behind the spread of information, we can recognize misinformation as a real threat and make informed decisions. 

Infographic explaining the difference between misinformation and disinformation

As users of these exciting online spaces where millions of pieces of content are posted every minute, it’s crucial that we learn how to distinguish truth from fiction. This is especially important as AI-generated content accelerates the rate at which new information is created and shared. 

However, identifying who is spreading misinformation online remains a significant challenge. One of the biggest hurdles for companies is tracking down the actors responsible, a task that is far from easy.

Who spreads misinformation?

“You have everything from some small-time fraudster to state-sponsored actors who are getting involved in foreign elections.”

From someone who accidentally shares a doctored image because they think it's legit, to state-backed powerhouses that meticulously craft narratives to tilt elections their way, the sources of misinformation are varied. Essentially, there are three categories of mis/disinfo spreaders:

1. Solo Misinformers: People scrolling through their feeds and stumbling upon a juicy piece of "news." They pass it on to their circle without double-checking, believing they’re in the know. They might not have ill intentions, but they’ve added misinfo to the echo chamber.

2. Organized Groups: From tightly-knit organizations to loose bands of allies, these groups pump out skewed stories to push a particular angle or cause. They use every tool in the digital toolbox to create and spread false stories that undermine political adversaries—from fake accounts to sophisticated bot networks.

3. State-Sponsored Heavyweights: Meet the big guns. Supported by their governments, these actors aim to shake up public opinion or meddle in the democratic processes of rival nations. For example, a country might deploy an army of trolls to spread divisive content about another country’s upcoming election, trying to create chaos and distrust.

In Thailand, state-sponsored disinformation has been employed to influence electoral outcomes and undermine political opposition. State resources were used to promote favorable narratives about the establishment while discrediting opposition parties and activists (see examples here).

These campaigns often target marginalized groups, contributing to political polarization and obstructing democratic processes.

This diversity of actors complicates efforts to keep misinformation at bay.

It’s often a game of whac-a-mole where you catch misinformation pieces as they rise, but not the actors who continuously spread it.

Artificial Intelligence and Misinformation

"AI is on that same level as misinformation. You put the two together, and honestly, it could get really scary."

AI has become a pivotal player, and a double-edged sword, in the misinformation saga. 

On one hand, AI can rapidly generate convincing false content that aims to influence public opinion. On the other, it's a vital tool in detecting and fighting misinfo. As AI evolves, so does the sophistication of the misinformation it helps create, as well as the ways in which it helps to squash it.

The Bad: How AI Helps Spread Misinformation

In 2023, the number of Unreliable AI-generated News Sites (UAINS) identified by NewsGuard skyrocketed, going from 49 domains in May to over 600 by December.

AI has completely changed how fake news, doctored images, and deepfake videos are made. During the 2020 U.S. election cycle, manipulated videos showing politicians making false statements were widely shared.

One infamous example is a doctored video of Nancy Pelosi, edited to make her seem drunk during a public speech. One post of the altered video of the House Speaker was shared by more than 100,000 people on Facebook.

AI manipulated video of Nancy Pelosi screenshot

Besides generating content at an alarming pace, AI can also help automate the rapid spread of content across multiple platforms and accounts simultaneously. This scalability means misinformation can reach a large audience quickly, often outpacing efforts to fact-check and debunk it. 

Thankfully, fact-checkers aren’t fighting alone: the very technology that can spread misinformation can also help counter it.

AI as a Defense Mechanism

AI can also play a critical role in combating misinformation, leveraging advanced algorithms to detect and debunk false information. 

During the COVID-19 pandemic, these systems were key for maintaining the integrity of public health information. 

A great example of this is UC Riverside’s AI system, which identified unique COVID-19 symptoms using Google Trends data, helping debunk related misinformation.

Two other examples of AI being used to fight misinformation:

  • MIT's RIO program, which detects disinformation narratives and identifies the accounts spreading them on social media.
  • Macquarie University's AI model, which recommends verified news articles over fake news, steering users towards accurate information.

This dual role of detection and education highlights AI's essential function in ensuring truth and transparency.

AI's dual role in both spreading and combating misinformation highlights the need for robust counter-strategies.

As AI continues to evolve, we have to stay vigilant and prepared to tackle the challenges it brings, and learn how to use this technology for good. 

How to fight misinformation

Fighting misinformation requires a multifaceted approach. In our chat, Tom Siegel laid out a few key strategies: 

1. Fact-checking organizations serve as authoritative sources, assessing claims and statements to verify their truthfulness. With over a hundred such organizations globally, they play a pivotal role in ensuring the public has access to accurate information. 

According to Karishma Shah, Program Manager in charge of News Integrity Partnerships at Meta, fact checkers are essential in dealing with manipulated media, including AI-generated content. 

From July to December 2023, over 68 million pieces of content viewed in the EU on Facebook and Instagram had fact-check labels. 

2. Media Reliability Rating Services, such as NewsGuard, focus on rating the reliability and credibility of news and information sources. They use trained journalists to evaluate the websites responsible for 95% of online engagement in the countries where they operate.

By applying nine journalistic criteria, they create a "nutrition label" for each site, which helps users identify reliable sources. 

NewsGuard has rated over 10,000 sites, and more than 2,000 have improved their scores based on the feedback. This approach not only informs users but also incentivizes news sites to adhere to higher standards.

3. Regulators and governments set the legal guardrails for what can and cannot be done, providing a structured framework to combat misinformation. 

For example, the European Union's Code of Practice on Disinformation requires signatory platforms to monitor and report on the levels of disinformation they host and how quickly it gets removed. This kind of regulatory oversight is crucial in keeping misinformation in check.

4. Trust & Safety Service Providers, such as TrustLab, use a combination of classifiers, publicly available models, and commercial solutions to identify and address misinformation. 

Their independence allows them to respond quickly to new trends and provide unbiased solutions, making them an essential part of this strategy.

Companies like TrustLab are pioneering in this space, providing cutting-edge tools to detect and mitigate misinformation. You can find out more about what we’re doing here.

5. Individual Responsibility and Education is another key strategy, and perhaps the most important. 

Media literacy programs in schools have been shown to reduce susceptibility to fake news by 26%.

Platforms and advertisers play a huge role, as do governments, but at the end of the day, users also have a part to play in fighting misinformation. Staying informed, keeping up to date, and informing others in your circles (especially those most susceptible to misinformation) are good ways to navigate this year, and the coming years, with AI.

The challenge is dynamic and requires that our tools and strategies continuously adapt and evolve.

I’ll finish this blog post with some resources that you, as a user of these online spaces, can explore to equip yourself against misinformation.

Resources for fighting misinformation

Media Literacy Resources

  • Media Literacy Now: A national organization focused on ensuring media literacy education is included in school curricula.
  • The News Literacy Project: A nonpartisan national education nonprofit offering programs that help people of all ages learn how to identify credible news and information.
  • CyberWise: Dedicated to helping parents and educators teach digital citizenship and online safety.

Understanding AI and Deepfakes

General Resources on Staying Informed

  • Reuters Fact Check: Provides accurate, unbiased fact-checking to help users understand what is true and what is not.
  • AllSides: Shows news coverage from different perspectives to help readers understand bias. 
  • Pew Research Center: Offers in-depth research on media, technology, and many other topics to help users stay informed. 

Finally, make sure you tune in to Click to Trust – you can check out our series on Misinformation and Elections to get the full scoop at https://clicktotrust.transistor.fm/ 

If you have any questions or topics you’d like us to cover on the podcast, send us a message on LinkedIn.

You can watch the full episode of Click To Trust here:

Meet the Author

Carmo Braga da Costa

Head of Content at TrustLab

Let's Work Together

Partner with us to make the internet a safer place.

Get in Touch