By Tom Siegel

The Moderation Paradox: Free Speech vs. Misinformation & Harmful Content

In the ever-evolving landscape of the digital age, where the internet has become a world within our world, striking a balance between safeguarding freedom of speech and protecting individuals from harm has become an intricate challenge.


TrustLab recently made headlines for tackling online misinformation, finding that X (formerly Twitter) has the highest ratio of mis- and disinformation content and actors among mainstream social media platforms.


Just last week, a violent explosion rocked the Al-Ahli Baptist Hospital in Gaza City, killing and injuring hundreds. Amidst all the chaos, false information spread quickly online, adding to the confusion and anger and leaving us with more questions than answers.


As we grapple with this delicate equilibrium, it is essential to understand the complex interplay between online content and the preservation of democratic values, like freedom of speech.


As new online regulations are introduced, the scope of speech limitations varies from one government to another. However, the primary objective of content regulation is universal: to mitigate harmful speech while safeguarding free expression.


This is central to new online regulations such as the Digital Services Act (EU), the Online Safety Bill (UK), and Australia's Online Safety Act.


However, as we advance with online safety regulations, the very essence of freedom of speech is called into question.


Where do we draw the line? How do we categorize something as misinformation? And who should decide? I’ll try to answer some of these questions today.


The Freedom of Speech Dilemma


This is the paradox we must navigate: you cannot remove content without impacting the right to free expression.


The right to freedom of expression is a cornerstone of many democratic nations, enshrined in the First Amendment of the US Constitution and in the EU's Charter of Fundamental Rights. Allowing people to express themselves freely is a fundamental right.


However, such liberty carries its own set of obligations, particularly when speech causes harm or jeopardizes the safety of others. The rise of harmful online content and misinformation has been unfolding since the inception of the World Wide Web.


Extreme forms of speech, such as the sharing of child sexual abuse material or the incitement of violence, can cause severe harm to others. However, issues like online bullying and hate speech present a complex web of differing opinions on how far freedom of speech should extend. It’s this gray area that poses the more difficult challenge.


Over the years, governments have increasingly stepped in to address the need for online safety laws. The COVID-19 health crisis elevated the danger of misleading information circulating online (a phenomenon known as an infodemic), leading governments and social media companies worldwide to implement various measures to counter misinformation, some of which limited speech.


Before the UK’s Online Safety Bill was finalized, sections that would have mandated big tech firms to delete "legal but harmful" material were cut. Opponents argued that these sections put free speech at risk and gave too much control to large tech corporations.


As a society, we must strike a balance between free expression and preventing direct and severe harm caused by speech. Drawing this line is an ongoing debate that will continue for years to come.


The Road to Regulation


Understanding how we arrived at this point is key.


The early days of the internet saw limited government involvement, reflecting the U.S. Constitution’s emphasis on non-interference with free speech. Section 230 of the Communications Decency Act further solidified this principle by protecting internet service providers from liability for user content.


However, this hands-off approach eventually gave way to a more proactive stance from regulators.


Challenges like terrorism recruitment, illegal product sales, and extreme hate speech led governments worldwide to push platforms to take more responsibility in protecting users. And TrustLab, among others, recognized the need to create a safer online environment.

While I acknowledge that governments regulating speech is far from ideal, it is a reasonable outcome when platforms have failed to self-regulate.


A more effective approach would be a scenario where the industry sets common standards and benchmarks for what is acceptable online.


This consistency would help ensure a safer digital space for all users. Additionally, separating commercial interests from content regulation is essential to prevent conflicts of interest and prioritize user safety.


Measurement is also important. We can't manage what we don't measure. There is a need for independent, unbiased measurement, such as the study we recently conducted with the European Union on the prevalence of misinformation.


Measuring and presenting data in this way allows us to make informed decisions when setting online standards and brings awareness to the spread of misinformation.


Our content moderation tools help keep your company compliant as new regulatory requirements emerge.


How We Identify Misinformation


Identifying harmful content, particularly misinformation, is a challenging task. TrustLab's guiding principle is simple: anchor our assessments in facts.


We leverage multiple sources for this:


1. Fact-Checking Information: Turning to reliable and unbiased fact-checking organizations.


2. Widely Trusted Government Resources: These often serve as credible, albeit not infallible, reference points.


3. Reputable News Outlets: Online sources and other authoritative publishers that are broadly credible and represent a range of viewpoints.


We synthesize these sources to make the most educated assessment possible of a claim’s likelihood of being true or false. In addition, we aim to understand the sentiment toward certain claims across large groups on the internet and reduce the impact of bias.
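To make this concrete, here is a minimal sketch of how verdicts from these source types might be rolled up into a single truth-likelihood score. The source types mirror the list above, but the weights, scoring scale, and function names are illustrative assumptions rather than TrustLab's actual pipeline:

from dataclasses import dataclass

@dataclass
class SourceSignal:
    """One source's verdict on a claim, scored from 0.0 (false) to 1.0 (true)."""
    source_type: str  # e.g. "fact_checker", "government", "news_outlet"
    verdict: float    # how strongly this source supports the claim
    weight: float     # hypothetical trust weight for this source type

def assess_claim(signals: list[SourceSignal]) -> float:
    """Combine weighted verdicts into one likelihood-of-truth score."""
    if not signals:
        raise ValueError("cannot assess a claim with no source signals")
    total_weight = sum(s.weight for s in signals)
    return sum(s.verdict * s.weight for s in signals) / total_weight

# Example: two fact-checkers rate the claim false; one outlet is uncertain.
signals = [
    SourceSignal("fact_checker", verdict=0.1, weight=1.0),
    SourceSignal("fact_checker", verdict=0.2, weight=1.0),
    SourceSignal("news_outlet", verdict=0.5, weight=0.5),
]
print(f"Likelihood of truth: {assess_claim(signals):.2f}")  # 0.22

A weighted average is only one of many reasonable aggregation choices; the point is that every score traces back to identifiable, weighted sources.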


We also categorize claims into three types:


1. Verifiable Facts: These can be checked against data and historical records.


2. Likely Scenarios: We use technology to assess the likely truth based on available information.


3. Perceptions: These are subjective views that can differ significantly among individuals.


Sorting claims into these three types helps us understand the nature of each claim and the challenges it presents.


Some claims are straightforward, with clear, verifiable data to confirm accuracy. Others are more nuanced, with a majority agreeing on their likelihood. The most challenging category is where claims are based on perception, making it crucial to separate opinions from facts.
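As a small illustrative sketch, the taxonomy could be expressed in code, with each claim type routed to a different review step. The type names mirror the list above; the routing logic is a hypothetical simplification:

from enum import Enum, auto

class ClaimType(Enum):
    VERIFIABLE_FACT = auto()  # checkable against data and historical records
    LIKELY_SCENARIO = auto()  # assessed probabilistically from available evidence
    PERCEPTION = auto()       # subjective view that varies between individuals

def review_step(claim_type: ClaimType) -> str:
    """Route each claim type to the review it needs (illustrative only)."""
    if claim_type is ClaimType.VERIFIABLE_FACT:
        return "check against primary data and records"
    if claim_type is ClaimType.LIKELY_SCENARIO:
        return "estimate likelihood from corroborating sources"
    return "label as opinion rather than fact-checking it"

print(review_step(ClaimType.PERCEPTION))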


💡 Note: Misinformation vs Disinformation


Misinformation: False or misleading content shared without harmful intent, though the effects can still be harmful. An example would be claiming that the pizza shop on your street is the best in the world (when it’s not).


Disinformation: False or misleading content spread with the intention to deceive or to secure economic or political gain, and which may cause public harm. Common targets include medical information, socially divisive topics, and elections.






Opinions Have Their Place


We must allow space for opinions and even divisive discourse online. After all, our differing perspectives are what make us human.


But it’s equally important to distinguish between opinions and facts in an era where the lines between them are increasingly blurred.


As we engage in ongoing debates and online regulations evolve, let us remain committed to this critical dialogue and work towards developing industry standards, measuring online misinformation, and giving people more control over what they want to see and not see online.


Our commitment at TrustLab is to make this complex web a little easier to navigate - by working with social media platforms, government institutions, and others - all while respecting the rich tapestry of human experience.


This is a big challenge for society that we can only solve together.


