Lost in Translation? How Local Knowledge Makes or Breaks Trust & Safety


I have worked in the technology industry for over 20 years, in a variety of roles. As a journalist, and later as a Trust & Safety professional, language has always been one of my main concerns. How do we write professional texts, short or long, so that messages are communicated clearly? Have terms and conditions, advertising policies and community guidelines, often originally written in English, been translated correctly into other languages? And I always keep the audience in mind: who will read, process and interpret that content? In many cases, even a well-written text is meaningless if the reader lacks the context behind what is being said.

In the world of Trust & Safety, context and local knowledge are crucial so that text, video, audio, images and other formats are reviewed and labeled correctly (this applies to both ads and user-generated content). Incorrectly approving harmful content (ads selling weapons in countries where that is illegal, for example, or hate speech), as well as rejecting content that is perfectly valid and compliant with community guidelines, threatens the reputation of technology companies and may also bring legal repercussions.

Key challenges

Trust & Safety teams deal with daily challenges in both policy creation and content moderation. These include:

  • Volume: the amount of material to be reviewed is often huge. Even though processes are more automated and agile nowadays, the volume of daily content continues to be a challenge for Trust & Safety. 
  • Content types: text, image, video and audio - not to mention content that needs to be moderated in real-time, like livestreams. Policies need to cover all of these nuances and complexities, and moderators must be trained to identify prohibited content in each of these formats.
  • New technologies: it is part of the daily life of a Trust & Safety team to think about the benefits and harms of AI. For Trust & Safety, the popularity of AI usage in content creation represents yet another challenge and demands constant updating of policies to keep up with new trends.
  • Freedom of expression and ethical dilemmas: how do we balance safeguarding users and freedom of expression? It is a complex discussion that permeates many principles of content moderation on digital platforms, and also requires taking into account compliance with local legislation and social responsibility.
  • Transparency: platforms need to have clear policies so that both advertisers and users know what type of content they can promote or post. However, externally sharing fine details of how content is reviewed can provide clues as to how fraudsters can bypass moderation altogether.

Why local knowledge matters

We should certainly add cultural and local knowledge to this list of challenges. In an ideal world, native-speaker moderators would always review content in each language, minimizing interpretation errors. Unfortunately, this scenario is far from the reality of many companies, whether for budget or logistical reasons, as many platforms outsource content moderation to vendors with professionals located in different parts of the world. The use of AI in Trust & Safety is certainly an ally in the search for perfect moderation; however, it still cannot fully replace the closer look of a professional who experiences day-to-day life within a given country.

Based in Brazil, usually the only Trust & Safety specialist in the market for my company, and often the only native Portuguese speaker, I have experienced this need for local knowledge many times during my career and have always tried to help minimize the difficulty. I spent hours building lists of Portuguese words that could quickly show that an online ad promoted illegal practices or sold products and services that violated the platform's ad policies. For a content reviewer based in another country, with no knowledge of the local culture and no fluency in the language, it is easy to miss the more sophisticated and deliberately hidden signals that something is wrong with a particular ad, even after being trained on all the policies.

Acronyms, swear words, slang, irony, sarcasm, and pejorative terms used to address a person, group or nationality are not always easy to identify without cultural knowledge. Some words may seem harmless but become extremely harmful when linked to certain events and contexts. The opposite also occurs: a word labeled offensive by dictionaries or translation tools may have been reclaimed by certain communities and acquired a different, non-offensive meaning, without any official documentation of the shift. Imagine the challenges faced by a moderator who reviews content in several languages to detect potential hate speech violations. Translation tools cannot always interpret context correctly, or capture the nuances between countries, and produce accurate results. In Portuguese, for example, there are differences between the language spoken in Brazil and the language spoken in Portugal: 'bicha' is a slur in Brazil and can constitute hate speech (the word is used against the LGBTQIA+ community), but it is not offensive in Portugal, where it can simply mean a line or queue.
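To make the idea of locale-aware word lists a bit more concrete, here is a minimal sketch in Python. The FLAG_TERMS dictionary and the terms_to_escalate helper are illustrative names assumed for this example, not TrustLab tooling; in practice a matched term would route content to a native-speaker reviewer rather than trigger automatic removal.

```python
# Minimal sketch: keep flag terms keyed by locale, not just by language,
# so a word like "bicha" is escalated for pt-BR but not for pt-PT.
# The term lists and helper below are illustrative assumptions.

FLAG_TERMS = {
    "pt-BR": {"bicha"},  # slur in Brazilian Portuguese; escalate for hate-speech review
    "pt-PT": set(),      # the same word simply means "queue" in Portugal
}

def terms_to_escalate(text: str, locale: str) -> set[str]:
    """Return the locale-specific flag terms found in the text."""
    words = {w.strip(".,!?;:").lower() for w in text.split()}
    return words & FLAG_TERMS.get(locale, set())

# The same sentence is escalated in one locale and not in the other.
post = "Entrei na bicha do banco"
print(terms_to_escalate(post, "pt-BR"))  # {'bicha'} -> route to a native-speaker reviewer
print(terms_to_escalate(post, "pt-PT"))  # set()     -> no escalation needed
```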

Elections, local and viral content

A topic that always dominates digital platforms and represents major challenges for Trust & Safety is elections. It applies to countries of all sizes and all types of government: the USA, India, Bangladesh, Japan, Brazil, Venezuela or El Salvador, to name just a few of the elections held in 2024. Platforms deal with political content in different ways: some prohibit it completely, some partially allow it, and others are more permissive. The way users consume political content on platforms also varies. Whatever approach you take to such a controversial subject, local knowledge is essential to identify and monitor posts, comments and ads on the topic more accurately. It is therefore advisable to establish processes that minimize these difficulties (you can learn more about how to fight election misinformation in this blog post) and to provide as many local materials as possible to help moderators make faster and more informed decisions. Creating documents that list the names of all candidates for a given political position and their possible nicknames, including recent and older photos that make identification easier, along with information about their party affiliations and about family members with some kind of political activity, can definitely improve the quality of reviews.
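As a rough illustration of what that kind of local reference material could look like in structured form, here is a sketch assuming a hypothetical CandidateProfile record. The field names and the matches_candidate helper are assumptions for the example, not an actual platform schema.

```python
# Illustrative structure for the local election reference material described above,
# so moderators can match posts and ads against candidates quickly.

from dataclasses import dataclass, field

@dataclass
class CandidateProfile:
    full_name: str
    nicknames: list[str]            # common and historical nicknames
    party: str
    office_sought: str
    photo_urls: list[str] = field(default_factory=list)   # recent and older photos
    politically_active_relatives: list[str] = field(default_factory=list)

def matches_candidate(text: str, profile: CandidateProfile) -> bool:
    """Check whether a post or ad mentions the candidate by name or nickname."""
    lowered = text.lower()
    names = [profile.full_name] + profile.nicknames
    return any(name.lower() in lowered for name in names)
```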

This discussion also applies to topics that are not as controversial as hate speech or elections. Consider how many TV shows, celebrities, influencers, musicians, bands, sports competitions, and local holidays there are. Also consider a topic or video that goes viral in a country for a specific reason and starts to dominate the platforms for a few days. Moderators based elsewhere are unlikely to understand the reason for this flood of content related to the local event, which can also affect their decision-making regarding the content they’re reviewing, resulting in slower or even incorrect decisions. Take for example the terrorist attacks that took place in Christchurch, New Zealand, in 2019, when a gunman broadcast the attack live on Facebook. The videos began to spread across platforms at a frightening speed and, initially, moderators had little information at their disposal to make decisions about the content. Was the video real or a work of fiction? Was it gratuitous violence? Was it terrorism? A hate crime? New Zealanders certainly had more information about what was happening. When it comes to cases with global repercussions, those initial hours are crucial for decision making and preventing the content from spreading uncontrollably.

Bias and policy design

Another example of how cultural aspects can influence moderation decisions is bias. We all have biases. We are human beings and we make daily judgments about everything that surrounds us, and I say that with no judgment. It's cultural. We live by the rules and values of our country and the community we belong to; we incorporate what we learn and rely on it when making decisions. It's no different for content moderators, who can make decisions influenced by unconscious bias even with a platform's community guidelines at their fingertips. Rejecting content they perceive as 'too explicit' by their own country's standards, even if the platform allows it, is something that can certainly happen. Since this bias can also be reproduced in models trained on moderators' decisions, investing in unconscious bias training for all Trust & Safety professionals is crucial to minimizing mistakes and improving metrics.

So far we've considered the importance of content moderators with local knowledge, but we also need to take a step back and discuss the process of creating community guidelines and ad policies, something that is within the scope of Trust & Safety. It is perfectly understandable that platforms create rules that can be applied globally, as this facilitates training and scalability. However, some policies may not make much sense in certain countries. Take sports betting sites as an example, a subject that has been attracting more and more attention because of the potential financial and psychological implications for users (excessive debt, addiction, etc.). Some countries already have well-developed legislation on this matter, with clear rules and definitions about what can and cannot be done, while others are still struggling. What to do in these cases? Create a global policy that prohibits any content (user-generated content or ads) on the topic, or work on something more granular, with a more detailed analysis for each region? The second approach seems the most reasonable, as it can make platforms more attractive and safer for users, even if it represents more work for Trust & Safety teams.
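To illustrate the more granular option, here is a small sketch of a per-country policy lookup with a conservative global fallback. The BETTING_POLICY table, its country entries and the resolve_policy helper are purely illustrative assumptions, not a statement about any country's actual legislation.

```python
# Sketch of a region-aware policy: resolve a vertical such as sports betting
# per country instead of with a single global rule. All values are illustrative.

BETTING_POLICY = {
    "default": "prohibited",        # conservative global fallback where rules are unclear
    "BR": "allowed_with_license",   # example: allowed only for licensed operators
    "GB": "allowed_with_license",
    "US": "restricted",             # example: varies internally; route to specialist review
}

def resolve_policy(country_code: str) -> str:
    """Return the betting-content policy for a country, falling back to the global default."""
    return BETTING_POLICY.get(country_code, BETTING_POLICY["default"])

print(resolve_policy("BR"))  # allowed_with_license
print(resolve_policy("XX"))  # prohibited (fallback)
```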

Let’s build safer digital spaces together

Trust & Safety is not an exact science. We will continue to come across gray-area cases in which even people on the same team may disagree about an enforcement decision. Not everything is 100% right or 100% wrong, but that doesn't mean we shouldn't keep improving processes to provide safer platforms for users. And this definitely includes relying on the expertise of global teams with different linguistic and cultural backgrounds. If you are facing these challenges, reach out – TrustLab would love to help!

Meet the Author

Viviane Rozolen

Trust and Safety Specialist at TrustLab

