The Ripple Effects of War on Trust & Safety Work

War fundamentally reshapes the landscape of Trust & Safety work, demanding rapid adaptation, clear thinking, and a deep sense of responsibility. Its impact doesn’t stop at borders or battlefields: it reaches deep into digital spaces, disrupting the platforms we strive to keep safe. Sitting at my desk in Buenos Aires, I often feel the weight of conflicts unfolding thousands of kilometers away. I think about the people living through them: those forced to flee, those injured or killed, families torn apart. War’s devastation reaches far beyond the battlefield—disrupting global trade, driving up prices, and fueling a quiet anxiety that creeps into daily life, even in places untouched by violence.

Since the war in Ukraine began, Argentina (especially Buenos Aires) has seen a surge in Russian immigration, a quiet but powerful reminder that the effects of war can appear on the other side of the world. This piece reflects on what I’ve seen and learned: how conflict alters the nature of online harm, how it tests the limits of our systems, and how Trust & Safety must evolve, not just to respond to crises, but to anticipate them, building resilient, proactive strategies that protect people before harm unfolds.

From Moderation to Mitigation: Trust & Safety Under Fire

War demands immediate adaptations in policies related to content moderation, misinformation control, platform abuse, and user protection. Teams operate under intense pressure with limited or evolving information, making decisions that can have far-reaching consequences. Platforms are frequently caught between conflicting demands: protecting free expression while preventing harm; complying with government mandates while maintaining neutrality. For example, TikTok took the unprecedented step of suspending livestreaming and new content uploads in Russia after a controversial "fake news" law was passed, demonstrating how quickly platforms must disable features that pose risks during periods of conflict or legal volatility.
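
To make that concrete, here is a minimal, purely illustrative sketch of the kind of regional "kill switch" such a decision implies. The class, feature names, and regions are invented for illustration; real platforms route these decisions through configuration systems, legal review, and audit logging rather than application code.

```python
# Hypothetical regional feature gate: which product features are disabled
# in which jurisdictions, and why. Names and structure are illustrative only.
class FeatureGate:
    def __init__(self):
        self._disabled = {}  # region code -> set of disabled feature names

    def disable(self, region, feature, reason):
        self._disabled.setdefault(region, set()).add(feature)
        print(f"[audit] disabled {feature!r} in {region}: {reason}")

    def is_enabled(self, region, feature):
        return feature not in self._disabled.get(region, set())


gates = FeatureGate()
gates.disable("RU", "livestreaming", "legal volatility after 'fake news' law")
gates.disable("RU", "new_uploads", "legal volatility after 'fake news' law")

print(gates.is_enabled("RU", "livestreaming"))  # False
print(gates.is_enabled("AR", "livestreaming"))  # True
```

The hard part is not the switch itself but deciding when to flip it, for whom, and how to communicate it, which is why these calls sit with policy and legal teams as much as with engineering.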

For those of us working in T&S, these disruptions aren’t abstract. They surface directly on the platforms we help safeguard, and the line between online and offline harm blurs fast. 

The Hidden Cost: Emotional Strain on T&S Teams

Platforms must balance the need to document atrocities with policies against graphic content. Moderating graphic war content (such as violent imagery or footage documenting potential war crimes) presents an ethical dilemma. Removing it can erase evidence, but leaving it up may retraumatize viewers or violate platform standards. The rise of AI-generated images further complicates moderation, blurring the line between real evidence and synthetic propaganda. And let us not forget the emotional burden on T&S teams themselves: prolonged exposure to violent or distressing content, particularly during conflicts, can lead to trauma and burnout.

Systemic Challenges in Trust & Safety During War and Conflict

War doesn’t just create new forms of digital harm; it exposes the structural limitations of the systems meant to contain it. These aren’t isolated issues: they’re systemic stress tests that reveal where policies, tools, and global consistency fall short. In this section, we examine the multifaceted challenges facing Trust & Safety during times of war and why a more adaptive, equitable, and resilient approach is urgently needed.

1. Legal Challenges: Sanctions, Privacy, and Security

Armed conflict transforms the legal and regulatory environment for online platforms almost overnight. T&S teams are often thrust into a high-stakes balancing act, navigating rapidly shifting laws while trying to protect users and preserve access to vital information.

  • Sanctions – Navigating Legal and Ethical Minefields:
    One of the most complex challenges during war is the imposition of legal, economic, and trade sanctions. These restrictions require platforms to adapt rapidly, often in real time, to shifting sanctions regimes that limit access, transactions, or services in certain regions or for specific individuals and organizations. Teams must ensure compliance without unjustly penalizing civilians or cutting off critical information flows that can guide civilians to safety.

    For example, during the Russia-Ukraine conflict, tech companies had to enforce sanctions on Russian entities, which impacted everything from ad serving and payment processing to account access. These decisions come with difficult trade-offs, and teams face pressure from governments, activists, and users alike, highlighting the challenge of balancing legal obligations with fairness and human rights.
  • Privacy Concerns:
    As tensions rise, so do government demands for user data—often under the banner of “security.” These requests can put civilians, journalists, and activists at risk. Trust & Safety teams must rigorously assess the privacy implications of data compliance and advocate for protective safeguards in conflict regions. For instance, the Iran-Israel conflict highlighted a harsh truth: end-to-end encryption does not guarantee digital privacy. Despite the widespread use of encrypted messaging apps like WhatsApp, Signal, and Telegram, users remain vulnerable to surveillance, hacking, and metadata exploitation (the sketch after this list illustrates how revealing metadata alone can be).
  • Security Concerns:
    Seemingly benign user-generated content can be weaponized for military intelligence, underscoring the need for stronger content governance in sensitive environments. For example, public social media posts by Israeli soldiers helped Hamas map and target the Nahal Oz base during the October 7 attack, prompting a ban on photos and social media use within IDF facilities to prevent future intelligence leaks. Amid rising tensions with Israel, Iran has resumed its practice of restricting internet access during unrest, causing prolonged blackouts that not only hinder communication and digital rights but also endanger civilians seeking safety and critical information during crises.
  • Phishing and Cybercrime Targeting Conflict Zones:
    Cyberattacks against governments, NGOs, journalists, and civilians increase dramatically during conflicts, often aimed at stealing sensitive information or disrupting communications. The recent cyber warfare incidents linked to the Ukraine conflict illustrate this surge. For example, Google’s Threat Analysis Group actively monitored and countered state-backed threats (e.g., Fancy Bear, Ghostwriter), expanded Project Shield to protect hundreds of Ukrainian government, news, and humanitarian sites from DDoS attacks, and donated 50,000 Workspace licenses and 5,000 security keys to bolster digital resilience.
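
The metadata point in the privacy bullet above is worth making concrete. The following is a minimal, hypothetical sketch (synthetic records, invented identifiers) of why metadata exploitation matters even when content is end-to-end encrypted: sender, recipient, and timestamp alone are enough to reconstruct who talks to whom, how often, and at what hours.

```python
from collections import Counter, defaultdict
from datetime import datetime

# Synthetic message metadata: no content at all, only sender, recipient,
# and timestamp, roughly what a network observer or compelled provider
# might obtain even when the messages themselves are end-to-end encrypted.
metadata = [
    {"sender": "journalist_a", "recipient": "source_x", "ts": "2024-03-01T22:14:00"},
    {"sender": "journalist_a", "recipient": "source_x", "ts": "2024-03-01T23:02:00"},
    {"sender": "journalist_a", "recipient": "editor_b", "ts": "2024-03-02T08:30:00"},
    {"sender": "source_x", "recipient": "journalist_a", "ts": "2024-03-02T21:45:00"},
]

# Build a contact graph: how often each pair communicates and at what hours.
pair_counts = Counter()
active_hours = defaultdict(set)
for msg in metadata:
    pair = tuple(sorted((msg["sender"], msg["recipient"])))
    pair_counts[pair] += 1
    active_hours[pair].add(datetime.fromisoformat(msg["ts"]).hour)

for pair, count in pair_counts.most_common():
    print(f"{pair[0]} <-> {pair[1]}: {count} messages, active hours {sorted(active_hours[pair])}")
```

Even this toy analysis links a journalist to a source and shows when they communicate; at scale, the same technique maps networks of activists, aid workers, and their families, which is why requests for metadata deserve the same scrutiny as requests for content.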

2. Abuse, Scams, and Disinformation Ecosystem

Trust & Safety teams also have to brace for the inevitable surge of abuse and scams that arise in desperate times. Conflicts create fertile ground for bad actors exploiting fear, confusion, and goodwill.

3. Vulnerable Populations and Moderation Gaps

In times of war, the most vulnerable are often the most exposed, not just to physical danger but to digital exploitation. Refugees, children, and communities in under-resourced regions face unique online threats that are too often overlooked in platform safety strategies.

  • Fake Refugee and Migration Services:
    Scammers exploit displaced populations by offering fraudulent relocation assistance, fake travel documents, or bogus shelter options, often resulting in financial loss or exploitation. In more alarming cases, these scams can serve as gateways to human trafficking, with individuals lured under false pretenses and then coerced or abused. Trends include content impersonating legitimate non-profit organizations or government agencies, or offers to “fast track” immigration services. As underscored in the UNHCR Global Trends 2024 report, Iran is the largest refugee-hosting country in the world with some 3.5 million refugees, mainly from Afghanistan. If the conflict persists, existing refugee populations will face renewed uncertainty and yet more hardship.
  • Child Safety:
    Conflict-related themes are increasingly surfacing in digital environments frequented by children—including gaming platforms, social media apps, and educational livestreams. In some cases, young users are unknowingly exposed to politically charged content or drawn into virtual protests, forums, or simulations designed to promote extremist ideologies. Kids as young as 12 joined virtual pro‑Palestinian protests on Roblox. Minecraft servers and mods have also been used to recreate war zones or simulate ideological battles. These virtual spaces illustrate how even children can be swept into political dynamics, raising serious questions about moderation, digital literacy, and safeguarding youth online.
  • Local Moderation Gaps:
    Despite global platforms operating in nearly every region, moderation capabilities are still highly uneven. Conflicts in the Global South (such as those in Ethiopia or Myanmar) often receive slower responses or fewer dedicated resources than crises in the West. This discrepancy raises difficult questions about equity, prioritization, and whose safety truly matters online.

Technology's Role in War

In the midst of war and instability, technology has been weaponized, but it has also served as a lifeline for civilians caught in the crossfire. Social media has become a vital tool for civilians in conflict zones to share real-time information, coordinate aid, and document their experiences, often countering official narratives and supplementing traditional media. Mapping platforms like Google Maps share information about shelters. Encrypted messaging apps such as Signal and WhatsApp help civilians communicate securely and organize evacuations or aid distribution. But the openness of information sharing also allows hostile groups, including enemy forces and armed non-state actors, to monitor and exploit that same information.

The Role of AI

Artificial Intelligence is becoming an essential tool in the Trust & Safety response to conflict. AI systems help scale content moderation when human capacity is stretched, flagging potential violations faster than manual review alone. In volatile environments, machine learning models can detect patterns of coordinated disinformation, identify emerging threats, and surface harmful content across multiple languages. Beyond enforcement, AI also has the potential to support vulnerable populations—by enhancing their cybersecurity, translating critical safety information, or signaling safe evacuation routes in real time. But with this power comes responsibility. These systems must be designed with care, grounded in local context, and continuously audited to prevent bias or unintended harm. As conflicts evolve, so too must the AI tools we rely on, ensuring they not only enforce rules but actively contribute to safer, more resilient digital spaces.
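
As one illustration of the pattern detection described above, here is a minimal sketch (synthetic posts, hypothetical thresholds) of a simple coordination signal: many distinct accounts posting near-identical text within a short window. Production systems are far more sophisticated, layering multilingual embeddings, network features, and human review on top of signals like this.

```python
import re
from collections import defaultdict
from datetime import datetime, timedelta

def normalize(text):
    """Lowercase and strip punctuation so trivially edited copies collapse together."""
    return re.sub(r"[^\w\s]", "", text.lower()).strip()

def flag_coordinated_clusters(posts, min_accounts=5, window=timedelta(minutes=30)):
    """Flag clusters where many distinct accounts posted the same normalized
    text within a short time window, one common signal of coordination."""
    clusters = defaultdict(list)
    for post in posts:
        clusters[normalize(post["text"])].append(post)

    flagged = []
    for text, group in clusters.items():
        group.sort(key=lambda p: p["ts"])
        accounts = {p["account"] for p in group}
        span = group[-1]["ts"] - group[0]["ts"]
        if len(accounts) >= min_accounts and span <= window:
            flagged.append({"text": text, "accounts": len(accounts),
                            "span_minutes": span.total_seconds() / 60})
    return flagged

# Synthetic example: the same claim pushed by six accounts within minutes.
posts = [
    {"account": f"user_{i}", "text": "BREAKING: aid convoys are being turned away!!!",
     "ts": datetime(2024, 3, 1, 12, i)}
    for i in range(6)
]
print(flag_coordinated_clusters(posts))
```

Exact-match clustering like this misses paraphrases and other languages, which is where the multilingual models mentioned above come in, and organic virality can superficially resemble coordination, so flags of this kind should feed human review rather than automatic enforcement.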

From Reactive to Proactive Strategies

From adapting enforcement policies in real time to navigating sanctions, misinformation, scams, and abuse, Trust & Safety teams are operating in increasingly high-stakes environments. These challenges reveal a hard truth: many platforms are still reacting to conflict rather than preparing for it. Just as election integrity teams have embraced long-term planning, it’s time for conflict readiness to become a baseline expectation, not an afterthought.

This means building institutional muscle: scenario planning, simulations, multilingual escalation protocols, and sustained partnerships with local governments and humanitarian organizations. It also requires a deep culture change: training teams not only in moderation, but in crisis response, trauma literacy, geopolitical awareness, and ethical decision-making under pressure. Global equity must be part of the conversation. Too often, response capacity is dictated by market size and familiarity rather than urgency or harm. Platforms will need to rethink how they allocate resources, especially in regions facing chronic instability and underrepresentation.


Looking forward, Trust & Safety must evolve to meet a world where conflict is a constant backdrop. Crisis-focused Trust & Safety is emerging as its own specialization, requiring dedicated skills and tools. Beyond the technical and policy challenges, we must not overlook the human toll on our peers doing this work. Constantly reviewing graphic and traumatic content, while making difficult decisions with life-altering consequences, takes a significant emotional toll that can lead to burnout. That’s why investing in robust support systems—adequate staffing, counseling services, peer networks, flexible schedules, and a culture that prioritizes mental health—is not optional. It’s essential to keep teams effective, resilient, and compassionate when the pressure is greatest.

Meet the Author

Cecilia Rodriguez

TrustLab Policy Team Member
