By Tom Siegel

The Big Problem that Big Tech cannot solve


We post, ping, gram, snap, tweet, and search millions of times a day. But a lot of what we consume is misleading, inflammatory, or fraudulent: simply bad content. At the same time, content that is objectionable or controversial to some may disappear without warning or recourse. The combined conundrum of too much bad and not enough good is a big problem that Big Tech cannot solve.


The bad news? It's getting worse every day, and as a society we're paying a big price. The good news? It is fixable, and we have the means to do it. But Big Tech alone has not, cannot, and will not solve the problem. As Google's senior Trust & Safety executive, in charge of these efforts for many years, I am as responsible as anyone for the misses. Trust & Safety teams across the industry are well intentioned and try hard, but the results just aren't good enough. Easy to fix? No. Possible? Yes. Here is how.


 

The Big Tech problem

It is unhealthy and dangerous for society when a small number of large social media companies control the vast majority of public conversation online. Small apps developed by young founders in their dorm rooms quickly became giant corporations whose power exceeds that of any single country or organization many times over. They influence the outcome of elections, revolutions, and wars, and what we talk about, hear, and think about every day. With such great power comes great responsibility, and Big Tech sadly cannot live up to it. In today's world, Big Tech decides unilaterally who speaks, what can be said, and who hears it. With a few lines of code, they shape how billions of people communicate, interact, and think. As more and more of our public conversation shifts online, the stakes are higher than ever.


Big Tech is unable to fix its big content problem alone. A solution will require a coordinated effort across many partners that makes this a global priority. As with climate change, the future of our collective wellbeing is at stake.


The Digital Wild West: Consistent, predictable, and fair application of content moderation standards doesn't exist. Bad content is easily found on any platform at any time, and good content constantly disappears without notice. The inability of humans and machines to apply content rules correctly, combined with the staggering volume and speed of content distribution, has created a Digital Wild West with no consistent enforcement to speak of. For example, Facebook readily admits that 10% of its 2 million daily enforcement decisions could be wrong: 200,000 wrong decisions a day. And Facebook is among the most effective moderators around; the average error rate across the industry is certainly much higher. Users have little recourse if they have been wronged. Policies are fuzzy at best and constantly changing, removals and inaction go unexplained, and appeals and complaints are mostly ignored. This leaves the internet in a state of lawlessness where everyone stands alone.
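To put those numbers in perspective, here is a minimal back-of-the-envelope sketch in Python. The 2 million daily decisions and the 10% error rate are the Facebook figures cited above; the higher industry-wide error rate is purely an assumption for illustration, not a published statistic.

    # Back-of-the-envelope scale of moderation errors (illustrative only).
    DAILY_DECISIONS = 2_000_000          # Facebook's reported daily enforcement decisions
    FACEBOOK_ERROR_RATE = 0.10           # error rate Facebook itself acknowledges
    ASSUMED_INDUSTRY_ERROR_RATE = 0.20   # hypothetical assumption for the rest of the industry

    wrong_per_day = DAILY_DECISIONS * FACEBOOK_ERROR_RATE
    wrong_per_year = wrong_per_day * 365

    print(f"Wrong decisions per day:  {wrong_per_day:,.0f}")   # 200,000
    print(f"Wrong decisions per year: {wrong_per_year:,.0f}")  # 73,000,000
    print(f"At an assumed 20% industry error rate: "
          f"{DAILY_DECISIONS * ASSUMED_INDUSTRY_ERROR_RATE:,.0f} wrong decisions per day")

Even at Facebook's own numbers, the errors compound to tens of millions of wrong calls per year on a single platform.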


The Unwilling New Kings: Letting platforms decide unilaterally what online content stays up or comes down is deeply flawed and undemocratic, best compared to monarchs ruling kingdoms. With most public discourse happening on only a handful of platforms, each run by a now middle-aged techie man, failures of judgment, abuse of power, and bias toward their own interests and values at the expense of society are not just likely, but certain. A small group of digital kings makes unilateral decisions about which speech is allowed, who can speak, and how rules are enforced, with little transparency and no accountability. They didn't ask for, plan for, or prepare for such enormous power, but they are wielding it nevertheless. Societies in the 21st century grant large powers to their leaders through democratic elections and hold them publicly accountable. For speech on Big Tech platforms, there are no meaningful checks and balances.


Safeguarding the Pot of Gold: Social media companies and other tech platforms are private enterprises that have built amazing for-profit businesses, among the largest and most profitable in human history. For all their power, fixing society's problems is not their primary concern. Platforms optimize for growth in user engagement and revenue, and increasingly for protecting their profitable business models from disruption. And under the status quo, unhealthy content means big money. Silencing inconvenient voices means less brand risk. Algorithmic amplification and filter bubbles mean more user engagement. Why change?

Free World Governments on the sidelines

Governments are missing in action in one of the most consequential societal challenges of our time. They play no meaningful role in effecting change for the better or shaping the future. While seemingly every other aspect of public life gets increasingly regulated, from consumer goods to nuclear power generation to new cereal bars, the internet remains largely regulation-free despite the huge risks to citizens' health and wellbeing. Government inaction and politicization have led to unchecked growth of harmful content of all types and to Big Tech censorship creep.

Regulation lags as harm increases. Governments in the free world have sidestepped their responsibility to keep their citizens safe online, as is evident in the rampant abuse and ubiquity of unsafe content. No meaningful regulation or standards have emerged to make the web safer, harm on social media platforms has only increased, and the debate over political nuances continues.


Regulation is used to justify censorship. Where regulations are being implemented, they often result in significant overreach and the watering down of basic rights. Internet regulation designed to suppress dissent rather than keep citizens safe is widespread and easily spotted, with "safety" becoming the leading excuse for restricting online speech. Oppressive governments are unsurprisingly wielding this power with ever-growing zeal. And increasingly, it is moderate states and former role models for free speech that are following this dangerous path, most recently under the guise of the Covid pandemic. India's new internet law, mandating that platforms remove anything the government deems illegal, is just one example. Overreach is just as harmful as inaction. Those two can't be the only options at our disposal.


Politicization kills progress. Policymakers are more divided than ever over how to fix the problem. In the US, many left-leaning policymakers believe there is too much bad content online, while right-leaning politicians think freedom of speech is being unjustly limited. An unwillingness to prioritize public welfare over party politics makes a broadly supported and sustainable legislative compromise unlikely. This failure may be the biggest tragedy of all: even when faced with one of the greatest collective challenges of our lifetime, politicians are unable to come together for a solution. And in the unlikely event that they do, the sheer complexity of legislating safety and freedom online, amid new technological advances, increasing dependence on machine automation, and rapidly changing trends, makes effective regulation or standards an elusive goal. Adding to the challenge, policymakers currently lack the expertise, time, data, and long-term incentives to assess the trade-offs, opportunities, and threats well enough to develop effective legal safety frameworks by themselves.


Users as collateral

Future generations may look back on social media technology the way we now see smoking or drunk driving: what was once viewed as normal or acceptable behavior is now known to be deadly. Big Tech has normalized our collective access to unhealthy content and consumption patterns, continuously fed by unchecked algorithms that get better at manipulation and amplification every day. The dangers are widely known, yet most people behave no differently despite the enormous risk. The cost to society is immeasurable, as evidenced by the increased polarization, threats, and violence online (and increasingly offline) that are undermining the very foundation of our democracies and our values as a free world.

TIME FOR ACTION

The path to a healthy internet means keeping users safe from harm, protecting their right to free expression, and helping Big Tech rebuild trust. The current system of internet health and safety is destructive and deeply broken. But the situation is not hopeless. As a society, we have a path and the means to fix it; we just can't rely on Big Tech to fix itself. Industry standards and collaboration, effective government involvement, and independent Trust & Safety services are a viable path forward. Here is a five-step action plan.


1. Create TRANSPARENCY to build trust

Given the importance of online media and user-generated content in our lives, we deserve full and objective transparency about how bad things are. Currently, platforms have no idea how much bad content goes undetected or how much good content they remove, and users have no way to judge the risk on those platforms. People engaged in public discourse online mostly have no means of appealing the removal of their content to a neutral party (a notable exception is Facebook's Oversight Board, which attempts this on a small scale). This blindness is toxic and prevents the necessary discussion about how to fix the problem. Often-touted "successes", like the removal of millions or even billions of pieces of bad content, are meaningless without knowing how much bad content still exists and how much good content was wrongly removed in the process. Big Tech self-auditing (as enacted in the EU) and self-serving "transparency" reports are equally meaningless. For regulators and the media to accept those measures as sufficient is appeasement at best and negligence at worst.


Solution: Regular independent reporting. A regularly conducted, openly shared, and scientifically validated assessment of the health of social media and user-generated content, carried out by reputable 3rd parties. The process should be based on proven data science principles, conducted by domain experts, and explained in language the public understands. This transparency baseline is fundamental to any meaningful change.
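As one illustration of what such reporting could look like in practice, an independent auditor might estimate the prevalence of harmful content by labeling a random sample of posts and publishing a confidence interval, rather than relying on raw takedown counts. The sketch below is hypothetical: the sample size, label counts, and function name are placeholders, and it uses a standard normal-approximation confidence interval purely for illustration.

    import math

    def prevalence_estimate(harmful_in_sample: int, sample_size: int, z: float = 1.96):
        """Estimate harmful-content prevalence from an independently labeled random sample.
        Returns the point estimate and an approximate 95% confidence interval
        (normal approximation to the binomial)."""
        p = harmful_in_sample / sample_size
        margin = z * math.sqrt(p * (1 - p) / sample_size)
        return p, max(0.0, p - margin), min(1.0, p + margin)

    # Hypothetical audit: 10,000 randomly sampled posts, 120 labeled harmful by expert reviewers.
    point, low, high = prevalence_estimate(120, 10_000)
    print(f"Estimated prevalence: {point:.2%} (95% CI: {low:.2%} to {high:.2%})")

The point is not the specific statistic but the principle: a transparent, repeatable measurement that outsiders can verify, instead of numbers the platforms choose to share.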



2. DEMOCRATIZE the mechanisms of control

Effective content moderation is critical for a well-functioning society. It is irresponsible to leave this task to a small group of people inside Big Tech, accountable only to shareholders and hidden from the public. That is what we have done to date. It hasn't worked.

Solution: Independent 3rd-party content moderation. Responsibility for content moderation should be taken away from Big Tech and given to licensed 3rd parties beholden to principles and standards jointly defined by industry and public institutions. These independent 3rd parties should be accountable for enforcing consistent standards across platforms and for ensuring transparency and due process. Unlike private social media platforms, independent entities have the incentives and the long-term perspective to fix this problem holistically. However, this will only work if they are truly independent from Big Tech and operate on principles audited by democratic institutions.

3. Establish common sense REGULATIONS, INDUSTRY STANDARDS and TESTING

Thoughtful and balanced regulation is imperative to better define the rules of speech on the web, the same way it is done in physical settings. We can't let individual companies be their own judge, and no private entity should assume this responsibility alone. Governments need to be held accountable for providing guardrails that protect their citizens. Some of the recent proposals in the EU are a promising start. Leading democracies need to rally and create models for others to follow.

Equally important, Big Tech needs to rally together to define standards for the tech industry that are then universally accepted and followed by industry participants. Such guidelines and expectations, grounded in the concept of user harm, developed in public-private partnerships, and championed by industry groups and practitioners, are common in many other industries. They have proven effective in finding common ground and ensuring accountability without the need for heavy-handed government regulation.

Content recommendation and removal algorithms are a particularly risky development: they have a huge influence on how users experience content and on the harm that follows. As such, they need to be carefully vetted prior to launch to test whether user harm is mitigated. Algorithms' power to amplify content needs to be restricted to features where the impact is understood, auditable, and open to intervention after release, as sketched below. This common-sense approach is already implemented in almost every other sector of society where harm occurs, such as consumer goods, finance, or food safety.
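In principle, a pre-launch audit gate for a ranking change could be as simple as the hypothetical sketch below: before a new recommendation algorithm ships, its measured harm exposure on a test population is compared against the current system, and launch is blocked if the increase exceeds an agreed threshold. Every name, number, and threshold here is an assumption for illustration, not an existing framework.

    # Hypothetical pre-launch audit gate for a content-ranking change (illustrative only).
    MAX_ALLOWED_HARM_INCREASE = 0.01  # assumed standard: at most a 1% relative increase in harm exposure

    def audit_gate(baseline_harm_rate: float, candidate_harm_rate: float) -> bool:
        """Return True if the candidate algorithm may launch under the assumed standard."""
        relative_increase = (candidate_harm_rate - baseline_harm_rate) / baseline_harm_rate
        return relative_increase <= MAX_ALLOWED_HARM_INCREASE

    # Example: harm-exposure rates measured on a held-out test population by an independent auditor.
    if audit_gate(baseline_harm_rate=0.004, candidate_harm_rate=0.006):
        print("Launch approved: harm exposure within the agreed threshold.")
    else:
        print("Launch blocked: candidate increases harm exposure beyond the agreed threshold.")

The design choice mirrors pre-market testing in other regulated industries: the burden of proof sits with the party shipping the change, and the pass/fail criterion is set before launch rather than negotiated afterwards.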


Solution: A Dedicated Government Agency to promote healthy choices and trade-offs in content moderation, encourage standards for Big Tech and 3rd-party Trust & Safety service providers to follow, and verify and enforce compliance. It would oversee efforts to create algorithm-testing frameworks and auditing capabilities for machine-based content recommendation and amplification. It would generously fund the technical advances needed for better tools, systems, and more effective policies. It would be overseen by leaders who are appointed by democratically elected officials and accountable to the public. Standards should be developed through public-private partnerships under the umbrella of an industry association, by individuals committed to the public good. Data sharing among Big Tech platforms to improve abuse detection would enable consistent enforcement, broader coverage, and fairness.

4. Create INCENTIVE alignment for Big Tech

User and societal wellbeing need to be the guiding principles for healthy online content practices. Currently, they aren't what Big Tech is optimizing for, making intervention and a rebalancing of incentives a necessity. An independent safety infrastructure that can deliver transparency, democratize the mechanisms of control, establish widely accepted industry standards, and conduct safety testing will require significant, long-term financial resourcing. This should be provided by Big Tech, which currently operates highly profitable business models while exposing users to widespread harm and safety problems.

Solution: Big Tech Taxes and Penalties to fund independent internet infrastructure. Platforms need to contribute the funding to build a stable, capable, and independent content moderation ecosystem without exercising any control or influence over it. Independent control agencies and oversight should be financed through a corporate tax on monetized user-generated content. Ongoing serious infractions against public standards under Big Tech's watch should carry stiff penalties, including the loss of the right to manage the affected businesses independently. Such actions are appropriate and in line with the risk and harm that operating unsafe consumer products poses to users and society at large.


5. Make individual RESPONSIBILITY count

None of us should tolerate toxicity online or the disregard of good-neighbor standards. Norms of expression online should be aligned with our long-established standards for in-person communication and social interaction. This requires humanizing online communication and creating a level playing field for online and offline behavior. Each of us has a responsibility and needs to act accordingly: through awareness of online risks and a willingness to exercise the controls and choices we all have, to protect ourselves from online harm and from censorship overreach. We need to recognize and reject the breakdown of the common-sense norms we use in physical interactions, along with filter bubbles, echo chambers, and cancel culture, and replace them with the respect and kindness that are the hallmark of interpersonal relationships. Each of us plays a role, and our choices matter.


Solution: Education, awareness, and role modelling in schools, public forums, entertainment, and popular culture that promote online health and safety as an important public good.


The path forward

Bad content, indiscriminate removal, opaque policies and inconsistent enforcement have made online content moderation one of the most dangerous threats to societal health. But it isn’t too late to make changes, and what we need to do is clear. Instead of Big Tech resting on their limited achievements while everyone else points fingers and deflects responsibility, we should act together. And we don’t have a minute left to waste.

 

Trust Lab was founded two years ago by senior Trust & Safety executives from Google, YouTube, Reddit, and TikTok with a mission to make the web safer for everyone. Having each led Trust & Safety Engineering, Product, and Operations teams for over a decade, they build enterprise systems and tools that identify high-risk and unsafe content, accounts, and transactions at scale. Trust Lab's machine-learning-based classifiers and rules engines are combined with human insights to help clients better assess fraud and safety risks for content, identities, and transactions on their platforms. Trust Lab has deployed its technology with a broad range of clients, including some of the leading social networks, messaging companies, and marketplaces.


Tom Siegel is the CEO and Co-Founder of Trust Lab. Previously the VP of Trust & Safety at Google for over 14 years, Tom built its global team through all stages of growth into an industry-leading user protection and abuse-fighting organization with thousands of team members around the world. Tom's portfolio included content safety, privacy, and security protections for most of Google's products, including web search, ads, payments, accounts, Gmail, and cloud. As the company's most senior Trust & Safety executive, he set the company's strategy and was responsible for its execution. Tom serves on a number of Trust & Safety advisory boards and coordinates events to build cross-industry collaboration on key Trust & Safety issues.

