Tom Siegel

The Big Problem that Big Tech cannot solve

We post, ping, gram, snap, tweet and search millions of times a day. But a lot of what we consume is misleading, inflammatory, fraudulent -- simply bad content. Conversely, content that is objectionable or controversial for some may disappear without warning or recourse. The combined conundrum of too much bad and not enough good is a big problem made bigger by Big Tech. The bad news? It’s getting worse every day, and as a society we’re paying a big price. The good news? It is fixable, and we have the means to do it. But Big Tech has not, cannot and will not solve the problem alone. Easy? No. Possible? Yes. Here is how.



The Big Tech problem

It is unhealthy and dangerous for a society when a small number of large social media companies control the vast majority of public conversation online. Their power exceeds that of any single country or organization many times over. They influence the outcome of elections, revolutions and wars, and what we talk, hear and think about every day. With such great power comes great responsibility, which Big Tech sadly does not live up to. They decide unilaterally who gets to speak, what can be said, and who hears it. With a few lines of code, they impact how billions of people communicate, interact and think. As more and more of our public conversation shifts online, the stakes are higher than ever. Big Tech has created a big content problem, and they’re unable to fix it. A solution will require a coordinated effort across many stakeholders to make this a global priority. Similar to climate change, the future of our collective wellbeing is at stake.


The Digital Wild West: Consistent, predictable and fair application of content moderation standards doesn’t exist. Bad content is easily found on any platform at any time, and good content constantly disappears without notice. The inability of humans and machines to apply content rules correctly, combined with the staggering volume and speed of content distribution, has created a Digital Wild West with no consistent enforcement to speak of. For example, Facebook readily admits that 10% of its 2M daily enforcement decisions could be wrong -- 200,000 wrong decisions a day! And Facebook is among the best enforcers around; the average error rate across the industry is much higher. Users have little recourse if they have been wronged. Policies are fuzzy at best and constantly changing, removals and inaction go unexplained, and appeals and complaints are mostly ignored, leaving the internet in a state of lawlessness where everyone stands alone.


The New Age Kings: The process for deciding what online content stays up or comes down is deeply flawed and undemocratic, best compared to monarchs ruling kingdoms. With most public discourse happening on only a handful of platforms, each run by a middle-aged techie man, failures of judgment, abuses of power, and bias toward their own interests and values at the expense of society are not just likely, but certain. A small group of unchallenged digital kings makes unilateral decisions about which speech is allowed, who can speak, and how rules are enforced, with little transparency and no accountability. Societies in the 21st century grant large powers to their leaders through democratic elections and hold them publicly accountable. For Big Tech, there are no notable checks and balances to speak of.


The Pot of Gold: Social media companies and other tech platforms are private enterprises that have built amazing for-profit businesses, among the largest and most profitable in human history. For all their power, fixing society's problems is not their primary concern, and it shows in how they act. Platforms optimize for growth in user engagement and revenue, and increasingly to protect their profitable business models from disruption. Under the status quo, bad content means big money. Silencing inconvenient voices means less brand risk. Algorithmic amplification and filter bubbles mean more user engagement. Why change?

Free World Governments on the sidelines

Governments are missing in action in one of the most consequential societal challenges of our time. They play no meaningful role in effecting change for the better or shaping the future. While seemingly every other aspect of public life is now highly regulated -- from consumer goods to nuclear power generation to selling a new cereal bar -- the Internet remains largely regulation-free despite the huge risks to citizens' health and wellbeing. Government inaction has led to unchecked growth of harmful content of all types and to Big Tech censorship creep.

Regulation lags as harm increases. Governments in the free world have side-stepped their responsibility to keep their citizens safe online -- evident in rampant abuse and the ubiquity of unsafe content. No meaningful regulation has emerged to make the web safer, harm from social media platforms has only increased, and the debate over political nuances continues.


Regulation is used to justify censorship. Where regulations are being implemented, they often result in significant overreach and the watering down of basic rights. Internet regulation designed to suppress dissent rather than keep citizens safe is widespread and easily spotted, with “safety” becoming the leading excuse for restricting online speech. Oppressive governments are unsurprisingly wielding this power with ever-growing zeal. And increasingly, it is moderate states, and those that had been role models for free speech, that are following this dangerous path, most recently under the guise of the Covid pandemic. India’s new Internet Law mandating that platforms remove anything the government deems illegal is just one example. Overreach is just as harmful as inaction. Those two can’t be the only options at our disposal.


Politicization kills progress. Policymakers are more divided than ever over how to fix the problem. In the US, many left-leaning policymakers believe there is too much bad content online, while right-leaning politicians think freedom of speech is being unjustly limited. An unwillingness to prioritize public welfare over party politics makes broadly supported and sustainable legislative compromise unlikely. This failure may be the biggest tragedy of all - even when faced with one of the greatest collective challenges of our lifetime, politicians are unable to come together for a solution. And in the unlikely event that they do find common ground, the sheer complexity of legislating safety and freedom online (amid new technological advances, increasing dependence on machine automation, and rapidly changing trends) makes the creation of effective regulation an elusive goal. Adding to the challenges, policymakers currently don’t have the expertise, time, data, or long-term incentives to fully assess the trade-offs, opportunities and threats to develop effective legal safety frameworks by themselves.

Users as collateral

Future generations may look back on social media technology the way we now see smoking or drunk driving. What was once viewed as impolite or irresponsible behavior is now known to be deadly. Big Tech has us addicted to unhealthy content and consumption patterns, continuously fed by out-of-control algorithms that get better at manipulation and amplification every day. The dangers are widely known, yet most people behave no differently despite the enormous risk. The cost to society is immeasurable, as evidenced by the increased polarization, tension and violence online that is undermining the very foundation of our democracies and values as a free world.

TIME FOR ACTION

The path to a healthy internet means keeping users safe from harm, protecting their right to free expression, and helping Big Tech rebuild trust. The current system of internet health and safety is destructive and deeply broken. But the situation is not hopeless. As a society, we have a path and the means to fix it. But we can’t rely on Big Tech to fix itself. Industry standards, effective government involvement, and independent Trust & Safety services in charge of content moderation are a viable path forward. Here is a five-step action plan.


1. Create TRANSPARENCY to build trust

Given the importance of online media and user generated content in our lives, we deserve full and objective transparency about how bad things are. Currently, platforms have no idea how much badness goes undetected or how much good content they remove, and users don’t know how to judge the risk on those platforms. People engaged in public discourse online mostly have no means of appealing the removal of their content. This blindness is toxic and prevents the necessary discussion about how to fix it. Often-touted “successes”, like the removal of millions or even billions of pieces of bad content, are meaningless without knowing how much bad content still exists and how much good content was wrongly removed in the process. Self-auditing (as currently practiced in the EU) and self-congratulatory reporting by Big Tech (“transparency reports”) are meaningless. For regulators and the media to accept those measures as sufficient is appeasement at best and negligence at worst.

Solution: Regular independent reporting. A regularly conducted, openly shared and scientifically validated assessment of the health of social media and user generated content, performed by reputable 3rd parties. The process should be based on proven data science principles, conducted by domain experts, and explained in language the public understands. This transparency and baseline are fundamental to any meaningful change.
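To make the measurement idea concrete, here is a minimal sketch of one way an independent auditor might estimate the prevalence of policy-violating content from a random sample reviewed by expert raters. The function name, sample size and label counts are illustrative assumptions for this article, not a description of any existing audit methodology.

```python
# Illustrative sketch only: estimating the share of policy-violating content
# from an independently drawn, expert-labeled random sample of posts.
import math

def estimate_prevalence(sample_labels, confidence_z=1.96):
    """Return (point_estimate, margin_of_error) for the violation rate.

    sample_labels: list of 0/1 flags from independent human review
                   (1 = judged to violate policy), sampled uniformly
                   from the platform's content.
    Uses a simple normal approximation for a ~95% confidence interval.
    """
    n = len(sample_labels)
    p = sum(sample_labels) / n
    margin = confidence_z * math.sqrt(p * (1 - p) / n)
    return p, margin

# Hypothetical numbers: 10,000 randomly sampled posts, 230 judged violating.
labels = [1] * 230 + [0] * 9770
p, moe = estimate_prevalence(labels)
print(f"Estimated prevalence of violating content: {p:.2%} ± {moe:.2%}")
```

Published regularly by a neutral party, an estimate like this (rather than raw takedown counts) would give the public the baseline the article argues is missing.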



2. DEMOCRATIZE the mechanisms of control

Effective content moderation is critical for a well-functioning society. It’s irresponsible to leave this task to a small group of people inside Big Tech, accountable only to shareholders and opaque to the public. That’s what we’ve done to date. It hasn’t worked.

Solution: Independent 3rd party content moderation. Responsibility for content moderation needs to be taken away from Big Tech and given to 3rd parties beholden only to principles of user safety and protection of free speech as defined by society. Independent 3rd parties should enforce consistent standards across platforms and ensure transparency and due process. Unlike private social media platforms, independent entities have the incentives and long-term perspective to fix this problem holistically. However, this will only work if they are truly independent from Big Tech and operate on principles defined by democratic institutions.

3. COMMON SENSE REGULATION AND INDUSTRY STANDARDS

Thoughtful and balanced regulation is imperative to better define the rules of speech on the web, the same way it is done in physical settings. We can’t let Big Tech be its own judge, and no private entity should assume this responsibility. Governments need to be held accountable for writing the playbook on behalf of their citizens. Some of the recent proposals made in the EU are a promising start, but nascent at best. Leading democracies need to rally and create models for others to follow.

Standards need to be defined in public-private partnerships such as industry associations, and include public discourse and participation from think tanks, advocacy groups and the public at large.

Content recommendation and removal algorithms are a particularly dangerous development: they have a huge influence on how users experience content and on the harm that follows. As such, they need to be regulated through careful vetting prior to launch to test whether user harm is mitigated. Testing criteria and results need to be publicly explained. Algorithms' power to amplify content needs to be restricted to features where the impact is fully understood, auditable, and where intervention in the wild is possible. This common sense approach is already implemented in almost every other sector of society where harm occurs, such as Consumer Goods, Finance or Food Safety.

Solution: A Dedicated Government Agency to promote healthy choices and trade-offs in content moderation, mandate standards for Big Tech and 3rd party Trust & Safety service providers to follow, and verify and enforce compliance. It would oversee efforts to create content moderation playbooks, algorithm-testing frameworks, and auditing capabilities for machine-based content recommendation and amplification. It would generously fund the necessary advances in technical tools, systems and more effective policies. It would be overseen by leaders who are appointed by democratically elected officials and accountable to the public. Standards should be developed through a public-private partnership under the umbrella of an industry association, by individuals committed to the public good. Ground truth data sharing across the industry would enable consistent enforcement, broader coverage and fairness.
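As an illustration of what an "auditable" amplification check could look like in practice, below is a hypothetical pre-launch test comparing how often a candidate recommendation algorithm surfaces independently labeled harmful content versus the current system. The data structures, impression counts and the 1.1 threshold are assumptions invented for this sketch, not any regulator's actual criteria.

```python
# Illustrative sketch only: a hypothetical pre-launch amplification audit.
from dataclasses import dataclass

@dataclass
class ExposureLog:
    impressions_total: int      # all impressions served during the test period
    impressions_harmful: int    # impressions of content independently labeled harmful

def harmful_exposure_rate(log: ExposureLog) -> float:
    return log.impressions_harmful / log.impressions_total

def amplification_factor(candidate: ExposureLog, baseline: ExposureLog) -> float:
    """>1.0 means the candidate algorithm shows users more harmful content than the current one."""
    return harmful_exposure_rate(candidate) / harmful_exposure_rate(baseline)

# Hypothetical audit data: the candidate ranker nearly doubles harmful exposure.
baseline = ExposureLog(impressions_total=1_000_000, impressions_harmful=1_200)
candidate = ExposureLog(impressions_total=1_000_000, impressions_harmful=2_100)

factor = amplification_factor(candidate, baseline)
MAX_ALLOWED_AMPLIFICATION = 1.1   # assumed regulatory threshold, purely illustrative
print(f"Amplification factor: {factor:.2f}")
print("PASS" if factor <= MAX_ALLOWED_AMPLIFICATION else "FAIL: mitigation required before launch")
```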

4. Force INCENTIVE alignment for Big Tech

User and societal wellbeing need to be the guiding principles for healthy online content practices. Currently, those aren’t aligned with what Big Tech is optimizing for, making intervention and rebalancing of incentives a necessity.

Solution: Taxes and Penalties. Platforms need to contribute the funding to build a stable, capable and independent content moderation ecosystem without exercising any control or influence over it. It should be financed through a corporate tax on user generated content that is monetized but not independently monitored. Ongoing serious infractions against public standards happening under Big Tech’s watch should carry stiff penalties, including loss of the right to manage the affected businesses independently. Such actions are appropriate and in line with the risk and harm to users and society at large. They will drive the behavioral change inside Big Tech that is needed.


5. Make individual RESPONSIBILITY count

None of us should individually tolerate toxicity online or the disregard of good-neighbor standards. Norms of expression online should be aligned with our long-established standards for in-person communication and social interaction. This requires humanizing online communication and creating a level playing field for online and offline behavior. Each of us has a responsibility and needs to act accordingly - through awareness of online risks and willingness to exercise the controls and choices we all have to protect ourselves from online harm and censorship overreach. We need to identify and reject the abandonment of the common sense norms we use in physical interactions - filter bubbles, echo chambers and cancel culture - and replace them with the respect and kindness that is a hallmark of most in-person interaction in public spaces. Each of us plays a role and our choices matter.


Solution: Education, awareness and role modelling in schools, public forums, entertainment and popular culture that promote online health and safety as an important public good.


The path forward

Bad content, indiscriminate removal, opaque policies and inconsistent enforcement have made online content moderation one of the most dangerous threats to societal health. But it isn’t too late to make changes, and what we need to do is clear. Instead of Big Tech celebrating their limited achievements while everyone else points fingers and deflects responsibility, we should act together. And we don’t have a minute left to waste.

Trust & Safety Laboratory was founded two years ago by senior executives from Google and YouTube with a mission to make the web safer for everyone. Having each led Trust & Safety Engineering, Product and Operations for over a decade, the founders build enterprise systems and tools to identify high-risk and unsafe content, accounts and transactions at scale. Trust Lab’s machine-learning-based classifiers and rules engines are combined with human insights to help clients better assess fraud and safety risks for content, identities and transactions on their platforms. Trust Lab has deployed its technology with a broad range of clients, including some of the leading social networks, messaging companies and marketplaces.

Tom Siegel is the CEO and Co-Founder of Trust Lab. Previously the VP of Trust & Safety at Google for over 12 years, Tom built its global team through all stages of growth into an industry-leading user protection and abuse-fighting organization with an annual budget in the hundreds of millions and thousands of team members around the world. Tom’s portfolio included content safety, privacy and security protections for most of Google’s products, including web search, ads, payments, accounts, Gmail and Cloud. As the company’s most senior Trust & Safety executive, he set the company strategy and presented it to the board. Tom serves on a number of Trust & Safety advisory boards and coordinates events to build cross-industry collaboration on key Trust & Safety issues.

