Think back to the last time you scrolled through social media or caught up on the news. Perhaps it was just a few minutes ago (or even right now). Most likely you turned to a trusted news source: whether it is HuffPost or Breitbart, the New York Times or Fox, we all have a site we default to, and we accept practically everything from that source as true. But when did we become so trusting of that source?
That question about our news consumption habits is actually not so different from the classic one: “Which came first, the chicken or the egg?”
Similar to how chickens lay eggs, news outlets make claims in articles. That news platform’s credibility will imbue the claim with some inherent sense of correctness. Then, in the same way that an egg will eventually grow into a chicken itself, those claims will gain attention and thereby influence the news outlet’s own reputation. And then, the cycle repeats.
This leads us to the question, “Which came first, the reputable source or the truthful claim?”
Why does this question matter?
Sources are reputable when they have a history of reporting verified facts, but facts are deemed true when they come from a reputable source. Research finds that people are more likely to believe something when it comes from a source they trust. Yet, on the other hand, studies also find that we are more likely to trust someone who agrees with our views or, in other words, someone we perceive to be telling the truth. It is virtually impossible to define one without using the other.
This puzzling paradox is compounded by its ramifications: too often, misinformation spreads because people with authority choose to spread false claims. Politicians cast doubt on the integrity of elections [3, 4]. Doctors remain skeptical of the effectiveness of vaccines. Official reports repeat fabricated hate crime and harassment stories. How can these things be false when reputable sources, judged by profession, level of authority, and number of followers, claim otherwise?
Moreover, in the digital age, users and media platforms are flooded with so much information that evaluating sources, rather than fact-checking every claim, becomes the path of least resistance. For example, The Washington Post published an article claiming that “Republicans are increasingly sharing misinformation,” which used news source credibility ratings as a proxy for the level of misinformation. This reliance on source credibility draws merit from work finding that most false claims come from a consistent group of sites, suggesting that identifying unreliable sites could stop a great deal of false information from reaching readers.
Given that user behavior around interpreting news resembles a feedback loop and media organizations are increasingly equating source credibility with claim validity, perhaps answering the chicken and egg question could help us figure out how to stymie false information.
Resolving the Conundrum
So, which came first, the reputable source or the truthful claims?
“Neither,” explains Dr. Ronald Robertson, a researcher and postdoctoral fellow at the Stanford Internet Observatory. “A reputable source can publish unreliable information and a disreputable source can publish verifiable claims.”
This echoes the response from Dr. Alessandro Vecchiato, a postdoctoral scholar working at the Stanford Program on Democracy and the Internet. Vecchiato points out that “it is a logical fallacy, resorting to authority.” Similarly, Robertson acknowledges, “When a source is widely considered to be reputable, that can be a useful heuristic for evaluating the claims they make, but it doesn’t guarantee anything.”
Both Robertson and Vecchiato have conducted extensive research into designing better internet media using social, behavioral, and network sciences. Their blunt answers point to a major cause of the spread of misinformation: our own heuristics work against us. As users, our drive to process and share information quickly often causes us to lose focus on verification. A growing body of research focuses on how to fix this weakness.
Google recently announced that it will be piloting a program centered on ‘pre-bunking’: reminding users of what to look out for before they begin surfing the web, to help arm them against misinformation. This branch of work is based on inoculation theory, the idea that exposing people to how misinformation works, using harmless, fictional examples, can boost their defenses against false claims.
Google started testing this solution after a paper published in the journal Science Advances detailed how short online videos that teach basic critical thinking skills can make people better at resisting misinformation. Before that, researchers at the MIT Sloan School of Management conducted similar experiments. Their study, comprising a series of surveys and field experiments, found that people were generally able to evaluate the accuracy of headlines correctly when asked in isolation, but that accuracy tapered off when it came to behavior on social media. The group proposed the use of “accuracy prompts,” notifications that shift users’ attention toward the reliability of the content they read before they share it online.
For tech companies and individual users alike, this body of work offers a promising path forward. We can mitigate misinformation by focusing more on educating and prompting users to think more critically while browsing online.
Addressing Ongoing Challenges
While helpful, debunking this one logical fallacy clearly will not solve misinformation single-handedly. As Robertson points out, other common heuristics also leave us susceptible to false information. Surveys have found that people generally trust web search engines to deliver unbiased facts. Additionally, eye-tracking and behavioral studies have shown that people pay disproportionately more attention to items that appear at the top of their search rankings. “Together, those findings suggest that [...] sources that search engines place at the top of their rankings will get a boost in trust,” Robertson summarizes.
Taking a step back, we also need to remember that information dynamics often vary drastically depending on context. “Media environments differ significantly across the world, so misinformation dynamics need to be evaluated globally and not just in the U.S. (as it is now).” Vecchiato elaborates, “Italy [for example] has a strong government influence on the media, especially television, so propaganda narratives require special attention.”
As tech companies, governments, and other major institutions weigh in on the issue of content moderation, we have to continuously assess our gaps, question our reliance on heuristics, and pay attention to information trends across the world.
Returning to the question at the beginning of this post: while a platform’s reputation and the validity of its articles are inevitably intertwined in the public mind, we cannot mistake one for the other. The chicken is not the same as the egg.
No matter what, we must always check claims based on first principles. Everyone—users, journalists, fact-checkers alike—must resist leaning on lazy metaphors and heuristics to establish truth.
Here is a list of best practices for readers based on Robertson and Vecchiato’s suggestions:
Do not scroll mindlessly. Log off from time to time.
Accept uncertainty. No one is immune to mistakes or cognitive biases, so you are not going to get it right 100% of the time. Being aware and skeptical of your own beliefs can help.
Be aware of the algorithm. Train the algorithm to learn what you actually like, and do not let it feed you content passively. If you see a post you don’t like, signal it. If you see an ad that is irrelevant to you, say so. Offer meaningful input, and it will improve beyond its base parameters.
Here are some additional resources:
Ted Ed video series: Hone your media literacy skills Playlist
NewsGuard plugin tool: Introducing: NewsGuard
Curated list of resources based on the book Detecting Fake News on Social Media: https://github.com/mdepak/fake-news-detection-resources
‘Who shared it?’ How Americans decide what news to trust on social media from the American Press Institute
Why we believe alternative facts from the American Psychological Association
Fact Check: Re-examining how and why voter fraud is exceedingly rare in the U.S. ahead of the 2022 midterms from Reuters Fact Check
Nine Election Fraud Claims, None Credible from FactCheck.org
What Proportion of Doctors Are Vaccine Hesitant? from MedPage Today
The left's emerging 'fake news' problem from Insider
Republicans are increasingly sharing misinformation, research finds from The Washington Post
'Pre-bunking' shows promise in fight against misinformation from AP News
Trust Lab was founded three years ago by senior Trust & Safety executives from Google, YouTube, Reddit, and TikTok with a mission to make the web safer for everyone. Having each led Trust & Safety engineering, product, and operations for over a decade, the founders build enterprise systems and tools that identify high-risk and unsafe content, accounts, and transactions at scale. Trust Lab’s machine-learning-based classifiers and rules engines are combined with human insights to help clients better assess fraud and safety risks for content, identities, and transactions on their platforms. Trust Lab has deployed its technology with a broad range of clients, including some of the leading social networks, messaging companies, and marketplaces.
Annie Zhu is a policy and product analyst intern at Trust Lab. She is currently an undergrad, studying Data Science and Social Systems at Stanford University. Aligned with her interests in the trust and safety field, Annie has assisted with research work at Stanford Digital Civil Society Lab and will be embarking on a project with the Stanford Internet Observatory. On campus, she is also a director of the Stanford Public Interest Technology Lab, where she organizes opportunities to make the space accessible to more students.