
Getting the Public Behind the Fight on Misinformation

Why is the public so hesitant to support efforts to combat misinformation?
By Timothy Rich and Madelynn Einhorn


February 18, 2022 06:09 EDT

Misinformation is false or inaccurate information communicated regardless of intent to deceive. Its spread undermines trust in politics and the media, a problem exacerbated by social media platforms that encourage emotional responses, with users often reading only the headlines, engaging with false posts and sharing credible sources less. Once hesitant to respond, social media companies are increasingly taking steps to stop the spread of misinformation. But why have these efforts failed to gain greater public support?

A 2021 poll from the Pearson Institute found that 95% of Americans believed that the spread of misinformation was concerning, with over 70% blaming, among others, social media companies. Though Americans overwhelmingly agree that misinformation must be addressed, why is there little public consensus on the appropriate solution? 


To address this, we ran a national web survey with 1,050 respondents via Qualtrics, using gender, age and regional quota sampling. Our research suggests several challenges to combating misinformation.

First, there are often misconceptions about what social media companies can do. As private entities, they have the legal right to moderate content on their platforms, whereas the First Amendment applies only to government restrictions on speech. When asked to evaluate the statement "social media companies have a right to remove posts on their platform," a clear majority of 58.7% agreed. Yet a partisan divide emerges: 74.3% of Democrats agreed with the statement, compared to only 43.5% of Republicans.

Ignorance of the scope of the First Amendment may partially explain these findings, as may the belief among respondents that, even if companies have the legal right to remove posts, they should not do so. A history of tech companies initially couching policies as consistent with free speech principles only to later backtrack adds to the confusion. For example, Twitter once maintained "a devotion to a fundamental free speech standard" of content neutrality, but by 2017 it had shifted to a policy under which not only posts but even accounts without offensive tweets could be removed.


Second, while most acknowledge that social media companies should do something, there is little agreement on what that something should be. Overall, 70% of respondents, including a majority of both Democrats (84%) and Republicans (57.6%), agreed with the statement that “social media companies should take steps to restrict false information online, even if it limits freedom of information.”

We then asked respondents whether they would support five different means of combating misinformation. None of the five found majority support; the most popular option, providing factual information directly under posts labeled as misinformation, was backed by only 46.6% of respondents. This was also the only option that a majority of Democrats supported (56.4%).

Moreover, over a fifth of respondents (20.6%) did not support any of the options. Even among respondents who stated that social media companies should take steps, most options failed to find broad support.

So what might increase public buy-in to these efforts? Transparent policies are necessary so that responses do not appear ad hoc or inconsistent. While many users may not pay attention to terms of service, consistent policies may counter perceptions that enforcement is selective or targets only certain ideological viewpoints.

Recent research finds that, while almost half of Americans have seen posts labeled as potential misinformation on social media, they are wary of trusting fact-checks because they are unsure how information is identified as inaccurate. Greater explanation of the fact-checking process, including the use of multiple third-party services, may also help address this concern.

Social media companies, rather than relying solely on content moderation, may also wish to adopt subtler efforts that encourage users to evaluate their own posting behavior. Twitter and Facebook have already nodded in this direction with prompts suggesting users read articles before sharing them.

Various crowdsourcing efforts may also serve to signal the accuracy of posts or the frequency with which they are being fact-checked. These efforts attempt to address the underlying hesitancy to combat misinformation while providing an alternative to content moderation that users may not see as transparent. While Americans overwhelmingly agree that misinformation is a problem, designing an effective solution requires a multi-faceted approach. 

*[Funding for this survey was provided by the Institute for Humane Studies.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
