Policing online speech risks amplifying misinformation, not solving it
By: Lukas Wellesley
Social media platforms host an astonishing 5.2 billion active users, with posts ranging from casual updates to global news. However, the growing demand for platforms like Facebook and Instagram to act as the internet’s fact-checkers presents a conundrum: their business models prioritize engagement, not truth. By expecting social media companies to moderate content for accuracy, society risks normalizing censorship, algorithmic bias, and corporate control over public narratives, raising the question of who gets to define truth in an increasingly digital world.
First, the sheer scale of content versus the limited capacity of human fact-checkers makes accurate, consistent fact-checking impractical, if not impossible. Social media companies simply lack the resources to review hundreds of millions of posts daily without error. Facebook alone sees over 3 billion active users every month, generating a staggering volume of content. Algorithms, while helpful, are far from perfect; a Massachusetts Institute of Technology study found that false news spreads six times faster than verified information because it appeals to emotion rather than fact. Human fact-checkers, while more discerning, cannot possibly keep pace with that volume. In December 2024, when Meta was removing millions of pieces of content each day, the company estimated that one in ten of those removals was a mistake. As data from Meta and other corporations has shown, fact-checking at this scale is bound to be inconsistent, either missing crucial content or silencing harmless posts under the sheer weight of material to review.
Source: https://ourworldindata.org/rise-of-social-media
Additionally, enforcing a fact-checking mandate could turn social media into an environment of overreach and censorship. While social media companies are private entities, they have become central spaces for public discourse. Opponents argue that these companies have the right to moderate content as they see fit, but requiring them to arbitrate truth introduces the risk of overreach, where dissenting opinions are flagged or removed not for spreading false information but merely for being controversial. Debates over issues such as abortion or immigration, for example, often hinge on moral or philosophical perspectives rather than empirical absolutes. In such cases, aggressive fact-checking could suppress valuable discourse and create an environment where only dominant narratives persist, undermining the diversity of thought that drives societal progress.
Most significantly, tasking companies like Meta with determining the validity of content places the power to define “truth” in the hands of profit-driven corporations and their third-party fact-checkers. This raises ethical concerns, since these platforms may shape moderation policies to serve their business interests. According to the Wall Street Journal, for example, Facebook has a history of promoting polarizing content because it generates higher engagement, which translates into higher advertising revenue. Beyond financial incentives, corporate influence over content moderation also creates the risk of suppressing viewpoints that challenge a platform’s own regulatory or business agenda. If social media companies selectively restrict discussions of issues like antitrust regulation, they could steer public discourse in ways that preserve their power. Without greater transparency, this dynamic fosters a world where corporate profits and narratives dictate what information is most visible.
Holding social media companies responsible for fact-checking would not only invite overreach but also impose an unmanageable logistical burden and concentrate power in the hands of a few corporations. Rather than deputizing platforms as arbiters of truth, the responsibility for navigating misinformation should rest with informed users. Fundamentally, if society allows social media platforms to police the truth, we risk a future where public discourse is shaped not by facts but by profits, eroding the very freedoms we seek to protect.