Markkula Center for Applied Ethics

Regulating Disinformation? We’ve Been Debating the Wrong Question

Anita Varma

Anita Varma is the program manager for Journalism & Media Ethics as well as Business, Leadership, and Social Sector Ethics at the Markkula Center for Applied Ethics. Views are her own.

Governments around the world have proposed urgent regulations to curb the spread of disinformation (also termed “fake news”) online. In response, freedom of expression advocates have recoiled at the notion of government officials positioning themselves as arbiters of truth. Yet efforts outside of state regulation have also done little to put an end to disinformation.

Why have so many attempts to end disinformation failed to resolve the issue?

Because the terms of debate – starting with “disinformation” – have missed the forest for the trees. What’s sorely missing from regulatory debates on disinformation is a specific framework for deciding and justifying which perspectives are eligible for public dissemination. Democratic discourse provides such a vocabulary and set of criteria.

Disinformation is a construct that was developed to describe false claims that are willfully harmful. These claims travel the social web like wildfire, deliciously confirm people’s deep-seated biases, and are often plain fabrications.

Yet it is no accident that disseminators of what we call disinformation are uninterested in chipping away at public conviction in mundane truths, and instead prefer to deal in charged political questions such as: What does a presidential candidate stand for? What does Brexit mean? How will climate change affect us? Why are people seeking asylum? Who is a terrorist, who is a revolutionary, and who is a refugee? These are all, at their core, questions of meaning.

Striving to cleanse the Internet of unauthoritative claims unsurprisingly sets the stage for enshrining authoritative distortions. Instead, the questions that regulators should ask are: which perspectives should be legitimated by being eligible for public inclusion on platforms? Which perspectives should be rejected? And what criteria should shape these judgments?

Asking these questions would ideally draw attention to a further question: what, precisely, should the point of Internet platforms be? Beyond a space for self-satisfied declarations of our latest individual achievements, personal preferences, and memes with sardonic captions that oversimplify complex issues, platforms have the capacity to be much more. These platforms have always held the potential to be unprecedented spaces for democratic discourse about unresolved issues. At promising moments, they have served this purpose, but they have not been structured to encourage such a role – though they could be.

Democratic discourse requires three ingredients:

  1. The setting needs to be inclusive and safe – such that dehumanizing language and incitements to violence are prohibited – even (and especially) when they come from elected officials. Hostility is silencing, and therefore incompatible with democratic discourse.

  2. People need to enter the conversation willing to have their minds changed. In other words, fundamentally inert people can't participate. Letting these folks into democratic discourse would be like playing soccer with people who insist on keeping their feet planted on the field: they end the game for everyone.

  3. Participants need to strive for consensus, not just self-expression. Rather than parallel streams of assertions, participants engage with each other to arrive at new ideas about how to understand and address unresolved issues.

Instituting such parameters would, by design, exclude people who prefer to declare their perspectives and not hear others. This is not inherently grounds for regret, though. The whole point of regulation is to move away from prior agnosticism by instituting a vision of what a good society needs.

Other industries have been regulated in ways that narrowed their scope yet improved civic health by prioritizing the public interest. For example, the advertising industry offers a useful, yet little-referenced, precedent: in the US, advertising regulations prevent deceptive ads from reaching consumers, narrowing the full breadth of advertising that might otherwise circulate.

Just as it would be laughable to claim that regulating advertising meant the end of expression through advertising, regulation need not signal the end of freedom of expression online. On the contrary, regulations can set us on a better trajectory toward genuine inclusion.

In the aftermath of brutality wrought through digital platforms, we need to grapple with the perplexing problem of which perspectives we no longer believe deserve a platform, and to justify publicly why. With this in mind, regulators should seek not just to clean up the (dis)information ecosystem, but also to create the conditions for inclusive debates about persistent issues that affect us all.

Apr 26, 2019