Markkula Center for Applied Ethics

The Ethics of Facebook’s Justifications to Exempt Hate and Lies by Politicians


Subramaniam Vincent

Nick Clegg, vice president of Global Affairs and Communications at Facebook (Photo Credit: AP Images)

Subramaniam Vincent is the director of Journalism & Media Ethics at the Markkula Center for Applied Ethics at Santa Clara University. Views are his own. 

When it comes to political conversations on social media, we know that humanity has been on a slippery slope. But how we make ethical justifications for new decisions while on this slope determines whether we’re going to slide further down or climb up. With Nick Clegg’s speech on Facebook’s new exemption for “politicians,” we slid. 

Clegg, former deputy prime minister of the UK and now vice president of Global Affairs and Communications at Facebook, gave a speech on September 24th in Washington, DC, that included announcements about Facebook’s exemptions for politicians’ “speech.” Clegg’s own speech was part announcement, part pronouncement, part emphasis, and part justification, all in one smooth act. Central to his statement was that speech by “politicians” has been added to Facebook’s newsworthiness exemption even when it violates the company’s community standards. That Facebook chose a former politician to make this announcement is an irony lost on no one. As I write this, the U.S. media are running with the narrative that Facebook has decided to give politicians a free pass: license for politicians of various hues to spread fake news, conspiracy theories, and lies to further their own narratives.

Before we make sense of all this, let’s pretend that Nick Clegg is giving us a ghost interview. On your behalf, I’ll be the ghost and will lay out a set of easy questions. We will use his speech to carve up the answers, and then we’ll get to what is spectacularly worrisome about several justifications Clegg laid out. (Note again: This is not a real interview. The answers are real, the questions are not.)

What did Facebook announce?

Clegg: “Today, I announced that from now on we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard.”

Yes, but what is the newsworthiness exemption?

Clegg: “This means that if someone makes a statement or shares a post which breaks our community standards we will still allow it on our platform if we believe the public interest in seeing it outweighs the risk of harm.”

Wait…really? Are you saying that disinformation, when shared by politicians, now gets a free pass?

Clegg: “However, when a politician shares previously debunked content including links, videos, and photos, we plan to demote that content, display related information from fact-checkers, and reject its inclusion in advertisements.” 

What was your policy before this?

Facebook already exempts politicians from its third-party fact-checking program. 

Clegg: “We do not submit speech by politicians to our independent fact-checkers, and we generally allow it on the platform even when it would otherwise breach our normal content rules.” 

“We have had this policy on the books for over a year now, posted publicly on our site under our eligibility guidelines.”

Now let’s get to the justifications. Mr. Clegg, why is Facebook doing this?

Clegg: “We don’t believe, however, that it’s an appropriate role for us to referee political debates and prevent a politician’s speech from reaching its audience and being subject to public debate and scrutiny.”

“To use tennis as an analogy, our job is to make sure the court is ready—the surface is flat, the lines painted, and the net at the correct height. But we don’t pick up a racket and start playing. How the players play the game is up to them, not us.”

“Would it be acceptable to society at large to have a private company in effect become a self-appointed referee for everything that politicians say? I don’t believe it would be. In open democracies, voters rightly believe that, as a general rule, they should be able to judge what politicians say themselves.”

But what if politicians say something that incites violence or runs that risk? 

Clegg: “It’s not new that politicians say nasty things about each other—that wasn’t invented by Facebook. What is new is that now they can reach people with far greater speed and at a far greater scale. That’s why we draw the line at any speech which can lead to real-world violence and harm.”

Ghost interview ended.

Unpacking the Justifications

Justifications grounded in moral foundations are welcome. They are part of ethical decision making and, some would say, of accountability. So we must first laud Clegg for offering them; giving justifications is better than giving none. 

Having said that, Clegg is oversimplifying many things in his reasoning.

One, what if a political leader, instead of sharing a debunked article, simply took material from it and posted it as plain text or a meme promoting a disproven allegation or theory implicating a group of people? Offering to demote debunked content only when it arrives as a shared article, and not when it is the politician’s own direct speech, is a false and harmful distinction.

Second, Clegg’s tennis analogy for democracy convinces no one. Tennis is not democracy. It is a wonderful, enjoyable, rule-based game, with clear lines, two or four players, and sometimes an umpire. If democracy were this simple, we would not be in the situation we are in today with the tech platforms. Officials in the U.S. intelligence community have issued multiple warnings to Congress that they expect foreign-actor interference on an ongoing basis. Some of that will surface as seeded disinformation that select politicians may share or paraphrase into their own posts. Invoking the referee analogy not only fails to make the grade; it is plainly disingenuous. 

The most incredible justification, though, is the one about open democracies and voters’ judgment. “They should be able to judge what politicians say themselves,” says Clegg. In the language of ethics, the word “should” is often the signal that a norm is coming. The key word here is “judge.” Clegg wants politicians’ speech to be fully visible on social media so that the people can see their leaders for themselves. But the act of judging is not done through quick feelings or emotion-driven reactions. It is a slower process. When we are called to judge, we weigh multiple factors, trade off pros and cons, and arrive at a decision. That is not the behavior our social platforms uniformly engender in us when we encounter political speech in our feeds.

We select posts, share, like and exchange barbs on social media, driven often by our biases and implicit attitudes. Our platforms are designed to stimulate rapid emotion-based clicks and shares, all of the time. We did not design a switch to throw all our social media activity into “slow, deliberate and judge the posts you see” mode whenever elections come around, and then back to frictionless, overload mode otherwise.

There are volumes of literature in psychology and neuroscience on how people filter, reject, and accept information in contexts of fear, anger, or identity, and when we hold set beliefs about others. Moreover, for better or worse, our brains respond best to the sources we trust most. When those sources are our leaders, and when they spew disinformation, the evidence is that we as voters are not able to “judge” such speech on its merits before we engage in “sharing” or “liking.”

So the idea that we as humans will always be able to quickly and spontaneously call out our leaders as bad actors never held water. All of us can do it some of the time, but not all of the people, all of the time, on social media.

Therein lies the final flaw in Clegg’s argument. By admitting that the technology behind social media allows politicians to “reach people with far greater speed and at a far greater scale,” he makes what appears to be a thoughtful concession. He cites that risk as the reason Facebook will still draw a line, but only at a politician’s speech that can lead to real-world violence and harm. He overlooks the simple fact that the speed and scale this exemption affirmatively grants politicians for their lies and disinformation does and will slowly poison our societies. That is harm too. Clegg’s distinction between types of harm is a convenient and false one.

Still, the truly worrisome part is none of the above.

Just a week ago, Facebook announced that it is taking its first steps toward forming an external content oversight board, after nearly eight months of global consultations. (Disclosure: I participated and contributed at Facebook’s New York consultation.) The company was lauded for having imagined a bold vision for content moderation beyond a mechanistic, rubric-led, and failing approach. An external board of independent appointees will make binding appellate decisions on content takedowns, demotions, and related actions. A more consistent approach for Clegg would have been to hand off this deeper question to the new oversight board as soon as it formed, along with a basket of cases. But Clegg makes no mention of the oversight board in his speech.

More Reading

  1. Facebook, Elections and Political Speech, Facebook, Sept. 24, 2019.
  2. Facebook says it won’t remove posts from politicians even if they violate community rules, The Hill, Sept. 24, 2019.
  3. Facebook promises not to stop politicians’ lies & hate, TechCrunch, Sept. 24, 2019.
  4. Second Circuit Issues Powerful Section 230 Win to Facebook in “Material Support for Terrorists” Case–Force v. Facebook, Technology & Marketing Law Blog, July 31, 2019.
Sep 26, 2019