
What Does it Mean to Reinstate Donald J. Trump’s Account on Facebook?

Courtney Davis ’21 & Subramaniam Vincent

"Trump facebook" by Book Catalog is licensed under CC BY 2.0

Courtney Davis ’21 is a Hackworth Fellow with the Markkula Center for Applied Ethics at Santa Clara University and Subramaniam Vincent (@subbuvincent) is the director of journalism and media ethics, also at the Markkula Center for Applied Ethics. Views are their own.

(The following statement was submitted to the Facebook Oversight Board regarding case 2021-001-FB-FBR.)

What does it mean to reinstate Donald J. Trump’s Facebook page? It should come as no surprise that Facebook’s Community Standards are informed by utilitarianism—especially the standards that invoke their imminent-harm threshold. Even their Dangerous Individuals & Organizations policy rationale begins: “In an effort to prevent and disrupt real-world harm, we do not allow … .” Utilitarianism is a moral framework that measures the goodness of an action based on the consequences it produces. The classic utilitarian believes that we ought to maximize happiness, that is, to act in such a way as to produce the most good, or overall welfare, for all of the people involved. When Facebook makes content moderation decisions, then, they are often conducting a utilitarian analysis. When presented with a problem, they must first identify all of the possible courses of action they could take. They then identify the harms and benefits that could result from each course of action. Finally, they opt for the course of action that will produce the most benefit for the involved parties once the costs are calculated.

For this case, Facebook must decide whether they will restore Trump’s access to their services at all. There are three possible courses of action: (a) not restoring Trump’s account, (b) restoring it, and (c) prolonging his indefinite suspension.

So, what exactly are the harms and benefits associated with each of these potential courses of action? Suppose Facebook chooses option B—the path of restoring Trump’s Facebook. Unlike options A and C, this path invites a secondary set of decision-making problems related to implementation. In other words, restoring his Facebook requires answering the “how” question. Will Facebook give Trump his old account or will they give him a new account?

Every Facebook account with a large following has a reach network baked in, built from AI, data, and algorithms. Trump’s account is no exception; it is nested in a web of interconnected people, groups, and organizations whose reciprocal amplification power creates leader-follower feedback loops. Imagine if Trump were given a new account altogether. It would not take long for this network to re-establish itself. There could be a brief period of lower reach, during which he rebuilds that network. Restoring his access so close to the January 6 insurrection is also problematic: he continues to claim the election was stolen from him, and his political party has not cut ties with him.

His most ardent followers could claim vindication of his narrative, and the rebuilding of his network would happen swiftly and with zeal. A new seditious, networked conspiracy may unfold. So giving Trump a blank slate risks producing the same amount of harm in the long run. From the perspective of utilitarianism, then, giving him his old account is largely the same as giving him a new account. But what harms and benefits will ensue if Facebook chooses not to restore his Facebook at all, or if they choose to prolong the current suspension?

While utilitarianism is an influential and widely deployed normative ethical theory, it has some serious problems. The most common critique of utilitarianism concerns justice: determining which course of action will produce the most good does not require considering who must carry the burden of harm. In their discussion of utilitarianism, the Markkula Center for Applied Ethics at Santa Clara University offered the following example: “During the apartheid regime in South Africa in the last century, South African whites, for example, sometimes claimed that all South Africans—including Blacks—were better off under white rule. These whites claimed that in those African nations that have traded a whites-only government for a Black or mixed one, social conditions have rapidly deteriorated. Civil wars, economic decline, famine, and unrest, they predicted, will be the result of allowing the Black majority of South Africa to run the government.” According to this utilitarian analysis, the predicted costs of adopting a multiracial government (civil wars, unrest, etc.) far outweighed the predicted benefits (inclusion). And even though these predictions were eventually proven false, the utilitarian framework was used to morally justify apartheid. This critique of utilitarianism has profound and relevant implications. Is there something morally wrong with forcing some to suffer in order to produce the highest “net” social welfare?

However, in this case, our argument is that Facebook is not designed with built-in checks and balances against abuse by politicians who violate the democratic values of truth and justice. Too much harm is done before Facebook’s team can catch up. We acknowledge that the design certainly helps the human rights activist who has no other platform to document and criticize injustice, especially in countries where the press itself excludes the truth-speaking voices of the marginalized or where no free press exists at all. It helps the whistleblower threatened by the ultra-powerful. At Georgetown University in October 2019, Mark Zuckerberg said, “While the world’s attention focuses on major events and institutions, the bigger story is that most progress in our lives comes from regular people having more of a voice.” Shortly after, he added, “giving everyone a voice empowers the powerless and pushes society to be better over time.” Voice and inclusion—this is the original promise of Facebook. But because Facebook offers equal access to everyone, particular politicians have been able to leverage social media technology to confuse, disorient, and divide people into a culture war, and to use it for personal gain. Though Facebook was not designed for politicians, it best serves politicians.

Is this reason enough for Facebook to reevaluate their technology? We recommend that the Oversight Board ask Facebook the following questions:

  1. What has Facebook changed in its design, data, and algorithms that may prevent the harms that led to Trump’s de-platforming from happening again? 
  2. How might Facebook ensure that such changes will not have new unintended and harmful consequences?  

We recommend that the board include Facebook’s answers to the above questions in its deliberations, and cite those answers in the justification for its final decision.

Apr 27, 2021