
The Ethics of Social Media Decision Making on Handling the New York Post “October Surprise”

Courtney Davis ’21

Courtney Davis ’21 is a senior philosophy major at Santa Clara University and a 2020-21 Hackworth Fellow at the Markkula Center for Applied Ethics, SCU.

In October 2020, the New York Post published an article carrying allegations that Joe Biden’s son leveraged his father’s political position for personal gain. The handling of this news story by social media firms, particularly Facebook and Twitter (i.e., their content moderation decisions), came under intense scrutiny. [Download a PDF version of this case for use in class.]

This ethics case features two parts:

  1. A recap of what the platforms did with the New York Post’s “October Surprise” and the related ethical questions for discussion. 
  2. Supplementary moral analysis from the author. After the questions in Part 1 are discussed in a group or class setting, Part 2 can be handed out for reading and further discussion.

Part 1: What happened

At 5:00 a.m. EST on October 14th, the New York Post (NY Post) published an article that many politicians and news analysts would soon call this election’s October surprise. In politics, an October surprise refers to any newsworthy event, planned or unplanned, that has the potential to affect the outcome of the election. The Post article, “Smoking-gun email reveals how Hunter Biden introduced Ukrainian businessman to VP dad,” contains claims more captivating than even the title suggests. Citing the contents of a recovered hard drive, the article alleges that Joe Biden’s son leveraged his father’s political position for personal gain.

At 8:10 a.m. on the same day, a spokesperson for Facebook stated that the company intended to fact-check the article and would reduce its distribution on their platform. Twitter also locked the NY Post’s account and barred its users from sharing links to the article, alerting them that the link had “been identified by Twitter or [Twitter’s] partners as being potentially harmful.” Not unexpectedly, backlash ensued. The common criticism levied against both companies concerns internal inconsistency. Facebook has no clear policy permitting preemptive distribution moderation when a post is merely eligible to be fact-checked. And aren’t all posts eligible for fact-checking? Twitter, apparently, has no third-party fact-checking partnerships at all, at least according to the editor-in-chief of PolitiFact, who voiced her concerns on social media: “Has Twitter partnered with fact-checkers without telling anyone? It would be news to me” (Poynter).

But despite both companies’ efforts to limit its spread, the Post article attracted significant attention. Axios reported that it received 2.59 million interactions on Facebook and Twitter during the week of its publication (83% on Facebook and 17% on Twitter). By October 20th, it had become the sixth most popular news article of the month (Axios).

Using the words “historic” and “unprecedented” to describe the circumstances of the 2020 presidential election has become somewhat cliché. The sheer number of potentially false news stories exchanged online is only one of the factors informing this cliché. Although they may not have said so publicly, Facebook’s and Twitter’s decisions reflect the unique election-time pressure afflicting the entire journalistic and social media enterprise. How can media institutions facilitate the free flow of information and promote the truth during an election cycle shrouded in misinformation?

Questions: 
  1. Who is accountable for the harms produced by the NY Post article, especially considering that it was widely disseminated online? 
  2. Who ought to be the “arbiter of truth”: social media companies, journalists, the government, or individual users?
  3. How should we hold Facebook accountable if they violate their own policies? Should it depend on the policy? 
  4. Is the unique election-time context morally relevant to Facebook’s content moderation practices? 
  5. How should Facebook address issues related to administrative transparency, especially considering their evolving role in democratic processes? 

Part 2: After initial discussion of the questions in Part 1, review this analysis in a second round of discussion.

In analyzing this case, the value of neutrality comes to mind. Within the context of journalism, neutrality functions as a value that individual journalists embody when pursuing a story. The Society of Professional Journalists, for example, urges journalists to “avoid conflicts of interest” and to “support the open and civil exchange of views, even views they find repugnant.” Neutrality, then, implies the absence of bias. Social media companies similarly invoke neutrality when faced with content moderation decisions. We would not, for instance, accept Facebook as a legitimate institution if they removed articles from their platforms simply because the company disagreed with the articles’ claims. Facebook’s unparalleled popularity is in fact a product of this expectation. Maximizing profits involves attracting the largest possible customer base, and Facebook knows that the largest crowds are also the most diverse crowds. Sustaining such a base requires remaining impartial in the face of opposing viewpoints; it requires neutrality.

Facebook declares their devotion to neutrality every time they claim not to be an “arbiter of truth” in our democracy. Recall that earlier in 2020, Mark Zuckerberg defended his decision not to remove Donald Trump’s blatantly false posts about the illegality of mail-in voting by saying, “I just strongly believe that Facebook shouldn’t be the arbiter of truth of everything people say online.” His response highlights an important tension between neutrality and our media ecosystem as it presently operates. It is generally acceptable for media institutions to take a neutral stance with respect to opposing opinions, but it is not acceptable for them to take a neutral stance when faced with opposing facts. And if it isn’t obvious, there’s actually no such thing as opposing facts, just facts. Why, then, did so many people disagree with Facebook’s decision to reduce the distribution of the NY Post article? Weren’t they correcting the mistake they made during the mail-in voting debacle?

First of all, Facebook didn’t leave Trump’s false mail-in voting posts on their platforms for the sake of neutrality. In fact, the opposite is true: Facebook left Trump’s false posts up for the sake of some other value. Facebook might tout neutrality in an attempt to maximize satisfaction among their enormous user base, but any company that references concrete policies when making decisions is promoting a set of values, usually at the expense of other values. This tension is manifest in the systematic misapplication of Facebook’s company policies. Donald Trump’s posts about the purported illegality of mail-in ballots blatantly violated their voter interference policy, which prohibits misrepresentations of the “methods for voting or voter registration” and “what information and/or materials must be provided in order to vote” (Civil Rights Audit 38). Analysts criticized Facebook for enabling voter suppression on a mass scale and for blatantly disregarding their own policies. Facebook simply disagreed, saying that the content of Trump’s posts didn’t warrant removal. If neutrality didn’t motivate this decision, which values were actually at play?

Our media ecosystem runs on more than just fact and opinion. More often than we like to acknowledge, journalists are not reporting the objective, be-all-and-end-all truth; they’re reporting a body of evidence composed of claims that point toward some conclusion. For example, reporting that Trump tweeted about the fraudulence of mail-in voting is different from reporting that mail-in voting is fraudulent by nature. Though both reports contain claims, only the former contains claims that are verifiable and, if true, factual.

Why is this distinction relevant? By reducing the distribution of the article before it was fact-checked (the claims were not yet verified), Facebook effectively “picked a side,” another value-laden decision. If neutrality requires impartiality in the face of opposing claims, Zuckerberg’s “never the arbiter of truth” philosophy would certainly advise against interfering with the article before a third party dubbed it factual. So the question remains: which values did Facebook actually deploy when making this call? Perhaps neutrality isn’t as clean as Zuckerberg makes it sound.

Facebook’s responses to Trump’s mail-in ballot claims and to the NY Post article have something in common: they both reflect Facebook’s systematic internal inconsistency. In the first phase of the NY Post controversy, Facebook had three options. They could have: (a) let the post circulate until their fact-checking partners delivered a verdict, (b) limited distribution while waiting for their fact-checking partners to deliver a verdict, or (c) removed all posts containing the link, including the NY Post’s own post. And while several options were available, Facebook’s company policies make only one recommendation: deprioritize only content that independent fact-checkers have flagged as false or misleading.

Even setting aside what’s ethical, or what’s considered morally right, Facebook should not have limited the article’s distribution before hearing back from their fact-checkers; that is, of course, assuming that their policies aren’t purely symbolic. Instead, Facebook cited a phantom policy dealing with “doubtful content.” The policy they cited, which apparently enables Facebook to limit the spread of potential disinformation, hasn’t received public attention in other content moderation cases, causing many to question whether the policy actually exists.

Eventually, Facebook’s fact-checkers did deliver a verdict. The story, they claimed, originated from an unreliable source. Without metadata to verify the provenance of the emails cited as the source of the NY Post’s article, and without corroboration from other news outlets (e.g., the Wall Street Journal, the New York Times, the Washington Post, BuzzFeed News), they urged Facebook to recognize the article’s potential to sow unfounded confusion. And while it is worth acknowledging that Facebook once again ignored their own policies, it’s perhaps more important to emphasize that Facebook’s company code is not the holy grail of corporate ethics. Facebook is inconsistent, but they are often inconsistent with reference to policies that are already morally flawed.

Moral Lenses Analysis:

The Utilitarian tradition measures the morality of an action based on the consequences it produces. In other words, we ought to act in the way that generates the most good, or overall welfare, for the people involved. Here, the utilitarian would recommend that Facebook weigh the harms and benefits of leaving the article up, unimpeded. Clearly Facebook felt that the risks (e.g., undermining election integrity, amplifying a false October surprise) far outweighed the benefits (e.g., promoting free speech, avoiding censorship and accusations of bias) of allowing the article to circulate. But because much of the controversy surrounding the NY Post article had to do with the social media companies’ response, it seems Facebook’s (and Twitter’s) decision to demote the piece actually drew more attention to its content. Importantly, we cannot always reliably predict the consequences of our actions.

The Common Good approach defines morality in terms of communal flourishing. According to this tradition, we ought to act in accordance with values that promote the good of the community. These values include respect and compassion, particularly for those individuals who aren’t afforded the same opportunities in society on the basis of difference. In this case, promoting the good of the community has a lot to do with protecting the integrity of our democracy. Facebook has the power to disincentivize forces that use its platform to thwart a free and fair election. They also have the power to facilitate the exchange of ideas among a diverse citizenry. True communal flourishing hinges on the idea that our collective conception of the good never infringes upon any individual person’s flourishing. Did Facebook’s decision to reduce the article’s distribution promote or hinder communal flourishing? Did it exclude any individual from enjoying this communal flourishing? Perhaps Facebook ought to protect speech only insofar as it does not hinder any individual’s ability to pursue their good life.

The Rights Approach holds that all individuals are free, have dignity, and are thus deserving of respect. On the basis of this freedom, individuals have a set of rights that should never be infringed upon: actions that infringe upon these rights are unethical, and actions that don’t are ethical. Applying this approach is complicated, as it involves weighing the rights of each of the affected stakeholders. What are Facebook’s rights? What are the NY Post’s rights? What are the users’ rights? Which of these rights conflict with each other? Which ought to take priority?

The Virtue Approach revolves around good character. The virtue theorist asks what kinds of traits a good person has and attempts to emulate those traits in circumstances that require their use. Virtues are commendable traits of character manifested in habitual action. This approach encourages us to conceive of Facebook as a locus for virtue cultivation. But which virtues are most relevant to this particular content moderation decision? Consider the virtue of honesty. The truly honest person would likely have a reputation for transparency. Given Facebook’s systematic misapplication of their own policies, it’s quite difficult to gauge how much they conceal from the public about their decision-making processes. When making any decision that could impact the outcome of a democratic election, Facebook, at their most virtuous, ought to make their administrative processes transparent. Any Facebook user (or U.S. citizen, for that matter) should be able to understand and appreciate the Community Standards logic that informed their decision. If Facebook does not have a policy that would yield the desired outcome (in this case, reducing the distribution of the NY Post article), then they ought to explain transparently why their existing policies don’t capture the complexity and weight of the case at hand. If such a policy does exist, then it should be readily accessible to all Facebook users. The virtue of honesty, when deployed in this context, requires that Facebook step out from behind the opaque curtain that too often conceals their most consequential decision-making procedures.

Justice is also extremely relevant in this case. In an article on justice and fairness, the Markkula Center for Applied Ethics encourages aspiring virtuous actors to ask whether their actions treat all persons equally, for such is the crux of justice. When people are not treated equally, another important question arises: is the difference in treatment justified? Or, “are the criteria we are using relevant to the situation at hand?” (Velasquez et al.). Facebook’s decision makers had no choice but to consider all of the stakeholders involved, among them the NY Post, Facebook users, U.S. citizens, the U.S. government, and other major media publications (e.g., the NYT and WSJ). A superficial application of justice in this case might raise questions about Facebook’s unfair treatment of the NY Post: because Facebook disregarded their own policies in reducing the distribution of the article, we might think they made an exception of the publication. But more importantly, we have to ask whether this exception was justified. In more familiar terms, justice is about giving people what they deserve. When the NY Post published their October Surprise “smoking gun,” they became susceptible to widespread scrutiny. This happens to all media institutions that publish new content, but especially to institutions that publish inflammatory content written with virality as a primary goal. There is a reason that no other major news source regarded the emails on the found laptop as credible source material. There is also a reason that very little time passed between the Post’s apprehension of the material and their publication of the “smoking gun.” Perhaps knowingly disseminating potentially false information about a presidential candidate one month before a major election would be a flagrant injustice to American voters.

The practice of virtue ethics is an ongoing balancing act. There is no single virtue that is more important than the others. There is no handbook that captures the complexity of every single moral crossroad that might require virtuous action. The application of this theory happens on a case-by-case basis, and no case is exactly like another. While honesty and justice might be relevant tools for Facebook when handling this content decision, patience and temperance might be relevant tools when handling another. But how can we know for sure which virtues we ought to apply in a given case? And what happens when competing virtues recommend solutions that do not agree with each other? 

[Download a PDF version of this case for use in class.]

Citations

Axios. “NY Post story goes massive on social media despite crackdowns.” Axios, 20 October 2020, https://www.axios.com/new-york-post-hunter-biden-facebook-twitter-censor-bf8d9f32-f8cb-444e-bc12-c3b5e8694e84.html.

Civil Rights Audit. “Facebook’s Civil Rights Audit – Final Report.” July 2020, https://about.fb.com/wp-content/uploads/2020/07/Civil-Rights-Audit-Final-Report.pdf.

Poynter. “Without methodology or transparency, Facebook and Twitter become the ‘arbiters of the truth.’” Poynter, Poynter Institute, 15 October 2020, https://www.poynter.org/fact-checking/2020/without-methodology-or-transparency-facebook-and-twitter-become-the-arbiters-of-the-truth/.

Velasquez, Manuel, et al. “Justice and Fairness.” Markkula Center for Applied Ethics, Santa Clara University, 1 August 2014, https://www.scu.edu/ethics/ethics-resources/ethical-decision-making/justice-and-fairness/.

Feb 5, 2021