
Why Facebook Left up the “Drunk Pelosi” Video but YouTube Took It Down

Nancy Pelosi (photo: Associated Press)

Subramaniam Vincent

This article was originally published in Future Tense (a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society) on June 17, 2019.

Subramaniam Vincent is the director of Journalism and Media Ethics at the Markkula Center for Applied Ethics. Views are his own.

Every week, it seems, social networks face some new dilemma about the speech they permit (or don’t). It’s particularly confusing when different platforms come to different decisions about the same content. When it came to the doctored Nancy Pelosi video, Facebook decided to leave it up but add fact-checkers’ warnings, while YouTube took it down. Pinterest removes anti-vaccination content altogether, while Facebook argues that “cutting distribution” is enough. Facebook allows posts that say the Holocaust is a hoax, but as of June 5, YouTube does not. That leads to situations like this one: a Facebook group calling itself Holocaust Hoax links to a YouTube video that YouTube has already deleted.

These differences come down to each platform’s community standards, which spell out rules about what content is permitted and what is not. The standards also outline enforcement strategies, which may mean removing content outright but could also mean reducing its distribution or hiding it from most users.

There are lots of types of content that the platforms agree should be removed. These include self-harm/suicide, threats to child safety, nudity (Tumblr only recently outlawed it), graphic violence, firearms, spam/scams, bullying and harassment, and exploitation. These almost always involve individual harm arising from borderline or real criminal activity. In some of these areas, the platforms also have a policy of cooperation or escalation to law enforcement.

In contrast, hate speech, misinformation, and disinformation are areas that researcher Robyn Caplan of Data & Society has called “ethically ambiguous.” (The central distinction between misinformation and disinformation is intent. Misinformation is an umbrella term referring to false or partially false information. Disinformation is its sinister cousin, intended to mislead, disorient, or sow division.) Many of the recent crackdowns (on white nationalism and supremacy) and controversies (the Nancy Pelosi video) lurk in those areas. Given how confusing all of this can be, I thought it would be helpful to compare the community standards of Facebook (which also apply to Instagram), Twitter, YouTube, and Pinterest in these situations.

Hate speech involves characterizations and attacks at the group level, and decisions to keep or remove content require local, cultural, or historical context.

All the platforms identify several protected characteristics: race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. While Facebook, YouTube, and Pinterest also name immigration status and veterans as protected categories, Twitter does not.

Facebook’s standards attempt to define hate speech as a “direct attack on people” based on protected characteristics, with “attack” meaning “violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.” YouTube and Twitter do not offer definitions but go into substantive detail, with examples, about the types of attacking activity that violate their policies. Pinterest has just a one-paragraph policy that names the protected groups and simply outlaws attacks. As of June 5, in a major revision, YouTube outlaws content that promotes discrimination. Although Facebook outlines three tiers of severity for hate speech violations, the word discrimination does not appear in its standards. Twitter and Pinterest do not bring up discrimination, either.

Platforms are also grappling with how to handle groups that reject reality, like those that spew conspiracy theories and deny major events such as the Holocaust. Often, the goal is to recruit white nationalists to fuel the separatist movement. While denials of facts are related to misinformation and disinformation, in the case of hate and genocide events like the Holocaust, they are often fundamentally hate speech. Facebook and Twitter permit content denying that major violent events happened. Their general position is that people are allowed to say things that are false unless the content violates some other policy. In Facebook’s case, for instance, a post that invokes Holocaust denial as part of incitement to violence against a group of people will likely be taken down, but denial alone will stand. But in its recent aggressive move against hate speech, YouTube added that it would not allow denials of well-documented violent events like the Sandy Hook shooting.

If one recent story is the poster child for the messy community standards on outright false content, it is the video doctored to make it seem as if Nancy Pelosi were drunk. Facebook and Twitter did not take it down, despite it being clearly false. YouTube did. Just this week, U.K. artists released a fake video of Mark Zuckerberg on Instagram, though one that was easy to identify as such. Facebook declined to take it down, citing the same policy. This time, YouTube has also left it up, since the video is not a hate-implicated falsehood.

Facebook says the right to free expression inherently allows people to make false claims, so it will not take such content down. Its stated policy is to demote false content to reduce its distribution, not delete it. Only when personal harm, threats, hate speech, or incitement to violence is joined with the falsehood does deplatforming come into play. Facebook also takes down posts containing false information about election dates, times, locations, voter qualifications, and voting methods under its policy targeting voter suppression. But other types of election misinformation (such as false claims about polling station closures) rated false by external fact-checkers are demoted, not removed. Facebook also notes that automated detection of false news, and its separation from satire, is a “challenging and sensitive” issue.

Twitter’s policy for misinformation is very similar to Facebook’s in terms of latitude for speech, but it comes with its own set of exceptions, some of which differ from Facebook’s and some of which are similar. The primary carve-out is for elections, through its election integrity policy, published in April. Twitter says it will take down election-related misinformation that could harm the integrity of elections, such as false information about where to vote, where to register to vote, or the date of an election, as well as threats and misleading claims about voting stations.

Twitter also says that it will not take down “inaccurate statements about an elected official, candidate, or political party” or “organic content that is polarizing, biased, hyperpartisan, or contains controversial viewpoints expressed about elections or politics.” That’s why the Nancy Pelosi video stayed up on Twitter, even though it was expressly false and about a politician. Twitter has a special policy for France, where people can complain to Twitter about false information when it is a “threat to public order.” Twitter adopted this in response to a clause in France’s 2018 law allowing a judge to hear complaints over false news “disturbing the peace” during elections.

Aside from its new position on the denial of major violent events, YouTube’s community standards for misinformation are very similar in spirit to Facebook’s and Twitter’s. For instance, on May 16, a user named Simon Cheung filed a public complaint about the YouTube channel Free USA News, which posts politics-related misinformation videos; YouTube did not respond. YouTube says it will only take down content implicated in other harms, such as voter suppression and anti-vaccination misinformation.

So why did YouTube take down the Pelosi video last month? This had nothing to do with the misinformation aspect. YouTube removed the video because it was a violation of its spam and deceptive practices policy, USA Today reported. However, YouTube’s spam and deceptive practices policy does not specifically have a clause outlawing manipulated video.

Pinterest goes further than Facebook, YouTube, and Twitter. It was the first platform to clamp down on anti-vaccination content under health and public safety considerations. It is also the only platform that explicitly invokes the terms misinformation and disinformation.

One thing all the platforms have in common is that their policies are perennially in a state of reactive catch-up. One problem is that, because of their scale, Facebook, Instagram, YouTube, Twitter, and Pinterest all use artificial intelligence to flag content for moderators, who then make very quick decisions using community standards. A.I. is not great at detecting context and intent, especially for hate speech and political misinformation. For instance, when YouTube did an automated purge of hate content earlier this month, the net also caught educational videos about the Holocaust and journalists’ coverage of denialism. Likewise, this A.I. lag makes it a major challenge to automate the detection of false news in an environment where users are sharing protected speech such as satire. In its false news policy, Facebook gives the game away: “There is also a fine line between false news and satire or opinion.” If the A.I. underperforms substantially in making this separation, millions of improperly flagged cases will land with the content moderation teams.

What is evident from these pressure points is this: The platforms appear to be letting their community standards be set by their technological limitations. They are applying their technology development paradigm (design, deploy, iterate) to policy. Each major crisis exposes something, and they revise their policies in response. In March, Facebook banned white nationalism and separatism; it had already banned white supremacist content earlier. “White nationalism and white separatism cannot be meaningfully separated from white supremacy and organized hate groups,” Facebook said in a statement. And by banning outright denials of major violent events, YouTube is saying that it can develop a specific response to hate-related falsehoods. This is similar to the platforms developing policies on election- and anti-vaccination-related misinformation.

If there is a trend, it seems to be that platforms develop responses only after enough public pressure has built up that the discourse has elevated something to a widely acknowledged harm or risk of harm. In itself, responsiveness to evidence-based criticism from their users and the public is a good thing. But these responses often come after tens of millions of people have been exposed to disinformation, after people have been intimidated and attacked by hate, or after public health outbreaks have occurred. For the moment, we will remain in crisis mode.

Jun 24, 2019