
A Change in Twitter’s “Private Information Policy”


Focusing on Images and Videos Posted Without Consent

Irina Raicu

Irina Raicu is the director of the Internet Ethics program (@IEthics) at the Markkula Center for Applied Ethics. Views are her own.

2021 has been a year full of social media content moderation policy changes. At the very beginning of the year, you might remember, Donald Trump was still president and still active on Twitter, Facebook, and YouTube. Much has changed since then—in an iterative process that played out differently on different platforms.

The most recent adjustment comes from Twitter, which on Monday announced that it is updating its “private information policy” so that it now also covers images and videos of a person posted without that person’s consent—whether or not the images feature “explicit abusive content.” In the announcement, the Twitter Safety team notes that “misuse of private media can affect everyone, but can have a disproportionate effect on women, activists, dissidents, and members of minority communities.” The company will now respond when users or their authorized representatives report “that a Tweet contains unauthorized private media.”

The announcement adds that the policy

is not applicable to media featuring public figures or individuals when media and accompanying Tweet text are shared in the public interest or add value to public discourse. … We will always try to assess the context in which the content is shared and, in [some] cases, we may allow the images or videos to remain on the service. For instance, we would take into consideration whether the image is publicly available and/or is being covered by mainstream/traditional media (newspapers, TV channels, online news sites), or if a particular image and the accompanying tweet text adds value to the public discourse, is being shared in public interest, or is relevant to the community.

The new policy was greeted by a number of critics pointing out that it might lead to the suppression of valuable posts, and expressing concern about uneven implementation. As Professor Casey Fiesler observed (in a tweet), “After skimming a lot of quote-tweets, here are the two most common reactions I'm seeing to this policy: (1) ugh this is all about protecting social justice warriors and silencing conservatives (2) ugh this is all about protecting nazis and silencing activists.”

In Inc., Bill Murphy Jr. pointed out that a lot of the responses from Twitter users seemed to miss the exceptions noted in the policy. He wrote, “the good news is that based on my reading of the policy… it appears that a lot of the things Twitter users are concerned about… would probably still be allowed. … At least, I think so. There's a lot of discretion for Twitter baked into the policy.”

Writing in Gizmodo, journalist Shoshana Wodinsky also noted that “‘public interest’ or ‘add[ed] value’ are squishy phrases that Twitter will need to define for itself.” That concern regarding “squishy phrases” is applicable to all content moderation policies, however; less squishy phrasing would be clearer and easier to apply consistently but might well leave out unspecified harms—or benefits—that would be covered by broader language. There are trade-offs involved, not only in the policies themselves, but in the language that frames them, too.

The second concern, about the company getting to define those terms for itself, is the one that Facebook, for example, tried to address by creating its unique Oversight Board. 2021, however, also brought the Oversight Board’s first transparency reports, which concluded, among other things, that “Facebook has not been fully forthcoming with the Board on its ‘cross-check’ system, which the company uses to review content decisions relating to high-profile users.” So the question of who defines the terms remains a live one, involving its own trade-offs[*]; most activists concerned about the power of platforms to define their own content moderation terms are even more concerned when governments, for example, try to take on that role.

In its post about the transparency reports, the Oversight Board added that “a clear theme has emerged: Facebook isn’t being clear with the people who use its platforms. We’ve consistently seen users left guessing about why Facebook removed their content.” On that point, at least, when it comes to the removal of images or videos depicting non-public figures and posted without their consent, Twitter will be able to be clear.

It will be interesting to see, in turn, how transparent Twitter will be about circumstances in which the Safety team considers context, determines that an exception applies, and decides to allow an image or video to stay up even if a person depicted in it reports it as recorded without consent and requests its removal.

Photo: "Twitter Buttons at OSCON" by Garrett Heath is licensed under CC BY 2.0

[*] Most ethical decisions involve complex tradeoffs. MCAE’s “Framework for Ethical Decision Making” stresses the importance of considering a variety of alternative actions, and assessing them through multiple ethical “lenses”:

  • Which option best respects the rights of all who have a stake? (The Rights Lens)
  • Which option treats people fairly, giving them each what they are due? (The Justice Lens)
  • Which option will produce the most good and do the least harm for as many stakeholders as possible? (The Utilitarian Lens)
  • Which option best serves the community as a whole, not just some members? (The Common Good Lens)
  • Which option leads me to act as the sort of person I want to be? (The Virtue Lens)


Dec 1, 2021