
Notes from a Content Moderation Conference


Irina Raicu

Photo courtesy of Kate Klonick

Irina Raicu is the director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University.  Views are her own.

On the first Friday in February, the High Tech Law Institute (part of Santa Clara University’s School of Law) hosted a first-of-its-kind gathering: the Content Moderation and Removal at Scale Conference. (Videos, essays, and coverage are available at this link.)  Amid growing debate, congressional hearings, and new academic research about the role that online platforms play in the distribution of misinformation, trolling, terrorist and other political propaganda, and online harassment, this conference (the brainchild of Professor Eric Goldman) invited the representatives of those platforms to explain how they draw up their policies about which user-generated content to take down, as well as how they implement those policies.

The conference included panels on six broad topics: an overview of different companies’ content moderation operations (with representatives from Automattic, Dropbox, Facebook, Google, Medium, Pinterest, Reddit, Wikimedia, and Yelp); the history of content moderation; the hiring, training, and protection of the employees or contractors who do the actual work of content removal; the interplay of humans and AI as the latter is increasingly used for content moderation; the question of “insourcing” content moderation to employees versus outsourcing it (to vendors or to the platforms’ own users); and, finally, a panel on transparency and appeals—addressing what various companies are willing to disclose about the numbers and types of take-downs on their platforms, as well as the policies and means that allow users to appeal the removal of their posts or their entire accounts.

That there was a hunger for such information became clear from the sold-out audience, which included content moderation practitioners, lawyers representing various civil society groups, journalists, and academics.

The blog TechDirt has been publishing a series of articles by some of the speakers who participated in the conference, and media outlets like The Atlantic have also published articles about insights gained at the meeting. Here are some of my own notes, to add to those materials.

I work at an ethics center, so I may notice such language more than most—in any case, I was struck by the opening comments delivered (via tape) by Senator Ron Wyden, who argued that online platforms have a moral obligation to do more in this area, even though they enjoy broad legal immunity for content posted by their users. Other speakers framed their work in ethical terms, too: Ted Dean, of Dropbox, for example, talked about content moderation functioning in support of the company’s values; Alex Feerst explained that at Medium content moderators are full-time employees in part because the company felt that was “the right thing to do.”

If there was one thing that all of the participants wanted to make clear (and did, especially in the first panel), it was that decisions about what content to take down and under what circumstances are really hard, and that there are no easy answers that apply to all companies. In many ways, the companies gathered were more different than they were alike—from the number of users they have, to the kinds of content those users post, to business models that do or don’t monetize content via advertising. It might be interesting to consider a smaller conversation among the subset of companies that do have more in common—say, Facebook, Twitter, and Reddit: would they be able to come up with some common answers? And, if so, would that be a good idea? During the conference, Kevin Bankston, the director of the Open Technology Institute, tweeted, “I for one think the idea of a standard ‘what is bad speech’ recommended ruleset for all companies to follow is a horrifying idea.”

On Twitter and in the room, a number of audience members noted that several of the speakers compared their companies’ policies and practices to the drawing up and application of laws. Monika Bickert, of Facebook, for example, explained that multiple distinct teams meet at Facebook every two weeks (sometimes including outside experts) to review content moderation policies—and described those meetings as “mini legislative sessions.” Alex Feerst, of Medium, described the development of an internal “common law” as part of the company’s content moderation process. Jessica Ashooh, of Reddit, explained that the volunteer moderators of various subreddits come up with their own rules, but that there are also some rules that apply to the entire site—and she analogized that to the federal system.

I noticed that theme as well, but it did not strike me as particularly revelatory, given that many of the speakers were lawyers, speaking at a law school, to an audience that clearly included a good number of lawyers, too. It also struck me less as an expression of (or grasp at) power, and more as an acknowledgment of reality. Of course companies have been defining, from the beginning, what we say, and how, on their platforms. They make policies (and design choices) and enforce them; whether we call those “rules” or describe them, metaphorically, as “laws,” the power is the same. And yes, the platforms have and exercise vast power. Some critics argue that they exercise it too much; others argue that they’re trying to evade their responsibility and don’t exercise it enough; a third group would argue that the problem is that they take down the wrong things: too often, for example, removing the posts and accounts of activists who are fighting abuses, while leaving up the posts and accounts of trolls and harassers. As a recent paper by the EFF’s Corynne McSherry, Jillian York, and Cindy Cohn argues,

We’ve seen prohibitions on hate speech used to shut down conversations among women of color about the harassment they receive online; rules against harassment employed to shut down the account of a prominent Egyptian anti-torture activist; and a ban on nudity used to censor women who share childbirth images in private groups. And we've seen false copyright and trademark allegations used to take down all kinds of lawful content, including time-sensitive political speech.

Their paper is titled “Private Censorship Is Not the Best Way to Fight Hate or Defend Democracy: Here Are Some Better Ideas.” Yet users, various organizations, and governments (some of which actually represent their citizens) are telling platforms that they have an ethical obligation to control what they publish and amplify.

To do this well, consistently, at the scale at which some of these platforms operate, is a gargantuan task. Is it outright impossible? And, if so, should we accept the idea of ungovernable channels that allow for the instantaneous distribution and amplification of information around the world?

One thing on which there seemed to be consensus among the speakers was that effective content moderation requires diversity, both among the people who devise policies and among those who implement them.

Another point of consensus was that AI is increasingly helpful in addressing some content moderation challenges, but that it will not “save us.” Humans will continue to be involved in the process; it cannot work without them.

And that brings us to the panel that I found most striking—the one about the hiring, training, and mental well-being of the content moderators themselves. As the panel’s moderator, UCLA Professor Sarah Roberts, points out in an article published in conjunction with the conference, commercial content moderation workers “are critical to social media’s operations and yet, until very recently, have often gone unknown to the vast majority of the world’s social media users.” Roberts is doing groundbreaking academic work about this aspect of the internet, and about the rights of the humans whose work allows us to have the internet that most of us use. The internet that they see is very different. As one of the panelists pointed out, most of the people who work in content moderation had never come across this kind of content before starting their jobs. It struck me, hearing that, that neither have the rest of us. How many of the folks concerned about what online platforms “censor” have actually seen the kind of things that content moderators take down every day? We worry about the hard cases, the gray areas, the fine lines, and yes, we should—but do we really understand the nature and the scope of what gets removed? Those questions speak to the importance of this conference, and to the need for follow-up.

There is a lot we don’t know; this conference was just a start. And we cannot make good policy, or assess effectively what companies are doing, in the absence of data. We need more data about each of the six topics delineated by the panels—and we need more context for that data, too. For example, after Google’s Nora Puckett pointed out that Google is aiming to have 10,000 people worldwide working on content moderation, internet rights activist Bankston tweeted, “Google’s content moderation staff is bigger than the entire staffs of the vast majority of all internet companies. If this is the norm expected of global platforms, I hope you really like the platforms we have now because they are all we will ever have.”  So we need context—because the norm may, in fact, be proportionality. Google’s content moderation staff may actually not be enough, given the size and impact of Google’s platforms. Maybe the norm should be that any startup that contemplates a product with a social media component needs to evaluate its concomitant content moderation needs, and staff accordingly.

On Twitter, Lisa Brewster wrote, “I'd love to see analysis that compares the size of various content moderation teams (FTE, contractor, volunteer) with the overall number of employees, volume of content published, number of end users, revenue…” And Bankston agrees on the need for more data. As he moderated the panel on transparency and appeals, he asked the speakers, “Are you guys feeling pressure to publish this data? And if not, what more can we do?"

A number of useful suggestions came up during the conference. One of them was that “Trust and Safety” team members (i.e., the content moderators) should contribute to product development—and should be consulted before any new feature is released. Their review, grounded in the reality that they deal with every day, would be very different from the legal review, but just as necessary.

And what makes a good content moderator? Alex Feerst, of Medium, had the line of the day: to do the job well, he said, you need “the mind of a philosopher, the gut of a police officer, and the heart of a kindergarten teacher.” That is a lot to ask. One thing that we can all do is to demand more information about the work that content moderators do (and the cost that they bear for the rest of us); recognize their contributions; and demand that companies do much more to reflect the value of those contributions, as well.

A few days after the conference, the world learned that John Perry Barlow had died. A leading internet rights activist, Barlow had been (among many other things) a co-founder of the Electronic Frontier Foundation. April Glaser, a former EFF staffer, recently wrote in Slate about Barlow’s legacy:

I can’t help but ask what might have happened had the pioneers of the open web given us a different vision—one that paired the insistence that we must defend cyberspace with a concern for justice, human rights, and open creativity, and not primarily personal liberty. What kind of internet would we have today?

The question of content moderation fits squarely in that area of tension between liberty and justice—with financial interests thrown in to complicate things even more. It involves a very difficult balancing of rights. Reasonable and well-meaning people will disagree about the decisions reached. And we’re likely to keep talking about it for a long time to come.

Feb 12, 2018