Anita Varma, PhD
Anita Varma is the assistant director of Journalism & Media Ethics as well as Social Sector Ethics. Views are her own.
If a grocery store announced that 89% of its food was safe to eat, and the other 11% was assuredly toxic, would you shop there?
Now imagine that this grocery store is among the only stores in town, such that it holds a near-monopoly on access to food.
Faced with such a situation, many consumers would recognize their lack of alternative options, and would demand better than an 89% food safety rate. Risking their health on 11% of the food would be a major concern across the community. After all, even if you might be in the lucky 89% with your food selections, others in your community may not be—and the effects of unsafe food are both painful and lasting. Why would anyone accept such grim constraints?
Facebook has asked communities of color to be satisfied, time and again, with 11% toxicity on their behemoth platform. Most recently, in response to the #StopHateForProfit advertising boycott, Facebook executives sat down with activist groups such as Color Of Change. Instead of offering new and substantial plans for addressing the amount of hate speech, endangerment, and hostility on their platforms, Facebook pivoted to a line of reasoning that Vice President Pence used in a recent press conference on the growing spread of coronavirus, one that amounts to: “It could be worse.”
Institutional racism means that the structure and policies that constitute an organization prioritize the interests of a dominant race, and perpetuate the subjugation of all other races.
Operating with a baseline assumption aligned with neoliberal hyper-individualism, Mark Zuckerberg has continually displayed his unexamined elite white male privilege by touting the importance of freedom of expression, which led to the absurd conclusion for years that blatant disinformation had a rightful place on the largest platform in the world.
Paying minor homage to communities of color with “listening tours,” dead-end feedback opportunities, and meetings has done nothing to resolve the core problem at hand: Facebook’s policies permit and perpetuate racism.
Nearly thirty years ago, fierce debates took place about the meaning of the public sphere. Calling attention to the ways in which face-to-face discourse privileged elite white men, feminist and critical race scholars called for attention to the conditions of communication. Bringing everyone to a table to speak does nothing if some of the people at that table have reasonable fear of being attacked and demeaned, time and again.
Permitting hate speech, claims of racial superiority, and calls for violence on their platform signals to people the world over that Facebook is not a place where they can express themselves. Hate speech silences the participation of targets of hate, and for good reason: rebutting someone who declares your race’s inherent inferiority is not a competition on the merits of argument, but a dynamic built for psychological torment.
Regularly contradicting their aim of “free expression,” Facebook remains either willfully ignorant or troublingly uneducated about the ramifications of amplifying and accelerating a hateful climate for digital communication.
What might Facebook do differently, if it were to seek an ethical sea change? For starters, remove Mark Zuckerberg from the position of judge and jury over content-related decisions. Clearly, his ahistorical grasp of phrases that are well-documented as racist means he is not qualified for the responsibility of the position. Most CEOs would readily admit that they cannot perform every job function across their organization. Zuckerberg’s reluctance to admit his shortcomings in this arena does not make them any less acute.
Then, create a team of antiracist people who understand history, civil liberties, and the meaning of free speech, and give them final decision-making power to address the problems and forms of hate speech that Facebook repeatedly “discovers.” As a recent employee walkout demonstrated, Facebook already has employees who understand the ethical issues at hand better than their current leader.
Addressing institutional racism, in any institution, begins by recognizing it. Certainly, recognizing institutional racism requires learning about the history of race-based oppression and race relations. Where many institutions fall short is in offering lip service instead of doing the work of genuine dismantling. This is precisely what Facebook has been doing for years: they “listen,” “learn,” and then do nothing other than hand-wave at reports from affected communities that remain endangered, threatened, and systematically silenced on a supposedly “free” platform.
With rumblings of a potential ad ban in advance of the 2020 election, Facebook is yet again showing signs of its shortsightedness through piecemeal tinkering with an institutional problem. It would be laughable to suggest that the hateful climate on Facebook is limited to its ads and not its organic content.
The solution, however, is quite simple: Facebook needs to recognize that the harms it continues to wreak across marginalized communities are unacceptable, and empower a team of antiracist people to effect change across the institution of Facebook.
Alternatively, if Facebook continues to do nothing, their stance will be abundantly clear—namely, that they prefer to foster racism rather than fight it.