
The Ethics of “Giving People a Voice” and Political Advertising on Facebook

Mark Zuckerberg (AP Photo/Noah Berger)

Irina Raicu

Irina Raicu is the director of Internet Ethics at the Markkula Center for Applied Ethics. Views are her own.

We are all subjects in an ongoing social media experiment. Periodically, we get interim findings.

In 2010, Facebook researchers published the results of a “randomized controlled trial of political mobilization messages delivered to 61 million Facebook users during the 2010 U.S. congressional elections.” What they found was that “the messages directly influenced political self-expression, information seeking and real-world voting behaviour of millions of people. Furthermore, the messages not only influenced the users who received them but also the users’ friends, and friends of friends.”

In 2014, Tarun Wadhwa published an article titled “How Facebook Is Shaping Who Will Win the Next Election.” He explained that “the design, policies, and algorithms chosen by the company are having a major impact on how elections are run and how the electorate gets their information”—and added that future political operatives “will need to know how to capitalize on the intricacies of targeting Facebook posts and ads.”

Political operatives did. And not just in the U.S.

After the 2016 election, when asked about Facebook’s impact on the results, Mark Zuckerberg addressed a narrower issue: he called the notion that fake news on Facebook had influenced the election “a pretty crazy idea.” He added, “Voters make decisions based on their lived experience.”

Contrast that with the claim by the Facebook researchers back in 2010, that political mobilization messages “directly influenced political self-expression, information seeking and real-world voting behaviour of millions of people.” For many voters, Facebook is a key part of “lived experience.”

About a year later, Zuckerberg published a post in which he apologized for his earlier comment about the “pretty crazy idea” but went on to argue that Facebook data showed that the company’s “broader impact—from giving people a voice to enabling candidates to communicate directly to helping millions of people vote—played a far bigger role [than misinformation] in this election.”

His statement failed to address the intersection of two of those elements: the misinformation coming from candidates who, whether in posts or in ads, are communicating directly with potential voters.

At the same time, as Business Insider reported, Facebook was touting its success in helping political candidates via finely targeted advertising. After the 2016 controversy, the company took down from its business page a “Success Stories” tab highlighting examples of its impact on elections around the world. One of them had detailed a campaign for Florida governor; in the story, a member of the winning candidate’s team offered a testimonial:

Facebook Ads provided us with unique targeting capabilities to look beyond broad demographics and specifically target messages in English and Spanish to specific groups of Cuban, Puerto Rican and other descent. This allowed us to reach different sub-groups of Hispanic voters in ways that were simply not feasible on TV and radio.

In the 2016 presidential election, some of the targeted messages uniquely able to reach particular sub-groups had been designed to suppress voting among those groups. As Bloomberg reported in an in-depth article on October 27, 2016, some messages from the Trump campaign were about to be …

delivered to certain African American voters through Facebook “dark posts”—nonpublic posts whose viewership the campaign controls so that… “only the people we want to see it, see it.” The aim is to depress Clinton’s vote total. “We know because we’ve modeled this,” says [a Trump campaign] official. “It will dramatically affect her ability to turn these people out.”

When Mark Zuckerberg later mentioned Facebook’s role in “helping millions of people vote,” he did not address the company’s role in discouraging particular constituencies from voting.

This October, almost three years to the day from the publication of that Bloomberg article, Zuckerberg gave a speech at Georgetown University. In it, he reiterated his view of Facebook as a medium for “giving people voice.” He also said, “We recently clarified our policies to ensure people can see primary source speech from political figures that shapes civic discourse.” It is a rather strange statement: “primary source” suggests research. But what does that sentence actually mean? Had Facebook previously not allowed people to “see primary source speech from political figures”? Buried in the middle of the paragraph came the clarification: “We don’t fact-check political ads.”

A subsequent blog post by other Facebook executives, titled “Helping to Protect the 2020 US Elections,” makes no mention of protecting citizens and institutions from most kinds of misinformation—at least not when it is spread by politicians. It does, however, have a section titled “Fighting Voter Suppression and Intimidation,” which details various types of misrepresentations about voting and voter registration requirements, and flatly states, “We remove this type of content regardless of who it’s coming from.” So some forms of “primary source speech,” even from “political figures,” are not beyond removal—despite Zuckerberg’s broad claim. On the flip side, other messages described as “voter suppression efforts” by the people actually undertaking those efforts would fall outside the categories outlined in the Facebook blog post.

This history of statements—by Facebook researchers, by various Facebook executives, and, repeatedly, by Facebook’s founder and CEO—highlights the hollowness of the claim about “giving people voice.” Some voices are louder than others. Some are amplified more. Some are more protected than others. Some are ignored, or discouraged, or censored. Some are manipulated. Some are fact-checked; some are not. There is no single equitable “giving of voice.”

Moreover, Facebook’s most recent policy on political advertising undermines the goal itself. Disinformation, whether spread by politicians or others, whether through paid ads or other posts, often leads to online abuse or outright violence against particular people, silencing their voices. As a report commissioned by Facebook itself explained last year, in Myanmar, for example, widespread misinformation spread via the platform “had a negative impact on freedom of expression, assembly and association for Myanmar’s most vulnerable users.”

Some Facebook policies give people voice (while controlling the volume button); some take it away. The policies occasionally change. The experiment continues, and elections are again coming up.

Dec 5, 2019