
Metrics and Misdirection


Questions Left Unanswered at Another Hearing Lambasting Social Media CEOs

Irina Raicu

Last week, U.S. senators grilled the CEOs of five social media companies about the effects of their platforms on the children and teens who use them. Remember the “Senator, we run ads” hearing, and the coverage that followed it, back in 2018? Things sure have changed.

Or at least some things. The tone of the hearing was very different. But other things haven’t changed—for example, we still don’t have a federal privacy law.

Some of the CEOs called and/or subpoenaed to testify last week also didn’t seem to have changed their tactics: citing, for example, numbers that made their platforms look good even when they were asked for different numbers, or simply ignoring the numbers that made them look bad. A kind of metrics-based misinformation.

When the CEO of X was asked how many people the company had employed in trust and safety roles before its massive layoffs, she answered instead that X had “increased the number of its trust and safety staff by 10 per cent over the past year.” As of last September, according to some media reports, Twitter/X’s “trust and safety team... [had] gone from what had been about 230 people to somewhere around 20.” And in reports submitted to the EU in November, the number of content moderators X disclosed was still substantially lower than the numbers disclosed by other social media platforms.

In the same hearing, Meta’s CEO said that his company employs 40,000 content moderators. But no one followed up to ask about the relationship between that seemingly large number and the number of users of Meta’s platforms. More than 3 billion people use one or more of the Meta platforms daily (Facebook, Instagram, WhatsApp); as the rough calculation below shows, that works out to about one moderator for every 75,000 daily users. So maybe there should be 100,000 content moderators? Or 300,000? What would be the number needed for effective management of that massive information ecosystem?
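Here is that back-of-the-envelope math as a small Python sketch, using the figures cited above; the 100,000 and 300,000 staffing levels are hypothetical points of comparison, not figures from the hearing.

    # Rough ratio of daily users to content moderators.
    daily_users = 3_000_000_000  # ~3 billion people using Meta's apps daily
    for moderators in (40_000, 100_000, 300_000):  # reported figure, plus two hypothetical levels
        ratio = daily_users // moderators
        print(f"{moderators:,} moderators -> about {ratio:,} daily users per moderator")

Even at the largest of those hypothetical staffing levels, each moderator would still correspond to about 10,000 daily users.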

Here is another large number: Meta apparently generated a $14 billion profit in the final quarter of 2023; as journalist Cyrus Farivar broke it down on Twitter/X, “that's over $155M per day, or ~$6.5M per hour, or ~$108,000 per minute during that period.” Given such profits, what would be the number of content moderators that the company should employ for the sake of its billions of users around the world?
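Farivar’s breakdown is easy to verify; here is a quick Python check, assuming the roughly 90-day quarter his figures imply.

    # Back-of-the-envelope check of the quarterly-profit breakdown.
    quarterly_profit = 14_000_000_000  # ~$14 billion reported for Q4 2023
    days_in_quarter = 90               # assumption: a roughly 90-day quarter
    per_day = quarterly_profit / days_in_quarter
    per_hour = per_day / 24
    per_minute = per_hour / 60
    print(f"~${per_day / 1e6:.1f}M per day, ~${per_hour / 1e6:.1f}M per hour, ~${per_minute:,.0f} per minute")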

The claim by Meta’s CEO that his company is an “industry leader” in efforts to protect kids on its platforms doesn’t ring true the way it once might have, given that the whole industry is seen as failing at this task. And when he says that the company’s investment in trust and safety has been “relatively consistent over the last couple of years,” that actually seems like a self-own: the results suggest that much more investment was needed, and reports indicate that in 2021 he specifically refused a request for more employees to be added “to focus on child safety by addressing self-harm, bullying, and harassment.”

Oh, and one more number: according to the Pew Research Center, the social media platform most widely used by teens in 2023 was YouTube. About 9 in 10 teens surveyed said they use it, and about 1 in 5 of them said they’re on it “almost constantly.” The CEO of YouTube did not participate in the latest hearing.

Content moderation is a nuanced and difficult problem. Companies do have incentives to address it; it’s not in their interest to show kids harmful content or to have their users face abuse. But they’ve expanded their user bases despite the documented harms, not just to children but to all users of their platforms, and various laws have shielded them in the process. Some state laws now being proposed would actually make the problem worse. What doesn’t seem to be discussed enough is the question of resources: what percentage of the companies’ revenues, if invested in content moderation, might actually make a substantive difference? Even if we accept that some harmful content will always get past both AI and human moderators, what if the amount were vastly reduced? The number of people who would be protected would be massive, too.

Photo Credit: “Young people hands using smartphones.” By neonshot/Adobe Stock (cropped).

Feb 5, 2024