
‘The Social Dilemma’ Is a Half-Missed Opportunity

couple embracing on the beach, each looking at mobile device screens (Roman Odintsov/Pexels)

Irina Raicu

Irina Raicu is the director of the Internet Ethics Program (@IEthics) at the Markkula Center for Applied Ethics, Santa Clara University. Views expressed are her own.

Much has been written by now about the documentary that first aired on Netflix at the end of August 2020—after months of pandemic restrictions in which people complained that they’d watched all there was to watch, and were communicating with each other, more than ever, via technology. It was great timing for the release of an exposé about life in the midst of the “attention economy” and the role that tech companies (especially social media ones) play in amplifying misinformation, polarization, loneliness, and a distorted sense of the world and of oneself—all the while manipulating the “users” they claim to serve.

The film features powerful interviews with tech insiders, many of whom were directly in charge of the choices that led us to the current state, and who are now speaking about what they’ve realized since and what they think needs to change. It also presents researchers and other critics who have long warned about the negative impacts of technology: Shoshana Zuboff, Rashida Richardson, Renee DiResta, Cynthia Wong, Maria Ressa, and more. Had the film included simply the interviews, it might still have drawn criticism from those who found it too one-sided, but it would have helped many non-experts understand some of the troubling realities of the attention economy and the dangers of a business model that relies on finely targeted advertising. Unfortunately, choices made by the filmmakers directly undermine the messages articulated by the speakers.

The experts interviewed address key topics such as what “free” means in the context of online services; the intentional exploitation, by some engineers and designers, of the known vulnerabilities of our human minds (in the service of maximizing the time that people spend on their products); the impact of constant, minute A/B testing; the current impact of AI on addictiveness (and the need to focus on such existing harms, rather than anticipating futuristic ones); the danger of “personalized” reflections of reality; the rabbit holes of algorithmic recommendations; and the way in which addictive technology pulls us away from in-person interactions, leaving us with the husks of relationships. Some commentators have said that there is nothing “new” here, nothing we didn’t already know—but I suspect that those who feel that way are people who work on or research such technologies for a living. Many fervent users of social media don’t even realize that the posts they see in their feeds are a subset selected by algorithms. Many have surely never heard of Stanford’s Persuasive Technology Lab and its related courses. Terms like “growth hacking” might be familiar to folks living in Silicon Valley, but this film is not aimed at Silicon Valley.

In an hour and a half, “The Social Dilemma” gets across a lot of information from articulate people who know what they’re talking about. Unfortunately, their messages are blurred and even contradicted by choices made by another group of human beings—those who shaped the film. Perhaps the most egregious of those is the choice to represent AI algorithms as three slightly different versions of the same human—a caricature of the tech bro embodied (in a stroke of typecasting) by the actor who played the bad apple on the TV show “Mad Men.” Even as the experts explain that ill intent is not required for algorithms to have terrible outcomes, and that AI that adapts based on the data it ingests makes choices in a very different way than humans do, the film shows three guys talking to each other: “Shall we nudge? Nudge away!” Viewers might be forgiven if they don’t realize, perhaps until the end when the three “glitch” (but maybe even then), that the “3 guys” (as articles commenting on the movie refer to them) were really AIs. Experts warn against anthropomorphizing AI; “The Social Dilemma” does it to the hilt.

It’s more than that, though; when an interviewer mentions feeling like a lab rat (after hearing one technologist describe the experimentation designed to change behavior), one of the interviewees answers “We are all lab rats.” Over and over again, leading technologists explain that they, too, are addicted; their families, too, are negatively impacted. But those three AI “guys” are not.

As one journalist later interviewing Tristan Harris about the movie put it, “it certainly seems like these dramatizations are sort of part of the problem that the film is trying to address. ... there’s no cabal, it’s these systems that we don’t know how they work. But then isn’t it a disconnect to portray it as these three guys standing there?” And Harris finds himself having to answer, “I see that point and I’m speaking to you as a subject in the film not the director or filmmaker.”

Those quotes come from an article titled “’Social Dilemma’ Star Tristan Harris Responds to Criticisms of the Film, Netflix’s Algorithm, and More.” In it, Harris is also asked about a quote from one of his presentations that is included in the movie: “It’s checkmate for humanity.” The reviewer pushes back on that as an exaggeration. Harris answers: “the checkmate point actually was following a different part of the presentation that was not there [in the film] which was actually around deepfakes….” Taking quotes out of context and making them seem to refer to something else is simply misleading.

Another strangely misleading moment comes much earlier in the film, in a collage of moments from various interviews in which some of the tech creators are asked, “So what is the problem?”—and are then shown taking a long time to answer. It is an overly broad question, to which there are many answers (and the rest of the film makes it clear that the technologists have lots to say in response). But that collage of initial hesitations makes it look like the interviewees don’t know, or haven’t thought about, the question. Their hesitation, their taking time before answering, is presented as a statement in itself. And that is the opposite of the movie’s actual message: that there are lots of problems, there is much that needs to be explained, and there are no pat answers or solutions.

When they don’t contradict their own interviewees, the filmmakers caricature them: while Tristan Harris explains how he first sounded the alarm at Google about the effects of addictive technology, he is depicted (literally) as a cartoon—sighing, pondering with a hand to his mouth, his ideas floating around as paper planes. Distracted by the cartoonish figure, you might miss the content of what he’s actually saying—the fact that hundreds of Google employees wrote back that they agreed with him and had felt the same concerns. Those paper planes were actually messages distributed via technology, and answered by email. Again, one of the key messages of the film is undermined by the visual choices.

Maybe it’s those choices that distracted some of the documentary’s critics to the point of minimizing some of its key points. In an article in The New Republic, for example, Elizabeth Pankova argues that the film “lacks any substantive political message other than a nod at ‘regulation.’” But the fact that so many Silicon Valley insiders and creators vehemently argue for the need for regulation is more than a “nod.” It is important, and it undermines long-standing Silicon Valley narratives. Pankova also writes that the film suggests that the “titular ‘dilemma,’ then, isn’t Facebook or Twitter or Instagram’s to solve. It’s ours. Turn off your notifications! Delete some apps! Follow people you disagree with on Twitter! These are the solutions hastily fired off as the credits roll.” That claim doesn’t square with the call for regulation, or with Tristan Harris’ answer when he is asked who is responsible for addressing the harm: he says that the platforms are.

The suggestions made as the credits roll are not presented as solutions for the dilemma—they are self-defense moves that the technologists explain they deploy themselves, while we all wait (or push) for the regulators to do more.

The message that emanates from the words of those interviewed—technologists and business folk and academics and public health experts alike—is that we all need to become more informed about what drives the companies’ choices; educate others; call for regulation to protect individuals and the common good; uninstall the stuff that's not needed; and, whenever we can, make our own choices—rather than accepting the algorithms’ recommendations.

To add to those suggestions: if you can, ignore the movie’s embodied AI, the fictionalized family, the cartoons, and the other devices that try to add entertainment value to the subject matter. Focus, instead, on Shoshana Zuboff’s comments about markets that should be outlawed; on Cathy O’Neil saying that AI can’t save us from disinformation; on Sean Parker arguing that the inventors and founders of some of the biggest platforms knew that they were exploiting vulnerabilities in human psychology, “understood this consciously—and we did it anyway.” Instead of quoting Sophocles, the movie might have started with a quote from Jaron Lanier: “We’ve put deceit and sneakiness at the absolute center of everything we do.”

While many people didn’t already understand all of the things that “The Social Dilemma” addresses, even those who knew a lot might be surprised by another claim that Lanier makes—which, again, runs counter to widely accepted tech narratives: “It’s the critics,” Lanier says, “who are the true optimists.” He is not referring to the critics of the film, of course, but to those who both understand and criticize the current state of our tech-infused ecosystem, and who ask all of us to demand (and build) better. “The Social Dilemma” was a great opportunity to educate a broad public and underscore this point; it’s too bad that some stylistic choices undermine its substance.

Oct 26, 2020
--

8 Things You Can Do If You're Concerned about Social Media

  1. As you become more informed about what drives the companies’ choices, help educate others, too.
  2. Uninstall apps that you don't need or use often.
  3. Call for related regulation to protect individuals and the common good—but also research what specific proposed regulations would actually do (not all such proposals address real harms or offer meaningful solutions).
  4. Whenever possible, don't give permission for your data to be used for personalization of ads or recommendations regarding what to read or watch.
  5. Don't treat social media as a source of news.
  6. Don't let yourself become a source of misinformation; be careful what you share (read Kate Starbird's "Disinformation's Spread: Bots, Trolls, and All of Us").
  7. Make a list of things you'd like to do that you don't have time for--then determine whether checking your favorite social media platform less (perhaps only every other day, if your work or family commitments don't require more frequent check-ins) would actually free up time for you to do those things. Be a good role model for your kids, friends, etc.
  8. While on social media, especially on Twitter, follow all of the technologists, academics, and other researchers and activists interviewed in "The Social Dilemma."
