
Rethinking Ethics Training in Silicon Valley

Irina Raicu

This article was originally published in The Atlantic on May 26, 2017.

I work at an ethics center in Silicon Valley.

I know, I know, “ethics” is not the first word that comes to mind when most people think of Silicon Valley or the tech industry. It’s probably not even in the top 10. But given the outsized role that tech companies now play, it’s time to focus on the ethical responsibilities of the technologists who help shape our lives.

In a recent talk, technologist Maciej Ceglowski argued that “[t]his year especially there’s an uncomfortable feeling in the tech industry that we did something wrong, that in following our credo of ‘move fast and break things,’ some of what we knocked down were the load-bearing walls of our democracy.”

This was not unforeseeable—or even unforeseen. In 2014, for example, in an article titled “How Facebook Is Shaping Who Will Win the Next Election,” Tarun Wadhwa cited a study published in 2012: “A 61-Million-Person Experiment in Social Influence and Political Mobilization.” The study’s authors reported on “a randomized controlled trial of political mobilization messages delivered to 61 million Facebook users during the 2010 US congressional elections. The results show that the messages directly influenced political self-expression, information seeking and real-world voting behaviour of millions of people.”

So we already knew that tools for “maximizing engagement” can shape the political sphere. In 2014, Wadhwa concluded, “Whether it wants this responsibility or not, Facebook has now become an integral part of the democratic process globally.”

We also know that technology can harm our democracy. Privacy invasions and algorithmic manipulation, for example, can limit people’s ability to research and formulate opinions, which in turn affects how they express their views—even via voting. When companies implement practices that are good for targeted advertising but bad for individuals’ democratic engagement (for example, the use of “dark posts” on Facebook, tied to psychological profiles built for hundreds of millions of Facebook users in the U.S.), the benefits-versus-harms balance tilts sharply toward harm.

Who minds that balance?

You often hear the adage that law can’t keep up with technology. What about ethics? Ethics, too, is deliberative, and new norms take some time to develop; but an initial ethical analysis of a new development or practice can happen fairly quickly. Many technologists, however, are not encouraged to conduct that analysis, even superficially. They are not even taught to spot an ethical issue—and some (though certainly not all) seem surprised when backlash ensues against some of their creations. (See, for example, the critical coverage of the now-defunct Google Buzz, or more recent reaction to, say, “Hello Barbie.”)

A growing chorus has argued that we need a code of ethics for technologists. That’s a start, but we need more than that. If technology can mold us, and technologists are the ones who shape that technology, we should demand some level of ethics training for technologists. And that training should not be limited to the university context: an ethics training component should also be included in the curriculum of any developer “bootcamp,” and perhaps in the onboarding process when tech companies hire new employees.

Such training would not inoculate technologists against making unethical decisions—nothing can do that, and in some situations we may well reach no consensus on what the ethical action is. Such training, however, would prepare them to make more thoughtful decisions when confronted, say, with ethical dilemmas that involve conflicts between competing goods. It would help them make choices that better reflect their own values.

Sometimes, we need consumers and regulators to push back against Big Tech. But in his talk titled “Build a Better Monster: Morality, Machine Learning, and Mass Surveillance,” Maciej Ceglowski argues that “[t]he one effective lever we have against tech companies is employee pressure. Software engineers are difficult to hire, expensive to train, and take a long time to replace.” If he is right, then tech employees might have even more power than people realized—or at least an additional kind of power they can wield. All the more reason why we should demand that technologists get at least some ethics training and recognize their role in defending democracy.

I work in an applied ethics center, and we do believe that technology can help democracy (we offer a free ethical-decision-making app, for example; we even offer a MOOC—a free online course—on ethical campaigning!). For it to do that, though, we need people who are ready to tackle the ethical questions—both within and outside of tech companies.

Irina Raicu is the director of the Internet Ethics program at Santa Clara University’s Markkula Center for Applied Ethics.

This article is part of The Democracy Project, a collaboration with The Atlantic.
