Three Recent Calls for a Tech Code of Ethics
Conditioning the morality muscles
Irina Raicu is the director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University. Views are her own.
In a talk delivered in late April, titled “Build a Better Monster: Morality, Machine Learning, and Mass Surveillance,” Pinboard creator Maciej Ceglowski called for the development of a new code of ethics:
We need a code of ethics for our industry, to guide our use of machine learning, and its acceptable use on human beings. …
Young people coming into our industry should have a shared culture of what is and is not an acceptable use of computational tools. In particular, they should be taught that power can't be divorced from accountability.
Ceglowski was speaking at a conference held in Philadelphia. Two days later, an article published in Quartz described a separate Silicon-Valley-based effort to develop a code of ethics:
Technologists increasingly see that their software has downsides. While it powers ever more of the global economy, it’s also enabling authoritarian states to conduct universal surveillance, racial profiling, deportations and political propaganda…. Desire is growing for an accepted code of ethics to govern the technology profession the way the Hippocratic Oath guides doctors and medical professionals.
The Quartz post acknowledged that professional organizations such as the ACM (Association for Computing Machinery) and IEEE (Institute of Electrical and Electronics Engineers) have long had codes of ethics. However, many technologists are not members of either the ACM or IEEE. And so, as the article details, even as those organizations redraft their codes, the president of the Silicon Valley incubator Y Combinator—Sam Altman—is among those advocating for a new code of ethics. “Tech companies,” he wrote back in March, “are very receptive to their employees' influence. We believe that employees should come together and clearly define the values and policies they'd like to see their companies uphold.”
Finally, last month The Atlantic published a piece that I wrote, in which I argue that we need not just a code of ethics but also more consistent and pervasive ethics training for technologists. Codes of ethics are useful but often vague; they have to be interpreted and applied to the particular sets of facts and decisions that technologists face. Virtue ethicists argue that ethical decision-making is like a muscle that needs to be exercised lest it atrophy, and experiments have shown that “moral reminders” help people make more ethical decisions (at least when it comes to cheating). Some level of ongoing ethics training, therefore, in colleges and universities and coding bootcamps, but also in workplaces large and small, could serve as both exercise and reminder. It is this second sense of “training” that might be most directly relevant to ethical decision-making in tech: not training as in the teaching of specific skills, but training as the ongoing conditioning of the morality muscles; the reminder that technologists wield great power; and the application, in a variety of contexts, of the notion that (as Ceglowski put it) “power can't be divorced from accountability.”
Photo by Lerkoz, cropped, used under a Creative Commons license.