Image by Alan Warburton / © BBC / Better Images of AI / Nature / CC-BY 4.0
Irina Raicu is the director of the Internet Ethics program (@IEthics) at the Markkula Center for Applied Ethics; Jonathan Kwan is an Assistant Professor of Philosophy at New York University Abu Dhabi and was previously the Markkula Center’s Inclusive Excellence Postdoctoral Fellow in Immigration Ethics. Views are their own.
Full disclosure: One of us works—and the other previously worked—in an applied ethics center. The definition and the role of ethics are therefore deeply important to us. We are fully aware, however, that ethics centers and philosophy departments are not the only places where conversations about tech ethics happen. They also happen within a variety of academic disciplines, in vulnerable communities, in (some) technology companies, in (some) governmental agencies, in the media, among targeted activists, and around many dinner tables.
However, especially in the context of technology ethics (and AI ethics in particular), various passionate critics of the status quo have, in the last few years, made statements such as “It’s not about ethics—it’s about power”; “it’s not about ethics—it’s about justice”; and “it’s not about ethics—it’s about human rights.” We believe that those statements misconstrue ethics—in particular, applied ethics.
First, ethics does not ignore issues of power. While it does not focus primarily on descriptive questions about who has power (which are more squarely addressed by disciplines such as political science, sociology, anthropology, and religious studies), applied ethics certainly asks who should have what power, and which uses of that power are morally legitimate. When it speaks about rights and duties, it addresses the power of individuals and states vis-à-vis each other. And when it encompasses, as it does, issues of justice, it must inherently address imbalances of power, too.
Another school of thought in ethics, care (or feminist) ethics, speaks clearly and directly about relationships and power. In a 2011 interview, for example, ethicist Carol Gilligan noted that “[s]tudying development, [she] realized that concerns about oppression and concerns about abandonment are built into the human life cycle, given the differential power between children and adults and the fact that care is essential for human survival.” And virtue ethics adds to all of this a focus on the power of character and of role models. In the struggle between apartheid South Africa and Nelson Mandela, for example, or between leaders like Martin Luther King, Jr. and racist structures in the U.S., who had the power? What kinds of power are we talking about?
Justice, of course, is not a topic addressed solely by ethics, but ethics has always been intertwined with questions of what constitutes justice. To say “it’s not about ethics—it’s about justice” in the context of tech ethics is therefore either to limit the ambit of the discussion to a subset of issues, or to stretch the meaning of “justice” so that it also encompasses rights, consequentialist attempts to maximize benefits and minimize harms, and virtues such as empathy and prudence—in other words, to cast the term “justice” just as broadly as the term “ethics.”
And human rights law, while deeply important, doesn’t answer all questions—for example, what to do when certain rights come into conflict in a particular context and application, or what anchors the moral authority and necessity of human rights.
Clearly, the “It’s not about ethics, it’s about…” criticisms are not really about the definition of ethics per se, but about a particular way of “doing” ethics that is seen as insufficiently political or robust (and as conducive to some tech companies’ efforts to avoid regulation). But nothing about applied ethics inherently leaves out social-political matters or insists that ethical analysis remain merely in the realm of the interpersonal. In fact, in the effort to correct for this framing of ethics as PR or anti-regulation by talking more about power, rights, and justice, one must inevitably still engage in ethical thinking—just with a more robust and accurate conception of ethics.
When we ask technologists and business people and regulators interested in tech to respect the autonomy and dignity of various tech products’ users and of others impacted by those products; when we ask them to consider from the outset both the intended and unintended consequences that tech products will bring into the world; when we ask them to consider the distribution of tech’s benefits and burdens among different groups and individuals (in a variety of contexts, including the protection of vulnerable populations, labor displacement, and moral deskilling); when we ask them to be creative but also bring humility and empathy to their work, we are, in fact, talking about human rights and justice and power—because we are talking about ethics. And when we ask lawmakers and administrative agencies to craft new regulations or to address areas in which various regulations and policy goals come into conflict with each other, we are talking about ethics, too. (We will always need both ethics and law.)
“Technology ethics” is a broad concept. “AI ethics” addresses the particular problems that arise from a new, powerful, and insufficiently understood technology. But giving substantive content to those terms is no harder than giving such content to terms like “power,” “justice,” or “rights.” The actual applied work requires human judgment and respectful debates among people who disagree on what the right thing to do is, with the goal of achieving some workable consensus that turns into norms. It’s that process that matters, much more than what we call it.