
The “E” Word

Cropped sign that reads "ethics"

Irina Raicu

Irina Raicu is the director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University. Views are her own.

In late August, a tweet announced that the ACM Conference on Fairness, Accountability, and Transparency scheduled for 2020 had received 291 paper submissions—80% more than the number reviewed for this year’s session. From its inception, this conference has broken new ground and featured outstanding research; it has played a key role in making topics such as bias and disparate impact part of most current conversations about AI.

Retweeting that announcement, Rumman Chowdhury, who is Accenture’s Lead for Responsible AI, added a comment that read, in part, “If the abstracts are any indication of what's to come in our field… 'ethics' (the term) is dead, long live <some other term>.”

Full disclosure (to belabor the obvious): I work in an applied ethics center.

That being said, I come here neither to bury “ethics” nor to praise it. I come to offer a minor point of clarification, based on certain assumptions (since I haven’t read any of the abstracts). Here they are.

I assume that some of the papers submitted to the ACM FAT conference will address the consequences of the deployment and use of technological tools in society. I assume that most, if not all, will still discuss fairness, accountability, and transparency. I assume that some will mention human rights, and the duties of those who develop technology. Some might mention the fact that values are embedded in the data sets on which the AI is trained, in the criteria and weights of the algorithms, and in the very questions/issues that AI is directed to address. Some will discuss trade-offs, and ways to maximize benefits and minimize harms. Some might even discuss the tension between the role that AI might play vis-à-vis individuals or particular groups and whatever we define as the “common good.” Or the tension inherent in the deployment of AI tools that are supposed to get better at their task (and therefore more useful for future users) by making decisions that impact (and potentially harm) people right now.

If those assumptions are correct, those papers are still talking about ethics.

There are a lot of misconceptions about what ethics is and isn’t. We’ve also seen a lot of push-back, over the last year, against “virtue signaling,” and a growing debate that presents ethics and regulation as a false dilemma, an either/or (both are, in fact, necessary parts of the solution to the kinds of problems addressed in conferences like ACM FAT). Given all of that, it’s not surprising that people are trying to find a less loaded way to frame their efforts. In any case, as long as we’re talking about consequences, rights, dignity, fairness and justice, the common good, and ways to prevent harms, about what technologists should do, and about the kinds of habits they might need to develop to reach those goals, we’re still on the same page.

So our Center will continue to offer free resources for anyone interested in ethical decision-making, ethics in tech practice, and embedding ethics in engineering, computer science, and data analytics curricula—and we look forward to reading more outstanding papers presented at ACM FAT.

Photo: “That Way” by Justin Baeder, cropped, used under a Creative Commons license.

Sep 3, 2019
