
A New Report on Ethical AI, An Older Post about AI Ethics Traps, and Some Hopes

Digital brain with branching circuit pathways representing machine learning and artificial intelligence.

Irina Raicu

Irina Raicu is the director of the Internet Ethics program (@IEthics) at the Markkula Center for Applied Ethics. Views are her own.

In June, the Pew Research Center released a report titled “Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade.” It details responses from “[s]ome 602 technology innovators, developers, business and policy leaders, researchers and activists” to a question that the research center posed (in collaboration with the Imagining the Internet Center at Elon University).

The authors of the report are careful to note that it was “a nonscientific canvassing, based on a nonrandom sample,” and that the results “represent only the opinions of the individuals who responded to the queries and are not projectable to any other population.” Those important qualifications got lost in some of the media coverage of the report, however.

In addition, partly because of the massive scale of the report itself, and because some of the answers quoted at length address a wide range of issues, it is easy to miss the fact that the question posed by the researchers was quite narrowly focused: “By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?”

It would be easy to be optimistic about progress being made in the development and deployment of more ethical AI and still answer “no” to that question (or at least want to start by asking, in turn, whether there are any other systems “used by organizations of all sorts” that “employ ethical principles focused primarily on the public good,” and whether “ethical principles focused on the public good” was even the appropriate framing here; in the context of medical AI, for example, do we want to focus on the “public good,” or on the interests of individual patients?).

I am flagging the question in part because I think it’s important to keep it in mind as you evaluate a report that is very much worth reading—but also in order to clarify the context of my own answer, which is included in a section titled “Worries about developments in AI”:

The conversation around AI ethics has been going on for several years now. However, what seems to be obvious among those who have been a part of it for some time has not trickled down into the curricula of many universities who are training the next generation of AI experts. Given that, it looks like it will take more than 10 years for ‘most of the AI systems being used by organizations of all sorts to employ ethical principles focused primarily on the public good.’ Also, many organizations are simply focused primarily on other goals – not on protecting or promoting the public good.

The next section of the Pew report is titled “Hopes about developments in ethical AI”—and I have lots of hopes about such developments, too! I do think progress will be made by 2030—I just don’t think that by then we will have achieved the particular goal posited by the question.

As a term, “AI ethics” is increasingly too broad; my hope is that we will have many more narrowly focused conversations that address distinct types of data, distinct AI/ML tools, deployment in distinct social contexts, and distinct benefits and harms to distinct populations. We might not want “organizations of all sorts” to employ ethical principles with the same primary focus.

Some of the comments from both the “worries” and the “hopes” sections of the report also reminded me of a post published a few years ago by Annette Zimmermann and Bendert Zevenbergen on the “Freedom to Tinker” blog, which deserves a broad audience; it is titled “AI Ethics: Seven Traps.” Note, in particular, part of their response to what they call “the relativism trap”:

In light of pervasive moral disagreement, it is easy to conclude that ethical reasoning can never stand on firm ground: it always seems to be relative to a person’s views and context. But this does not mean that ethical reasoning about AI and its social and political implications is futile…. While it may not always be possible to determine ‘the one right answer’, it is often possible to identify at least some paths of action that are clearly wrong, and some paths of action that are comparatively better (if not optimal all things considered). If that is the case, comparing the respective merits of ethical arguments can be action-guiding for developers and policy-makers, despite the presence of moral disagreement.

In any case, the Pew Research Center report offers a lot of food for thought and has itself sparked even more conversations about the ethics of AI. Such conversations, and the growing recognition that they need to include the perspectives of people who have already been directly impacted by AI tools, are one reason to be hopeful about positive developments in AI by 2030. The worries, of course, remain—and there is lots of work to be done.

Image: "Machine Learning & Artificial Intelligence", cropped, by mikemacmarketing, is licensed under CC BY 2.0.

Jul 2, 2021