Markkula Center for Applied Ethics

Artificial Intelligence and Ethics: Sixteen Challenges and Opportunities

a white Google AI car parked against a backdrop of blue sky with white clouds

Brian Patrick Green

Tony Avelar/Associated Press

Brian Patrick Green is the director of Technology Ethics at the Markkula Center for Applied Ethics. This article is an update of an earlier article [1]. Views are his own.

Artificial intelligence and machine learning technologies are rapidly transforming society and will continue to do so in the coming decades. This social transformation will have deep ethical impact, with these powerful new technologies both improving and disrupting human lives. AI, as the externalization of human intelligence, offers us in amplified form everything that humanity already is, both good and evil. Much is at stake. At this crossroads in history we should think very carefully about how to make this transition, or we risk empowering the grimmer side of our nature, rather than the brighter.

Why is AI ethics becoming a problem now? Machine learning (ML) through neural networks is advancing rapidly for three reasons: 1) a huge increase in the size of data sets; 2) a huge increase in computing power; 3) a huge improvement in ML algorithms and more human talent to write them. All three of these trends concentrate power, and “With great power comes great responsibility” [2].

As an institution, the Markkula Center for Applied Ethics has been thinking deeply about the ethics of AI for several years. This article began as presentations delivered at academic conferences, later grew into an academic paper (links below), and has most recently become a presentation, “Artificial Intelligence and Ethics: Sixteen Issues,” that I have given in the U.S. and internationally [3]. In that spirit, I offer this current list:

1. Technical Safety

The first question for any technology is whether it works as intended. Will AI systems work as they are promised or will they fail? If and when they fail, what will be the results of those failures? And if we are dependent upon them, will we be able to survive without them?

For example, several people have died in accidents involving semi-autonomous cars, when the vehicles encountered situations in which they failed to make safe decisions. While writing very detailed contracts that limit liability might legally reduce a manufacturer’s responsibility, from a moral perspective, not only does responsibility still rest with the company, but the contract itself can be seen as an unethical scheme to avoid legitimate responsibility.

The question of technical safety and failure is separate from the question of how a properly functioning technology might be used for good or for evil (questions 3 and 4, below). This question is merely one of function, yet it is the foundation upon which all the rest of the analysis must build.

2. Transparency and Privacy

Once we have determined that the technology functions adequately, can we actually understand how it works and properly gather data on its functioning? Ethical analysis always depends on getting the facts first—only then can evaluation begin.

It turns out that with some machine learning techniques, such as deep learning in neural networks, it can be difficult or impossible to really understand why the machine makes the choices that it does. In other cases, the machine may be able to explain something, but the explanation is too complex for humans to understand.

For example, in 2014 a computer proved a mathematical theorem using a proof that was, at the time at least, longer than the whole of Wikipedia [4]. Explanations of this sort might be true explanations, but humans will never know for sure.
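To make the opacity problem concrete, here is a minimal illustrative sketch in Python. The random weights stand in for a trained model; the point is that even in a tiny network, the “reason” for an output is smeared across many numbers with no individual meaning.

```python
# A minimal sketch of why neural network decisions are hard to explain.
# The weights here are random stand-ins for a trained model; real deep
# networks have millions or billions of such parameters.

import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network: 4 inputs, 8 hidden units, 1 output.
W1 = rng.normal(size=(4, 8))  # input -> hidden weights
W2 = rng.normal(size=8)       # hidden -> output weights

def predict(x: np.ndarray) -> float:
    hidden = np.tanh(x @ W1)   # nonlinear hidden layer
    return float(hidden @ W2)  # scalar "decision" score

x = np.array([0.2, -1.3, 0.7, 0.05])
print("decision score:", predict(x))
print("the 'explanation' is just these numbers:")
print(W1)  # nothing in these values says *why* the score came out as it did
```

Inspecting the parameters is always possible; understanding them as reasons, in the way we understand a human explanation, generally is not.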

As an additional point, in general, the more powerful someone or something is, the more transparent it ought to be, while the weaker someone is, the more right to privacy he or she should have. Therefore the idea that powerful AIs might be intrinsically opaque is disconcerting.

3. Beneficial Use & Capacity for Good

The main purpose of AI, like that of every other technology, is to help people lead longer, more flourishing, and more fulfilling lives. This is good, and therefore insofar as AI helps people in these ways, we can be glad and appreciate the benefits it gives to us.

Additional intelligence will likely provide improvements in nearly every field of human endeavor, including, for example, archaeology, biomedical research, communication, data analytics, education, energy efficiency, environmental protection, farming, finance, legal services, medical diagnostics, resource management, space exploration, transportation, waste management, and so on.

As just one concrete example of a benefit from AI, some farm equipment now has computer systems capable of visually identifying weeds and spraying them with tiny targeted doses of herbicide. This not only protects the environment by reducing the use of chemicals on crops, but it also protects human health by reducing exposure to these chemicals.
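As a rough illustration of the decision logic such a system might follow, here is a minimal sketch in Python. The detection structure, class labels, and confidence threshold are all invented assumptions for illustration, not any manufacturer’s actual design.

```python
# A hypothetical sketch of targeted weed-spraying logic: spray a plant only
# if a vision model labels it a weed with high confidence. All names and
# values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g., "weed" or "crop", as output by a vision model
    confidence: float  # model's confidence in the label, 0.0-1.0
    x: float           # position of the plant in the camera frame
    y: float

SPRAY_THRESHOLD = 0.9  # assumed: only spray when the model is quite sure

def spray_targets(detections: list[Detection]) -> list[tuple[float, float]]:
    """Return the (x, y) positions that should receive a micro-dose of herbicide."""
    return [
        (d.x, d.y)
        for d in detections
        if d.label == "weed" and d.confidence >= SPRAY_THRESHOLD
    ]

# Example frame: one confident weed, one crop plant, one uncertain detection.
frame = [
    Detection("weed", 0.97, 1.2, 0.4),
    Detection("crop", 0.99, 1.5, 0.4),
    Detection("weed", 0.55, 2.0, 0.8),  # too uncertain; err on the side of not spraying
]
print(spray_targets(frame))  # -> [(1.2, 0.4)]
```

Note how even this toy version embodies an ethical choice: the threshold trades missed weeds against the risk of spraying a crop plant.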

4. Malicious Use & Capacity for Evil

A technology that functions perfectly well, such as a nuclear weapon, can cause immense evil when put to its intended use. There is no doubt that artificial intelligence, like human intelligence, will be used maliciously.

For example, AI-powered surveillance is already widespread, in appropriate contexts (e.g., airport security cameras), in perhaps inappropriate ones (e.g., products with always-on microphones in our homes), and in clearly inappropriate ones (e.g., products that help authoritarian regimes identify and oppress their citizens). Other nefarious examples include AI-assisted computer hacking and lethal autonomous weapons systems (LAWS), a.k.a. “killer robots.” Additional fears, of varying degrees of plausibility, include scenarios like those in the movies “2001: A Space Odyssey,” “Wargames,” and “Terminator.”

While movies and weapons technologies might seem to be extreme examples of how AI might empower evil, we should remember that competition and war have always been primary drivers of technological advance, and that militaries and corporations are working on these technologies right now. History also shows that great evils are not always fully intended (e.g., stumbling into World War I, and various nuclear close calls during the Cold War), so merely possessing destructive power, even without intending to use it, still risks catastrophe. Because of this, forbidding, banning, and relinquishing certain types of technology would be the most prudent solution.

5. Bias in Data, Training Sets, etc.

One of the interesting things about neural networks, the current workhorses of artificial intelligence, is that they effectively merge a computer program with the data given to them. This has many benefits, but it also risks biasing the entire system in unexpected and potentially detrimental ways.

Algorithmic bias has already been discovered, for example, in areas ranging from criminal sentencing to photograph captioning. These biases are more than just embarrassing to the corporations that produce these defective products; they have concrete negative and harmful effects on the people who are the victims of these biases, and they reduce trust in the corporations, governments, and other institutions that might be using these biased products. Algorithmic bias is one of the major concerns in AI right now, and it will remain so in the future unless we endeavor to make our technological products better than we are. As one person said at the first meeting of the Partnership on AI, “We will reproduce all of our human faults in artificial form unless we strive right now to make sure that we don’t” [5].
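One simple way such bias is detected can be made concrete. Below is a minimal sketch in Python, with invented data, of one common check: comparing a model’s rate of favorable decisions across two groups (sometimes called the demographic parity difference). It is only one of many fairness metrics, and a large gap signals something to investigate, not a verdict by itself.

```python
# A minimal sketch of one common bias check: comparing a model's
# positive-prediction rate across two groups. The predictions below
# are invented purely for illustration.

def positive_rate(predictions: list[int]) -> float:
    """Fraction of cases receiving the favorable decision (1)."""
    return sum(predictions) / len(predictions)

# Hypothetical model outputs (1 = favorable decision) for two groups.
group_a_preds = [1, 1, 0, 1, 1, 0, 1, 1]  # 75% favorable
group_b_preds = [0, 1, 0, 0, 1, 0, 0, 0]  # 25% favorable

gap = positive_rate(group_a_preds) - positive_rate(group_b_preds)
print(f"demographic parity gap: {gap:.2f}")  # 0.50 -> a large disparity worth investigating
```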

6. Unemployment / Lack of Purpose & Meaning

Many people have already perceived that AI will be a threat to certain categories of jobs. Indeed, automation of industry has been a major contributing factor in job losses since the beginning of the industrial revolution. AI will simply extend this trend to more fields, including fields that have traditionally been thought of as safer from automation, for example, law, medicine, and education. It is not clear what new careers unemployed people will ultimately be able to transition into, although the more that a job has to do with caring for others, the more likely it is that people will want to be dealing with other humans and not AIs.

Attached to the concern for employment is the concern for how humanity spends its time and what makes a life well-spent. What will millions of unemployed people do? What good purposes can they have? What can they contribute to the well-being of society? How will society prevent them from becoming disillusioned, bitter, and swept up in evil movements such as white supremacy and terrorism?

7. Growing Socio-Economic Inequality

Related to the unemployment problem is the question of how people will survive if unemployment rises to very high levels. Where will they get money to maintain themselves and their families? While prices may decrease due to lowered cost of production, those who control AI will also likely rake in much of the money that would have otherwise gone into the wages of the now-unemployed, and therefore economic inequality will increase. This will also affect international economic disparity, and therefore is likely a major threat to less-developed nations.

Some have suggested a universal basic income (UBI) to address the problem, but this would require a major restructuring of national economies. Various other solutions may be possible, but they all involve potentially major changes to human society and government. Ultimately this is a political problem, not a technical one, so the solutions, like those to many of the problems described here, need to be pursued at the political level.

8. Environmental Effects

Machine learning models require enormous amounts of energy to train, so much that the training costs can run into the tens of millions of dollars or more. Needless to say, if this energy comes from fossil fuels, it has a large negative impact on climate change, not to mention being harmful at other points in the hydrocarbon supply chain.
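To get a feel for the scale, here is a rough back-of-envelope sketch in Python. Every number below is an illustrative assumption, not a measurement of any real model, and electricity is only one component of total training cost (hardware, staff, and data center overhead add far more).

```python
# A back-of-envelope estimate of training energy, cost, and emissions.
# All figures are assumptions chosen for illustration only.

gpus = 1000            # assumed number of accelerators used for training
watts_per_gpu = 400    # assumed power draw per accelerator, in watts
training_days = 30     # assumed wall-clock training time
price_per_kwh = 0.10   # assumed electricity price, USD per kWh
co2_kg_per_kwh = 0.4   # assumed grid carbon intensity, kg CO2 per kWh

kwh = gpus * watts_per_gpu / 1000 * training_days * 24
print(f"energy:    {kwh:,.0f} kWh")                        # ~288,000 kWh
print(f"cost:      ${kwh * price_per_kwh:,.0f}")           # electricity alone
print(f"emissions: {kwh * co2_kg_per_kwh / 1000:,.0f} tonnes CO2")
```

Under these assumptions, a single training run consumes roughly as much electricity as dozens of homes use in a year; the carbon figure depends entirely on the grid supplying it.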

Machine learning can also make electrical distribution and use much more efficient, and it can be applied to problems in biodiversity, environmental research, resource management, etc. AI is, in some very basic ways, a technology focused on efficiency, and energy efficiency is one way its capabilities can be directed.

On balance, it looks like AI could be a net positive for the environment [6]—but only if it is actually directed towards that positive end, and not just towards consuming energy for other uses.

9. Automating Ethics

One strength of AI is that it can automate decision-making, thus lowering the burden on humans and speeding up (potentially greatly speeding up) some kinds of decision-making processes. However, this automation of decision-making presents huge problems for society, because if these automated decisions are good, society will benefit, but if they are bad, society will be harmed.

As AI agents are given more power to make decisions, they will need to have ethical standards of some sort encoded into them. There is simply no way around it. The ethical decision-making process might be as simple as following a program to fairly distribute a benefit, wherein the decision is made by humans and executed by algorithms. But it might also entail much more detailed ethical analysis, even if we humans would prefer that it did not, because AI will operate so much faster than humans can that under some circumstances humans will be left “out of the loop” of control due to human slowness. This already occurs with cyberattacks and high-frequency trading (both of which are filled with ethical questions that are typically ignored), and it will only get worse as AI expands its role in society.
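The simplest case just mentioned, a human-chosen rule executed by an algorithm, can be made concrete. Here is a minimal sketch in Python; the rule, names, and values are invented for illustration, and the ethically loaded choices (equal shares, need-based tie-breaking) are made by humans before any code runs.

```python
# A minimal sketch of a human-decided, algorithm-executed distribution rule:
# split an indivisible benefit into equal shares, with leftover units going
# to the neediest recipients first. All names and values are invented.

def distribute(benefit: int, recipients: list[str],
               need_order: list[str]) -> dict[str, int]:
    """Allocate `benefit` units in equal shares; a human-ranked need
    ordering decides who receives the remainder."""
    share, remainder = divmod(benefit, len(recipients))
    allocation = {r: share for r in recipients}
    for r in need_order[:remainder]:  # human-chosen ethical tie-breaker
        allocation[r] += 1
    return allocation

print(distribute(10, ["ana", "ben", "chloe"],
                 need_order=["chloe", "ana", "ben"]))
# -> {'ana': 3, 'ben': 3, 'chloe': 4}
```

Even here, the algorithm only executes ethics decided elsewhere; the harder cases arise when the machine must apply such standards to situations no human anticipated.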

Since AI can be so powerful, the ethical standards we give to it had better be good.

10. Moral Deskilling & Debility

If we turn over our decision-making capacities to machines, we will become less experienced at making decisions. For example, this is a well-known phenomenon among airline pilots: autopilots can handle nearly every aspect of flying an airplane, yet pilots intentionally choose to fly the aircraft manually at crucial times (e.g., take-off and landing) in order to maintain their piloting skills.

Because one of the uses of AI will be to either assist or replace humans at making certain types of decisions (e.g., spelling, driving, stock-trading), we should be aware that humans may become worse at these skills. In its most extreme form, if AI starts to make ethical and political decisions for us, we will become worse at ethics and politics. We may reduce or stunt our moral development precisely at the time when our power has become greatest and our decisions the most important.

This means that the study of ethics and ethics training are now more important than ever. We should determine ways in which AI can actually enhance our ethical learning and training. We should never allow ourselves to become deskilled and debilitated at ethics; otherwise, when our technology finally does present us with hard choices to make and problems we must solve (choices and problems that, perhaps, our ancestors would have been capable of solving), future humans might not be able to do it.

For more on deskilling, see this article [7] and Shannon Vallor’s original article on the topic [8].

11. AI Consciousness, Personhood, and “Robot Rights”

Some thinkers have wondered whether AIs might eventually become self-conscious, attain their own volition, or otherwise deserve recognition as persons like ourselves. Legally speaking, personhood has been given to corporations and (in some countries) rivers, so consciousness is certainly not required before legal questions can arise.

Morally speaking, we can anticipate that technologists will attempt to make the most human-like AIs and robots possible, and perhaps someday they will be such good imitations that we will wonder if they might be conscious and deserve rights, and we might not be able to determine this conclusively. If future humans conclude that AIs and robots might be worthy of moral status, then we ought to err on the side of caution and grant it.

In the midst of this uncertainty about the status of our creations, what we will know is that we humans have moral characters and that, to follow an inexact quote of Aristotle, “we become what we repeatedly do” [9]. So we ought not to treat AIs and robots badly, or we might be habituating ourselves towards having flawed characters, regardless of the moral status of the artificial beings we are interacting with. In other words, no matter the status of AIs and robots, for the sake of our own moral characters we ought to treat them well, or at least not abuse them.

12. AGI and Superintelligence

If or when AI reaches human levels of intelligence, doing everything that humans can do as well as the average human can, then it will be an Artificial General Intelligence (an AGI), and it will be the only other intelligence at the human level to exist on Earth.

If or when AGI exceeds human intelligence, it will become a superintelligence, an entity potentially vastly more clever and capable than we are: something humans have only ever related to in religions, myths, and stories.

Importantly here, AI technology is improving exceedingly fast. Global corporations and governments are in a race to claim the powers of AI as their own. Equally importantly, there is no reason why the improvement of AI would stop at AGI. AI is scalable and fast. Unlike a human brain, AI can do more and more, faster and faster, if we simply give it more hardware.

The advent of AGI or superintelligence will mark the dethroning of humanity as the most intelligent thing on Earth. We have never faced (in the material world) anything smarter than us before. In the history of life on Earth, every time Homo sapiens encountered other intelligent human species, those species either genetically merged with us (as the Neanderthals did) or were driven extinct. As we encounter AGI and superintelligence, we ought to keep this in mind; though, because AI is a tool, there may yet be ways to maintain an ethical balance between human and machine.

13. Dependency on AI

Humans depend on technology. We always have, ever since we became “human”; indeed, our technological dependency almost defines us as a species. What used to be just rocks, sticks, and fur clothes has now become much more complex and fragile, however. Losing electricity or cell connectivity can be a serious problem, psychologically or even medically (if there is an emergency). And there is no dependence like intelligence dependence.

Intelligence dependence is a form of dependence like that of a child on an adult. For much of their lives, children rely on adults to think for them, and in old age, as some people experience cognitive decline, the elderly rely on younger adults as well. Now imagine that the middle-aged adults who look after both children and the elderly were themselves dependent upon AI to guide them. There would be no human “adults” left, only “AI adults.” Humankind would have become a race of children to our AI caregivers.

This, of course, raises the question of what an infantilized human race would do if our AI parents ever malfunctioned. Without that AI, if dependent on it, we could become like lost children not knowing how to take care of ourselves or our technological society. This “lostness” already happens when smartphone navigation apps malfunction (or the battery just runs out), for example.

We are already well down the path to technological dependency. How can we prepare now so that we can avoid the dangers of specifically intelligence dependency on AI?

14. AI-powered Addiction

Smartphone app makers have turned addiction into a science, and AI-powered video games and apps can be addictive like drugs. AI can exploit numerous human desires and weaknesses including purpose-seeking, gambling, greed, libido, violence, and so on.

Addiction not only manipulates and controls us; it also prevents us from doing other more important things—educational, economic, and social. It enslaves us and wastes our time when we could be doing something worthwhile. With AI constantly learning more about us and working harder to keep us clicking and scrolling, what hope is there for us to escape its clutches? Or, rather, the clutches of the app makers who create these AIs to trap us—because it is not the AIs that choose to treat people this way, it is other people.

When I talk about this topic with any group of students, I discover that all of them are “addicted” to one app or another. It may not be a clinical addiction, but that is the way that the students define it, and they know they are being exploited and harmed. This is something that app makers need to stop doing: AI should not be designed to intentionally exploit vulnerabilities in human psychology.

15. Isolation and Loneliness

Society is in a crisis of loneliness. For example, recently a study found that “200,000 older people in the UK have not had a conversation with a friend or relative in more than a month” [10]. This is a sad state of affairs because loneliness can literally kill [11]. It is a public health nightmare, not to mention destructive of the very fabric of society: our human relationships. Technology has been implicated in so many negative social and psychological trends, including loneliness, isolation, depression, stress, and anxiety, that it is easy to forget that things could be different, and in fact were quite different only a few decades ago.

One might think that “social” media, smartphones, and AI could help, but in fact they are major causes of loneliness since people are facing screens instead of each other. What does help are strong in-person relationships, precisely the relationships that are being pushed out by addictive (often AI-powered) technology.

Loneliness can be helped by dropping devices and building quality in-person relationships. In other words: caring.

This may not be easy work, and at the societal level it may be very difficult to resist the trends we have followed so far. But resist we should, because a better, more humane world is possible. Technology does not have to make the world a less personal and caring place; it could do the opposite, if we wanted it to.

16. Effects on the Human Spirit

All of the above areas of interest will have effects on how humans perceive themselves, relate to each other, and live their lives. But there is a more existential question too. If the purpose and identity of humanity has something to do with our intelligence (as several prominent Greek philosophers believed, for example), then by externalizing our intelligence and improving it beyond human intelligence, are we making ourselves second-class beings to our own creations?

This deeper question about artificial intelligence cuts to the core of our humanity, into areas traditionally reserved for philosophy, spirituality, and religion. What will happen to the human spirit if or when we are bested by our own creations in everything that we do? Will human life lose meaning? Will we come to a new discovery of our identity beyond our intelligence?

Perhaps intelligence is not really as important to our identity as we might think it is, and perhaps turning over intelligence to machines will help us to realize that. If we instead find our humanity not in our brains, but in our hearts, perhaps we will come to recognize that caring, compassion, kindness, and love are ultimately what make us human and what make life worth living. Perhaps by taking away some of the tedium of life, AI can help us to fulfill this vision of a more humane world.

Conclusion

There are more issues in the ethics of AI; here I have just attempted to point out some major ones. Much more time could be spent on topics like AI-powered surveillance, the role of AI in promoting misinformation and disinformation, the role of AI in politics and international relations, the governance of AI, and so on.

New technologies are always created for the sake of something good—and AI offers us amazing new abilities to help people and make the world a better place. But in order to make the world a better place we need to choose to do that, in accord with ethics.

Through the concerted effort of many individuals and organizations, we can hope that AI technology will help us to make a better world.

This article builds upon the following previous works: “AI: Ethical Challenges and a Fast Approaching Future” (Oct. 2017) [12], “Some Ethical and Theological Reflections on Artificial Intelligence” (Nov. 2017) [13], “Artificial Intelligence and Ethics: Ten Areas of Interest” (Nov. 2017) [1], “AI and Ethics” (Mar. 2018) [14], “Ethical Reflections on Artificial Intelligence” (Aug. 2018) [15], and several presentations of “Artificial Intelligence and Ethics: Sixteen Issues” (2019-20) [3].

References

[1] Brian Patrick Green, “Artificial Intelligence and Ethics: Ten areas of interest,” Markkula Center for Applied Ethics website, Nov 21, 2017.

[2] Originally paraphrased in Stan Lee and Steve Ditko, “Spider-Man,” Amazing Fantasy vol. 1, #15 (August 1962), exact phrase from Uncle Ben in J. Michael Straczynski, Amazing Spider-Man vol. 2, #38 (February 2002). For more information: https://en.wikipedia.org/wiki/With_great_power_comes_great_responsibility

[3] Brian Patrick Green, “Artificial Intelligence and Ethics: Sixteen Issues,” various locations and dates: Los Angeles, Mexico City, San Francisco, Santa Clara University (2019-2020).

[4] Bob Yirka, “Computer generated math proof is too large for humans to check,” Phys.org, February 19, 2014, available at: https://phys.org/news/2014-02-math-proof-large-humans.html

[5] The Partnership on AI to Benefit People and Society, Inaugural Meeting, Berlin, Germany, October 23-24, 2017.

[6] Leila Scola, “AI and the Ethics of Energy Efficiency,” Markkula Center for Applied Ethics website, May 26, 2020, available at: https://www.scu.edu/environmental-ethics/resources/ai-and-the-ethics-of-energy-efficiency/

[7] Brian Patrick Green, “Artificial Intelligence, Decision-Making, and Moral Deskilling,” Markkula Center for Applied Ethics website, Mar 15, 2019, available at: https://www.scu.edu/ethics/focus-areas/technology-ethics/resources/artificial-intelligence-decision-making-and-moral-deskilling/

[8] Shannon Vallor, “Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character,” Philosophy & Technology 28 (2015): 107–124, available at: https://link.springer.com/article/10.1007/s13347-014-0156-9

[9] Brad Sylvester, “Fact Check: Did Aristotle Say, ‘We Are What We Repeatedly Do’?” Check Your Fact website, June 26, 2019, available at: https://checkyourfact.com/2019/06/26/fact-check-aristotle-excellence-habit-repeatedly-do/

[10] Lee Mannion, “Britain appoints minister for loneliness amid growing isolation,” Reuters, January 17, 2018, available at: https://www.reuters.com/article/us-britain-politics-health/britain-appoints-minister-for-loneliness-amid-growing-isolation-idUSKBN1F61I6

[11] Julianne Holt-Lunstad, Timothy B. Smith, Mark Baker, Tyler Harris, and David Stephenson, “Loneliness and Social Isolation as Risk Factors for Mortality: A Meta-Analytic Review,” Perspectives on Psychological Science 10(2) (2015): 227–237, available at: https://journals.sagepub.com/doi/full/10.1177/1745691614568352

[12] Markkula Center for Applied Ethics Staff, “AI: Ethical Challenges and a Fast Approaching Future: A panel discussion on artificial intelligence,” with Maya Ackerman, Sanjiv Das, Brian Green, and Irina Raicu, Santa Clara University, California, October 24, 2017, posted to the All About Ethics Blog, Oct 31, 2017, video available at: https://www.scu.edu/ethics/all-about-ethics/ai-ethical-challenges-and-a-fast-approaching-future/

[13] Brian Patrick Green, “Some Ethical and Theological Reflections on Artificial Intelligence,” Pacific Coast Theological Society (PCTS) meeting, Graduate Theological Union, Berkeley, 3-4 November, 2017, available at: http://www.pcts.org/meetings/2017/PCTS2017Nov-Green-ReflectionsAI.pdf

[14] Brian Patrick Green, “AI and Ethics,” guest lecture in PACS003: What is an Ethical Life?, University of the Pacific, Stockton, March 21, 2018.

[15] Brian Patrick Green, “Ethical Reflections on Artificial Intelligence,” Scientia et Fides 6(2), 24 August 2018, available at: http://apcz.umk.pl/czasopisma/index.php/SetF/article/view/SetF.2018.015/15729

Thank you to many people for all the helpful feedback which has helped me develop this list, including Maya Ackerman, Kirk Bresniker, Sanjiv Das, Kirk Hanson, Brian Klunk, Thane Kreiner, Angelus McNally, Irina Raicu, Leila Scola, Lili Tavlan, Shannon Vallor, the employees of several tech companies, the attendees of the PCTS Fall 2017 meeting, the attendees of the needed.education meetings, several anonymous reviewers, the professors and students of PACS003 at the University of the Pacific, the students of my ENGR 344: AI and Ethics course, as well as many more.

Aug 18, 2020