
Ethics Is More Important than Technology


Brian Patrick Green

Brian Patrick Green is the director of Technology Ethics at the Markkula Center for Applied Ethics at Santa Clara University. Views are his own.

The recent appearance of GPT-3 [1] and other powerful new artificial intelligence technologies has once again raised the question of the role of ethics in technology. In this article I argue that there is no question at all: rather than technology having some sort of priority, ethics is always the measure of technology. Every technology should be evaluated for whether it benefits or harms society, and technologists themselves are the first ones who ought to apply this evaluation. If technologists refuse this task, then external regulators must do the work for them.

About a year ago I was having a conversation with a few artificial intelligence (AI) and machine learning (ML) researchers. The topic of discussion was what responsibility the developers of AI and ML had towards preventing the misuse of their work.

A hypothetical situation was proposed: if an AI algorithm with the reading comprehension level of a clever high school student were available, should it be released freely? The AI researchers thought of numerous good uses… think of the effort saved in reading emails! The AI could do it all for you. I chimed in with some downsides; among others, such an algorithm could be used to monitor communications and create a totalitarian surveillance state. The researchers responded that this would be against the law and therefore was not a problem. I countered that in other nations (never mind the potential fragility of our own civil liberties) this negative use case would certainly occur. The researchers replied that there was nothing to be done about that; it was beyond their control. In other words: “that’s not our problem”; we take no responsibility. Release it anyway.

Thus free civilization dies—as the outcome of technologists refusing to take moral responsibility for their own actions.

Ethics Now More than Ever

Every day, humanity grows in power. In the past, humanity was involuntarily constrained by its weakness; now we must learn to be voluntarily constrained by our own good judgment, our ethics [2]. Technological power gives us new choices, but only ethics can tell us which among those new choices are actually good. And we must choose wisely, or we will come to live in a terrible world. Perhaps we can chalk up the 2016 election as a learning experience, paid for dearly, but if such socio-political distortions continue, the 2016 election may come to look like the start of a trend that destroyed free society.

Ethics Is More Important than Technology

What a hammer does—what its wielder chooses to do with it—is more important than the hammer itself. When a hammer amplifies the power of hitting, then the choice of what to hit is also amplified in importance. It becomes even more important that we choose to hit the right things—a nail, for example, not our own finger, or another person. Technologies are important because they give us choices, but it is humans who must, through ethics, decide what to do with those choices.

Wanting the Right Things

Technology is, at its core, “know-how,” as in “knowing how to get things done,” typically by creating technological products to do some of the work for us. Technology, then, is a means of efficiency, of getting what we desire. With AI, we can put our own technological growth on steroids, amplifying and accelerating the pursuit of our desires. With artificial general intelligence (AGI) and superintelligence, some even seek to achieve everything we want, to “solve everything,” as one prominent AI researcher has put it [3].

But “solving everything” is a morally neutral goal: one could just as easily solve for “kill people” as for “protect people.” Rather than solving everything, the unspoken moral assumption that undoubtedly lies behind the statement should be brought forth: solve everything good. But that will require discipline and direction. Once we gain the power to get whatever we desire, controlling that desire becomes more important than ever before. We have to learn a second-order desire, a desire for desire: a desire to want the right things. We must ask ourselves an ethical question: what ought we to want? If we choose wisely, our world will improve. If we choose unwisely, our world will degrade.

We should assess our own desires in order to govern those desires and, through them, our own technology. Likewise, we ought to realize that we should relinquish our desires for evil technologies such as weapons of mass destruction. It may be reasonable to keep such weapons temporarily, to maintain geopolitical stability, but we should regret this state, always hoping that a better day shall come when such weapons are no longer needed. This relinquishment of desire shows respect for our own finitude. We are limited beings who cannot always wield our powers correctly. If there are powers that exceed our ability to control them, we should not be like the Sorcerer’s Apprentice or Dr. Frankenstein. Instead we should humbly relinquish those desires and coordinate in a global manner to ensure that those powers always remain under control [4].

Abusus Non Tollit Usum

There is an old Latin phrase: abusus non tollit usum—that is, abuse does not take away use [5]. For example, a knife can be used for robbery or murder, but its legitimate use to cut vegetables remains. However, this dictum is for smaller technologies than some of those we have today [6]. For the largest, most powerful technologies of our age, those which threaten the very existence of human life and of many other life forms on Earth, no finite benefit can justify risking an infinite loss [7]. Our technologies today are extinction-level, and we should not risk human extinction for the sake of any small benefit. Sometimes abuse should take away use.
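The decision-theoretic intuition behind this claim can be sketched briefly (a minimal formalization offered here for illustration; the symbols are mine, not the author’s). Suppose deploying a technology yields a finite benefit B with probability 1 − p, and an extinction-level loss L with probability p > 0. The expected value of deployment is

E[V] = (1 − p) · B − p · L.

For any finite B and any p > 0, E[V] falls without bound as L grows; if the loss is treated as effectively infinite, no finite benefit can make the expected value positive. This is the sense in which no finite benefit can justify risking an infinite loss.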

For AI, the ultimate dual-use technology, the perils are especially pronounced. Just as human intelligence can be directed towards good and evil, so too will AI be directed towards good and evil. But we should limit its potential for evil, or we will come to live in a terrible world.

Two Questions for the Future

1) How can we use technology to make it easier to do good?

2) How can we use technology to make it harder to do evil?

Those working in AI and ML who seek to release powerful new technologies on an unprepared world have not adequately considered these questions. Ideally, technologists would govern themselves with the highest ethical standards and therefore not need any external governance; indeed, helping technologists in this way is what I try to do. Technologists themselves are the first and best line of defense against bad technologies because they deeply understand the technology and are right there when it is being developed.

But if technologists refuse to think about these questions, then the rest of society needs to think about them, and make decisions about them, on the technologists’ behalf. Nobody likes being told what to do, but those who refuse to govern themselves invite others to govern them. Ethics is more important than technology, and refusing to recognize that just pushes the problem to a higher level: from self-regulation to external regulation. By saying “that’s not our problem,” technologists abdicate their self-governance; they effectively shove a decision onto society and ask society to regulate them. If AI and ML researchers want to continue in freedom, unfettered by excessive (and likely poorly designed) external regulation, then they should learn to govern themselves in a trustworthy manner.

And if researchers refuse to do that, the rest of us will have to do it for them.

This talk, minus the first paragraph connecting it to contemporary developments in AI, was originally presented at the San Jose State University Paseo + Deep Humanities & Arts Interdisciplinary Symposium on April 12, 2019.

References

[1] James Vincent, “OpenAI’s Latest Breakthrough Is Astonishingly Powerful, but Still Fighting Its Flaws: The ultimate autocomplete,” The Verge, July 30, 2020. Available at: https://www.theverge.com/21346343/gpt-3-explainer-openai-examples-errors-agi-potential

[2] Brian Patrick Green, “Pope Francis, the Encyclical Laudato Si, Ethics, and Existential Risk,” Institute for Ethics and Emerging Technologies, August 16, 2015. Available at: https://ieet.org/index.php/IEET2/more/green20150816

[3] Tom Simonite, “How Google Plans to Solve Artificial Intelligence,” MIT Technology Review, March 31, 2016. Available at: https://www.technologyreview.com/s/601139/how-google-plans-to-solve-artificial-intelligence/

[4] Bill Joy, “Why the Future Doesn’t Need Us,” Wired, April 2000. Available at: https://www.wired.com/2000/04/joy-2/

[5] See, e.g., this Reddit answer, with numerous examples of versions of the phrase over the last 500 years: https://www.reddit.com/r/latin/comments/25db0f/question_about_a_phrase_meaning/?utm_source=share&utm_medium=web2x and the oldest instance cited therein: Andreas Fricius Modrzewski, Commentariorum de Republica emendanda Libri quinque: quorum: Primus, de Moribus. Secundus, de Legibus. Tertius, de Bello. Quartus, de Ecclesia. Quintus, de Schola. (Oporinus, 1554) p. 314, available at: https://books.google.de/books?id=ZEdcAAAAcAAJ&lpg=PA314&ots=3KOrQppGmW&dq=%22propter+abusum%22+tolli&pg=PA314&hl=de#v=onepage&q=%22propter%20abusum%22%20tolli&f=false

[6] Brian Patrick Green, “Emerging technologies, catastrophic risks, and ethics: three strategies for reducing risk.” Proceedings of the 2016 IEEE International Symposium on Ethics in Engineering, Science and Technology (ETHICS), Vancouver, BC, Canada, 13–14 May 2016.

[7] Brian Patrick Green, “Little Prevention, Less Cure: Synthetic Biology, Existential Risk, and Ethics,” for the Workshop on Research Agendas in the Societal Aspects of Synthetic Biology, Arizona State University, Tempe, Arizona, November 4–6, 2014. Available at: http://cns.asu.edu/sites/default/files/greenp_synbiopaper_2014.pdf

Aug 10, 2020