Markkula Center for Applied Ethics

Virtue Ethics on the Cusp of Virtual Reality

Shannon Vallor on technology and the virtues

Irina Raicu

Our colleague Shannon Vallor, associate professor and chair of Santa Clara University’s Philosophy department (and an MCAE faculty scholar), has a new book due to be published in August by Oxford University Press. Last Friday, as part of an “Ethics at Noon” event, she spoke about some of the observations and conclusions that she draws in that book, titled Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting.

Vallor pointed out that technology has been part of the human story from its very beginning, and that from the earliest tools developed, our technological enhancements have had social and ethical impact. As Vallor put it, “The ways we use technology have always shaped our affordances for moral life.” She argued that “it is absolutely incoherent to be anti-technology and to be a human being”—but no, she is not a tech-infused-Silicon-Valley-apologist; she is adamantly opposed to what she calls “technocratic disengagement from human values other than efficiency, productivity, and innovation for its own sake.”

In her recent work, Vallor focuses primarily on recent advances in robotics, artificial intelligence, new (social) media, and bio-enhancement. As many have noted, advances in all of those fields are happening quickly—often faster than laws and even social norms can develop around them. Rather than simply noting (or bemoaning) that fact, Vallor asks us, first, to recognize that technologies are not “value-neutral”: that every technology presupposes a vision of what the “good life” is. (Technologists are often reluctant to acknowledge this—or simply assume that their vision of the “good life” is widely shared.) She then points out that technological advances have unpredictable and open-ended consequences, and that current technologies pose “new global problems of collective human action,” many of which have implications for the future of human flourishing on this planet.

With this background in mind, Vallor draws on virtue ethics to ask, “What sort of people will deal well with the challenges posed by emerging technology? What qualities will they need in order to flourish?” In other words, what are the virtues demanded by a rapidly changing world—the virtues that would allow people to anticipate challenges, perhaps meet them before they arrive, or at least respond best to them when they do?

Vallor proposes a list of 12 “technomoral virtues.” (Take a moment to appreciate that term; it is not overly familiar to folks discussing the development of technology, in the heart of Silicon Valley. Vallor was not the one who coined it, but her work might increase its use.) They are honesty, self-control, humility, justice, courage, empathy, care, civility, flexibility, perspective, magnanimity, and wisdom. These virtues, according to Vallor, acquire a particular cast in the “technomoral” setting: “courage,” for example, in this context, is “intelligent fear and hope with respect to the dangers and opportunities presented by emerging technologies”; “magnanimity” is defined as “moral ambition and leadership” in technology policy, research, design, and use.

Each of those virtues receives an in-depth analysis in the book. The list itself, however, throws down a gauntlet. Are these the right virtues for our tech-infused times? And do those virtues overlap, intersect? “Silicon Valley” (that generalization that gets thrown around) is often accused of lacking empathy; is that the case, or is it a deficiency of humility and perspective that leads to actions perceived as lacking in empathy?

And do we all aspire to all of these virtues? Do we educate others to practice them? Vallor’s book has a lot more to say about all of this (certainly more than could be covered in a 45-minute talk and Q&A, not to mention a brief blog post)—including suggestions for ways to foster technomoral development, through habitual moral practice, focused moral attention, appropriate extension of moral concern, and more.

As Vallor puts it in the book’s introduction,

No ethical framework can cut through the general constraints of technosocial opacity. The contingencies that obscure a clear vision of even the next few decades of technological and scientific development are simply far too numerous to resolve—in fact, given accelerating physical, geopolitical, and cultural changes in our present environment, these contingencies and their obscuring effects are likely to multiply rather than diminish. What this book offers is not an ethical solution to technosocial opacity, but an ethical strategy for cultivating the type of moral character that can aid us in coping, and even flourishing, under such challenging conditions.

While drawing on Aristotelian, Confucian, and Buddhist ethics, Vallor also seeks to draw on the powers of technology itself. She calls for an “interweaving of moral and technological expertise” to create “a practical and powerful strategy for cultivating technomoral selves.” She argues that in a time of opacity, “the technomoral virtues offer the philosophical equivalent of a blind man’s cane.” It’s a modest metaphor: she could have pointed out that humans have also invented eyeglasses, cornea replacements, night vision goggles, Google Glass… But humility, remember, is close to the top of her list of technomoral virtues—and the cane warns us there is much ahead that we do not see but must still prepare for.

May 25, 2016