Markkula Center for Applied Ethics

Beyond the Reflective Surface

book and coffee cup on a saucer on a table

An upcoming lecture by Shannon Vallor, author of The AI Mirror

Irina Raicu

Irina Raicu is the director of the Internet Ethics program (@IEthics) at the Markkula Center for Applied Ethics. Views are her own.

Back in 2016, I wrote a blog post about a talk given by professor Shannon Vallor in anticipation of the publication of her first book, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Professor Vallor will be returning to Santa Clara University on April 29 to deliver this year’s Regan Lecture, discussing her new book: The AI Mirror: How to Reclaim our Humanity in an Age of Machine Thinking. It’s an in-person event; registration is free, and required.

Instead of a book review, I thought I’d offer here a couple of extended quotes from the book—carrying with them, hopefully, a flavor of what you might hear if you join us.

“Levinas,” writes Vallor,

tells us that we live in a time ‘where no being looks at the face of the other.’ He wrote this in 1961, long before… AI mirrors converted the meaning of a human face to the calculation of uniquely identifying mathematical vectors in a faceprint. … But the AI mirror threatens to engrave it even deeper into our way of being.

All of us share a kind of knowledge that the AI mirror cannot: what it is like to be a human alive, bearing and helping others to bear the lifelong weight of animal flesh driven by a curious, creative, and endlessly anxious mind….

How AI systems see us, and how the AI ecosystem represents us in these mirrors, is not how we see each other in these intermittent moments of solidarity. To an AI model, I am a cluster of differently weighted variables that project a mathematical vector through a predefined possibility space, terminating in a prediction. To an AI developer, I am an item in the training data, or the test data. To an AI model tester, I am an instance in the normal distribution, or I am an edge case. To a judge looking at an AI predetention algorithm, I am a risk score. To an urban designer of new roads for autonomous vehicles, I am an erratic obstacle to be kept outside the safe and predictable machine envelope. To an Amazon factory AI, I am a very poorly optimized box delivery mechanism.

(Note: that’s not the kind of writing that AI would generate…) Vallor continues:

When we are then asked to accept care from a robot rather than a human, when we are denied a life-changing opportunity by an algorithm, when we read a college application essay written for the candidate by a large language model—we must ask what in that transaction, however efficient it might be and however well it might scale, has fallen into the gap between our lived humanity and the AI mirror.

Vallor doesn’t simply assess that gap and critique it; she calls for both technical and social innovations that might lessen it. She is not wishing or pushing for a world devoid of AI, but for one with AI tools that better reflect and promote human flourishing.

Join us, if you can, on April 29, for one of those “moments of solidarity,” in which we will look at each other while examining the AI mirror, rather than looking into that mirror to examine ourselves. Incidentally, as Vallor explains, “only a subset of the data about humans that could be used to train machine learning models is actually being used today for this purpose… It follows that what AI systems today can learn about us and reflect to us is… only a very partial and often distorted view.” Together, in conversation, we might get a clearer one.

Apr 21, 2025