
This Ethics Spotlight explores the impact AI is having on human dignity, part of the Markkula Center’s work in the international project New Humanism in the time of Neurosciences and Artificial Intelligence (NHNAI).
Perspectives
Three Readings on Dignity by Subramaniam Vincent, director of journalism and media ethics at the Markkula Center for Applied Ethics.
Explore three readings that offer useful context for understanding dignity and its significance to our society today.
Is Dignity a Bad Idea for AI Ethics? Responding to Dignity’s Critics by Brian Patrick Green, director of technology ethics at the Markkula Center for Applied Ethics
While critics argue that dignity should not be included in discourse around AI, its place as a foundational concept behind our rights necessitates its inclusion.
Representative Democracy Requires the Responsible Use of AI by John P. Pelissero, director of government ethics at the Markkula Center for Applied Ethics
While the responsible use of AI has the potential to enhance democracy, we must take care that it does not replace the roles of elected representatives.
Dignity and Virtual Worlds by Don Heider, executive director of the Markkula Center for Applied Ethics at Santa Clara University
Upholding dignity in the virtual world is key to keeping virtual environments safe and welcoming for all.
Digital Dignity and the Expansion of Selves by Erick Ramirez, assistant professor of philosophy at Santa Clara University and faculty scholar with the Markkula Center for Applied Ethics.
As new technologies expand the ways we see and interact with each other, we must consider how we extend dignity to those around us in these new spaces.
AI and the Pink Elephant in the Room by Maya Ackerman, associate professor of computer science and engineering and faculty scholar with the Markkula Center for Applied Ethics.
The biases in AI reveal the implicit biases we hold within society, offering an opportunity for self-reflection and change.
Who Cares About the Ethics of AI? Women Do by Ann Skeet, senior director of leadership ethics at the Markkula Center for Applied Ethics at Santa Clara University.
Studies show women are using AI less than men, which has possible negative implications for the future workforce.
AI and Human Dignity by Thomas G. Plante, professor of psychology at Santa Clara University and faculty scholar at the Markkula Center for Applied Ethics.
As people become more engaged with AI, it is important that we preserve the sacredness of human dignity while maximizing AI's potential and minimizing its downsides.
Digital Dignity in the Age of AI-Generated Emails by Tracy Barba, director of venture and equity ethics with the Markkula Center for Applied Ethics at Santa Clara University.
If AI is to shape the future of email, it must be developed with a focus on maintaining digital dignity and human connection.
Preserving the Power of Human Connection by Ann Skeet, senior director of leadership ethics at the Markkula Center for Applied Ethics.
Preserving human connection is crucial, as humans are uniquely able to honor each other's dignity in a way that AI cannot.
How AI Threatens Human Freedom by Brian Patrick Green, director of technology ethics at the Markkula Center for Applied Ethics.
The choices we make about what we allow AI to do for us are crucial to preserving human freedom.
We Must Reassert the Moral Meaning of Dignity by Subramaniam Vincent, director of journalism and media ethics at the Markkula Center for Applied Ethics at Santa Clara University.
We must reassert the moral-political meaning of dignity so that it can serve as a lens through which to examine society.
You Can’t Demonize Dignity: Religions and Social Media by David DeCosse, director of religious and Catholic Ethics with the Markkula Center for Applied Ethics at Santa Clara University.
Religions must lead a movement pushing back against the demonization of dignity.
Recordings of Six Presentations from “AI and the Environment: Sustaining the Common Good” by Irina Raicu, director of the Internet Ethics program at the Markkula Center for Applied Ethics.
Explore six panel recordings of the Markkula Center for Applied Ethics and Next10 conference on AI and sustainability.
Falling Flat by Irina Raicu, director of the Internet Ethics program at the Markkula Center for Applied Ethics.
AI is a tool, not a replacement for human creativity, something we must remember as more of our content becomes training data.
Wild Goose Chase by Irina Raicu, director of the Internet Ethics program at the Markkula Center for Applied Ethics.
A poem about human connection in a time of social media.
On Cura Personalis and Generative AI in Education by Irina Raicu, director of the Internet Ethics program at the Markkula Center for Applied Ethics.
We must carefully assess different uses of AI based on their impact on cura personalis, care for the whole person, and reject the uses that do not foster dignity.
To Bing or Not to Bing by Irina Raicu, director of the Internet Ethics program at the Markkula Center for Applied Ethics.
A poem about the research experience in the age of chatbots.
Scenarios for Optimistic Sci-Fi Stories by Irina Raicu, director of the Internet Ethics program at the Markkula Center for Applied Ethics.
Future scenarios as artificial intelligence evolves and becomes more entrenched in our society over time.
Recordings from Digital Dignity Day
Recordings from our May 2, 2025 daylong conference exploring the impact AI is having on human dignity, part of the Markkula Center’s work in the international project New Humanism in the time of Neurosciences and Artificial Intelligence (NHNAI).
NHNAI Overview with Brian Green (Markkula Center for Applied Ethics) and Mathieu Guillermin (Lyon Catholic University).
AI is changing the world, but we tend not to see the truly global picture. The NHNAI project has gathered that global perspective, and here the project leader will share what has been discovered.
De-Coding our Humanity: Dignity and Fullness in the Digital Age with keynote speaker Professor Shannon Vallor (University of Edinburgh).
Professor Shannon Vallor unveils new work, inspired by Charles Taylor’s A Secular Age, on the evolution of the modern conception of the "coded human." Vallor outlines this concept, highlights its reinforcement by AI and other emerging technologies, explores how it increasingly obscures our access to the experiences of human dignity and fullness, and suggests how we might move beyond its limits.
Human Dignity Framing with Jane Pak (Refugee & Immigrant Transitions), Greg Eskridge (Uncuffed Leadership Fellow), Irina Raicu (Markkula Center for Applied Ethics) and moderated by Subbu Vincent, director of journalism and media ethics at the Markkula Center for Applied Ethics.
How do grassroots leaders and ethicists frame dignity in their efforts to remove the indignities that people and communities at the margins experience both offline and online? What accessible definition of “dignity,” and its implications, can ground the other values and processes that help societies become more just?
Organizational Responsibility with Daniel Lim (Salesforce), Benjamin Larsen (World Economic Forum), Brian Green (Markkula Center for Applied Ethics), Tracy Pizzo Frey (Restorative AI), and moderated by Ann Skeet (Markkula Center for Applied Ethics).
AI is developed and deployed in organizational contexts. What are the responsibilities of organizations to do AI right? What can organizations do to promote AI in ways that protect human dignity?
Marginalized Communities with Elizabeth Tellman ’09 (University of Arizona) and David DeCosse (Markkula Center for Applied Ethics).
Explore the impact and opportunities of digital technologies on communities living on the edges of society.
The Digital Future: AI and Humanity with Mathieu Guillermin (Lyon Catholic University), Susan Kennedy (Santa Clara University), Dr. Shannon Vallor (University of Edinburgh) and moderated by Brian Green and Thor Wasbotten (Markkula Center for Applied Ethics).
Where have we been and where are we going? This session reflects on event discussions and asks: now that we are aware of the risks and opportunities, what can we do to promote human dignity in a world of AI?