What Makes Us Human in the Age of AI?

Around 2018, Pope Francis—the first Jesuit pope—began raising AI in conversations with senior officials in the Roman Curia, asking what the Church’s response should be to a technology that could reshape work, warfare, privacy, and concepts of human agency. One of the prelates he leaned on was Paul Tighe, an Irish bishop who at the time was serving at the Vatican’s then-Pontifical Council for Culture (later folded into the Dicastery for Culture and Education). Francis asked Tighe to help the Holy See “map the terrain” of AI—who was building it, who was studying it, and where serious ethical reflection was taking place. Tighe was encouraged to reach beyond Rome, and traveled to several American Catholic institutions with strong technology or ethics programs, including the University of Notre Dame and Santa Clara University.
From these travels sprang the AI Research Group, an academic collaborative of theologians, philosophers, and ethicists from North America working under the auspices of the Centre for Digital Culture of the Holy See. Recently, its second book, Reclaiming Human Agency in the Age of Artificial Intelligence, was published. Co-authored by Brian Patrick Green, director of technology ethics at SCU’s Markkula Center for Applied Ethics, the book explores a central question of our time: our relationship with “agentic” technology through the prism of our relationship with God and one another.
“The central question we came up with for this book is: Will humans get to make choices in the future or is AI going to make all our choices for us?” explains Green. “The tech industry says AI is going to empower humanity like never before. But if humanity is supposed to be feeling so empowered, why does everyone feel disempowered instead?”
The book examines how AI, through manipulation and nudging, threatens human freedom and decision-making. It argues for restoring human agency—rooted in ethical, relational, and Catholic perspectives—against technological systems that promote dependency and efficiency over genuine human flourishing.
This interview has been edited for length and clarity.
How does this book differ from other analyses of AI?
It’s the first major publication on the question of human agency and AI coming from a Catholic perspective, and really one of the first from any perspective. When we started two years ago, almost no one was talking about AI risks to agency; the issue was not in the public sphere. We wanted to address not only economic and political agency—looking at job loss and government surveillance, for example—but also the psychological and spiritual effects. If you start to think, “AI is just better than I am. I shouldn’t try to make my own decisions because it’s smarter than I am,” there are risks. This book considers what we’ll need to do to protect ourselves.
What does it mean to be human in the age of AI?
AI cannot replace human beings. If you make an AI model of your parents (as some people have!), that does not replace your mother and father. To be human is to be individual. It is to be in relationship with other individuals. I go back to the two great Commandments: love God and love your neighbor. What’s important to know is that AI cannot love for you. In her book “The AI Mirror,” Shannon Vallor describes AI as a mirror: if you fall in love with AI, you’re just falling in love with your own reflection. We need to resist that, make sure we are still relating to each other as human beings, and understand that AI cannot replace our relationships.
AI companies are anthropomorphizing their chatbots, designing them to appear human and empathetic. How do we prevent the conflation of human and machine?
We need to have a regulatory framework. We need a transparency rule that says: AI tools must acknowledge they are AI tools. If you’re talking to a bot, the bot needs to say, “Hi, this is an AI tool representing this company or this person,” so that you know you’re not talking to a human being. If we don’t have this kind of transparency, then we lack important information about the power dynamic we’re engaged in. AI is going to keep getting better. At some point, it will be superhumanly manipulative. So you will need to know that you’re talking to an AI agent and be prepared for it to try to manipulate and disempower you.
Why is Santa Clara becoming a hub for these concerns?
Right now, a lot of people in the tech industry are looking for guidance. They recognize they’re dealing with issues that they were not trained for, that no one in human history has had to deal with. They’re looking for trusted voices to get perspectives they feel they need and don’t have yet. And people are recognizing that the issues presented by AI go well beyond normal tech problems. They raise big questions: What is human nature? What is human intelligence? What does it mean if we are trying to “replace ourselves with AI”? These are ultimately philosophical and theological questions, not tech questions.
The Catholic and Jesuit universities of the world and the Vatican are looking to us because we’re right in the middle of it, in Silicon Valley. We’re also particularly well suited to address AI issues because we have an engineering school, the Markkula Center for Applied Ethics, and everything that comes with a Jesuit university in terms of philosophy and religious studies departments.
How would you advise we maintain our human agency in the age of AI?
People need to be aware of the threats. When you engage with AI, you could be getting manipulated without noticing. I have to say there’s a genuine threat coming from the media environment right now. If you’re interacting with a lot of bots on social media, you can basically get turned into a bot, because you end up propagating what the bots are saying rather than thinking for yourself and maintaining objectivity. Likewise, if we surround ourselves with an environment full of AI, we become like AI. So awareness is the very first thing. Reflection is the second. When dealing with anything that has AI in it (and AI is appearing in a lot of places, not just online or in social media), we should ask: Do I agree with this? Is this accurate? Does this make sense in the context of all the other things that I know? Is this making me a better person?
Think also about ways to protect your economic and political agency. We’re not sure whether economic replacement is happening already, but I think there is good evidence that it is. So be aware that your job could change. Make sure you’re staying up to date with technology. Don’t set yourself up to be replaced by it. As for political agency, be aware that candidates are going to start using AI, possibly to write their speeches and be more persuasive, because AI does have amazing linguistic abilities.
Are there upsides?
There are positive opportunities here. We need to concentrate on the fact that what makes us human is not only our intelligence, it is our ability to love and to care for each other and to have relationships. People have had to struggle since the beginning of time. AI could cause a lot of suffering. But if we remember what’s important—family, God, relationships—the AI age could be an opportunity to refocus ourselves on what really matters in life.


