
What Does it Mean to be Human in the Time of AI?

Laura Clark, ’24

Laura Clark is a Santa Clara University senior majoring in philosophy and religious studies, with an emphasis in ethics and values, and a 2023-24 Hackworth Fellow with the Markkula Center for Applied Ethics. Views are her own.

Panelists Maya Ackerman, David DeCosse, and Eric Haynie came together to consider questions about human flourishing, how we understand our individual purpose and humanity’s purpose in relation to emerging AI technologies, and where the fear of AI arises. Ackerman brought a background in computer science and engineering; DeCosse, in religious, Catholic, and campus ethics at the Markkula Center; and Haynie, in campus information technology and Buddhism. Their shared goal was to further reflection on these topics among students, staff, faculty, and the community.

The first question posed to the panelists was “What does human flourishing look like in the time of AI?” Maya Ackerman began by tracing technological developments back to the printing press, which sparked an explosion of information that could be disseminated throughout society. The internet later marked another turning point, making information still more accessible. Ackerman explained that we now face another transformative moment in which all the information on the internet is compiled in AI, creating something of “a massive brain” that is “intelligent in a way and is out of reach for an individual person because we could never hope to consume so much information.” At the same time, Ackerman acknowledged that “our entire economic system was not set up for this, creating a lot of friction about the practicality of it.” While AI offers us wonderful capacities, we need to figure out how to integrate them into our world.

David DeCosse weighed in on the “hype” factor of AI, which he described as the excitement and anticipation around AI achieving consciousness, stoked to increase funding for AI companies. For DeCosse, it remains important to balance the creative and good possibilities of AI technologies against the greedy nature of the companies that control them. DeCosse described human flourishing as deeply connected to struggle, failure, and justice: “I worry about human flourishing being abstracted away from the concrete challenges of human life, especially those who suffer deeply in many ways. Suffering and struggle is invaluable to the process of flourishing.” Recognition of AI’s promise must therefore be balanced against its potential to deepen suffering.

Eric Haynie added to this worry, commenting on how we seem to vacillate between immense possibility and the limitations of our capitalist society. This prompted Ackerman to emphasize the need to separate what AI technologies can do from what human beings, particularly those with power, are choosing to do with them. She explained that before ChatGPT was introduced, AI was largely an academic endeavor aimed at enhancing our capacities; after ChatGPT’s launch, generative AI became synonymous with a chatbot. There is thus a gap between what AI is, what it is capable of, and how it is being marketed. Ackerman cited gunpowder as an example of a scientific innovation corrupted by people in power, turning what was meant to be beautiful, the creation of fireworks, into a destructive force. For Ackerman, the technology is not bad in itself; what matters is how powerful people choose to apply it.

DeCosse added that the concern that AI will outsmart us fails to capture the fact that living a good and happy human life means much more than being intelligent. He noted that “AI, as with so many things, suggests a sort of means to an end way of being, which is worrisome.” This instrumental way of thinking corrupts moral decision-making and fails to encompass what flourishing really means: AI pushes us to systematize and categorize things like love, which must instead be reoriented toward human relationality. Haynie continued along this line of thought, saying, “We have this tool which is a technology and we have the human reaction to it.” We must consider how to enhance both our relationships with one another and our use of this tool. Ackerman posed a few questions along these lines: “How can we come together as humans to decide how we engage with AI? How can it be possible to enhance humanity and do we need any tech to help human flourishing?” She argued that we do not need technology for human flourishing; what is needed is relationships and community.

The next question the panelists considered was “Does AI help us or harm us in terms of reflecting on our individual purpose and humanity’s purpose?” Haynie began by noting that there are ways of engaging with AI that can level certain playing fields in learning. He gave the example of students in his classes who struggle with writing or composition, saying that he has seen success when their learning is supplemented with platforms such as ChatGPT. In this way AI can support learning. Ackerman raised the point that one concern with generative AI platforms is that they are biased, emphasizing that AI is trained on our data. She said, “We are all biased… the only way to overcome [these biases] is to become aware of them.” Ackerman drove home the idea that people want perfect machines that don’t make mistakes: “We want machines that explain themselves. We hold them to a standard that we don’t hold each other.” For Ackerman, this view misses the essential fact that humanity needs to look at ourselves and our situation and intentionally do better. She argued that it is a big mistake to compare AI to previous generations of tools like calculators, reinforcing that “maybe if we want machines to be better we need to be better and work on our implicit biases.”

DeCosse picked up on this point, quoting a piece recently published by venture capitalist Marc Andreessen that he said exemplified an underlying concern about the “hype” surrounding AI. The quote claims that people only do things for other people for three reasons: love, money, or force. Arguing that love does not scale, it concludes that the economy must run on money or force; and since “the force experiment has been run and found wanting,” Andreessen favors money. DeCosse responded, “As a theologian, love does scale.” For DeCosse, a problem of AI is that its market-based, technocratic logic takes over how we think about who we are. Citing Pope Francis, DeCosse said that the greatest temptation is to define another as totally other, as impenetrable to me, and he highlighted his fear that AI has picked up that temptation and run with it. DeCosse identified the purpose of humanity as being to love, including to counteract biases.

Connecting to DeCosse’s vision of humanity and love, Ackerman added, “It has been an eye opening experience to see how the venture world behaves.” You need venture capital to build a company because you need the means, she explained, and the way generative AI is playing out is shaped by people who explicitly choose money over love and view any positive sentiment toward humanity as a weakness or a hindrance. Ackerman said, “This is what we are up against: that is not the AI, it is the powerful people taking it and using it.” Haynie echoed these sentiments by mentioning tech companies’ use of the Dalai Lama, a representation of compassion, in their advertising. Rounding out the discussion, Ackerman maintained that “no amount of technological innovation is going to fix what people need to fix themselves.”

The last question the panel took up was “Where does our fear of AI come from? What is it about our humanity or human nature that we need to protect when it comes to AI?” Ackerman started by saying that fears of AI are legitimate. She gave an example: even six years ago, when music generators came out, people said, “I don’t want it. I don’t want it to take my job.” Music companies initially made public speeches about protecting artists while at the same time creating and investing in technology to replace them. “It is incredible the amount of greed in this world… It is a reality that there are people who choose to do bad things.” Haynie further commented that “part of the fear may come from this uncanniness of a technology that sounds like a human. There is a ‘Frankenstein complex’ where we can type something into a chatbot and within seconds it sounds kind of cogent. It sounds sentient,” which he argued instills fear in us. Haynie also noted that the sheer speed of AI development adds to the fear; people worry that “if we don’t get on top of it now, at what point do things start proliferating in any direction?” DeCosse concluded the conversation by acknowledging that there is much in AI not to fear. Even so, he pointed to AI systems as a form of anonymous power for which it can be hard to grasp who is responsible.

During the question-and-answer period, one person asked to what degree AI falls into the category of a tool with net negative use. Ackerman responded that, unfortunately, we are not going to get a high level of government regulation, so it is up to us to use AI responsibly. Another audience member asked why people remain so optimistic about AI, given that creating large language models is limited to large companies with extensive financial resources and the models have a large environmental impact. The panelists explained that AI is not limited to LLMs; that is simply how generative AI is currently being marketed. Further, there are many ways to create AI platforms that do not significantly harm the environment. A question about the job landscape prompted responses about how the outcome will depend on the social decisions people make on top of the technology. As a final point, DeCosse said that human face-to-face interaction should never be lost because it is so valuable.

Overall, the event tied together current concerns about the technological advance of AI with the idea of human flourishing, and how we can understand the two in light of one another. Ultimately, human relationality and our capacity for love must be prioritized and understood if we are to live well in the time of artificial intelligence.

Join our next event:

Are you smarter than ChatGPT?

Noon-1 p.m. PST, Wednesday, Jan. 31, 2024, Benson Parlors B&C

An interactive opportunity for students, faculty, and others to test their abilities to differentiate between ChatGPT and human responses and find places where ChatGPT hallucinates or makes errors. The event will also provide a place for students and faculty to discuss how ChatGPT has changed the educational landscape. The content of the prompts and the conversation will focus on ethical uses of generative AI technology.

Dec 4, 2023
