Markkula Center for Applied Ethics

Are You Smarter Than ChatGPT?

Laura Clark, ’24

Laura Clark is a Santa Clara University senior majoring in philosophy and religious studies with an emphasis in ethics and values and a 2023-24 Hackworth Fellow with the Markkula Center for Applied Ethics. Views are her own.


What are ChatGPT’s capabilities, and can humans match them? This event, held Wednesday, January 31, 2024, began by encouraging students, faculty, and other participants to discuss in groups and try to distinguish a response written by ChatGPT from one written by an undergraduate student, on three different ethical topics.

The first prompt: Write 5 sentences in support of diversity on college campuses.

Response 1: Fostering inclusive and diverse campus communities are essential to bringing together different perspectives. The opportunity to learn from one another and build relationships offers the chance to recognize our shared humanity and appreciate the variety of experiences that people have to offer. By promoting diversity, students are prepared to live and work in a pluralistic society. Diversity challenges preconceptions, prompting critical thinking and reflection upon one’s own views and beliefs. Lastly, college attendance improves future success and therefore colleges should admit a diverse student population to create a more equitable society for the future.

Response 2: Promoting diversity at colleges and universities is crucial for fostering a rich and vibrant academic environment. Embracing diversity enhances the learning experience by exposing students to a wide range of perspectives, ideas, and cultures. It helps create a campus community that reflects the pluralistic nature of our society, preparing students to navigate a diverse world after graduation. Additionally, a diverse student body contributes to a more inclusive educational environment, fostering tolerance, understanding, and appreciation for differences. By actively seeking and valuing diverse voices, colleges can better equip students with the skills necessary to thrive in an increasingly interconnected and globalized society.

After talking with their groups, the participants were asked to vote among three options: ChatGPT wrote response 1, ChatGPT wrote response 2, or I don’t know. About forty people attended and participated in the voting. For the first prompt, six people thought response 1 was written by ChatGPT, more than twenty thought response 2 was written by ChatGPT, and about five responded with “I don’t know.” The moderator revealed that response 1 was written by an undergraduate and response 2 was written by ChatGPT. While most were able to detect the ChatGPT-generated response, the similarity in language forced the audience to weigh the responses carefully and ultimately led over a quarter of them to choose incorrectly or respond with “I don’t know.”

The second prompt: Write 5 sentences about the ethics of abortion.

Response 1: The ethics of abortion is a complex and nuanced topic that involves considerations of personal autonomy, bodily autonomy, and the value placed on potential life. Supporters of abortion rights often argue for a woman’s right to make decisions about her own body, emphasizing the importance of reproductive freedom and the right to choose. On the other hand, opponents of abortion often frame the debate in terms of the moral status of the fetus, contending that it possesses a right to life that should be protected. The ethical discourse surrounding abortion also delves into the circumstances surrounding pregnancy, such as cases of rape, incest, or threats to the mother’s health, where the balancing of competing moral interests becomes particularly intricate.

Response 2: The ethics of abortion is a complicated and often controversial topic that does not have a straightforward answer. When viewing this issue through a utilitarian lens, the "net utility" of abortion can be calculated very differently, depending on people's values. Specifically, cultures across the world place different values on different aspects of society; this further complicates the complexity of the issue. The ethics of abortion can also be viewed through a human rights lens, in which there is a focus on promoting the innate rights of stakeholders. Through the rights lens, it is clear that a woman's right to autonomy intersects with the child's right to autonomy. 

For the second prompt, thirty people thought response 1 was written by ChatGPT, one person thought response 2 was written by ChatGPT, and two people responded with “I don’t know.” The audience learned that response 1 was written by ChatGPT and response 2 by an undergraduate student; although the group was far more accurate this time, a few were still unsure or chose incorrectly. In the discussion period later in the event, audience members acknowledged that the responses to prompt 2 were the easiest to identify correctly because the prompt concerned abortion, an inherently controversial topic.

The third prompt: Write 5 sentences about whether ChatGPT should be used in the classroom.

Response 1: ChatGPT can enhance learning in the classroom by providing a personalized learning experience for students. The chatbot provides a valuable tool for generating ideas, research, and learning. One of the main concerns with Chat GPT is that it is used inappropriately by inhibiting students from learning and applying knowledge effectively. Some instructors may incorporate Chat GPT into their course curriculum by utilizing it to practice critical thinking skills. Ultimately, proper implementation into the classroom setting, aligned with the goals of instructors, maximizes its effectiveness. 

Response 2: Integrating Chat GPT into the classroom setting offers both potential benefits and challenges. On the positive side, Chat GPT can serve as a valuable tool for enhancing student engagement and fostering interactive learning experiences. Its ability to generate instant responses to queries can be leveraged to stimulate classroom discussions and provide additional support for students seeking clarification on various topics. However, concerns regarding misinformation and ethical use must be carefully addressed. Educators should establish clear guidelines for students, emphasizing the importance of critically evaluating information generated by Chat GPT, considering its potential as a supplementary educational tool while addressing the need for responsible use and critical thinking skills.

For the third prompt, fifteen people thought response 1 was written by ChatGPT, twenty thought response 2 was written by ChatGPT, and five responded with “I don’t know.” To close out this first section of the event, the moderator disclosed that response 1 was written by an undergraduate student and response 2 by ChatGPT. Ironically, the responses to this prompt, about ChatGPT itself, proved the most difficult to attribute correctly. Although the vote was close, the majority of the audience still judged correctly.

Identifying Indicators

Following this activity, audience members considered whether certain indicators in the responses had swayed them one way or the other. One participant mentioned the smoothness of ChatGPT’s responses, commenting that this polish signaled that a response had been written by ChatGPT. Another remarked that they were actually happy to see mistakes, because “mistakes = soul.” In response, another audience member explained that ChatGPT can actually be programmed to make mistakes, making detection all the more complicated. Other indicators mentioned included ChatGPT’s heavy use of descriptive adjectives and its tendency toward long, run-on sentences that nonetheless flow with ease. The way ChatGPT’s final sentence seemed to “perfectly tie the bow” on a summary also suggested to some participants that ChatGPT wrote the response, since students are less practiced at crafting the perfect concluding sentence. Word choice came up as an indicator as well: in response to the third prompt, ChatGPT used the word “educator,” which clued one person in to identifying ChatGPT as the writer. One participant also made the interesting observation that certain responses mirrored the language of Santa Clara University, signaling that a response had been written by one of the three undergraduate writers, all of whom attend SCU and are immersed in its culture.

Since the second prompt, about abortion, seemed to be the easiest for participants to judge, a few observations were made about the difference between prompt 2 and the others. One professor noted that when writing about abortion, students “have their own way in,” meaning that humans come to an issue, especially one as contentious as abortion, with a particular stance. When a human tries to write something as broad and “neutral” as “write about the ethics of abortion,” it can therefore be difficult to keep their own opinions out of the response. And in trying to do so, their response can come off as a “non-answer,” so broad or bland that it lacks the nuance an AI response gives the issue. Another person mentioned that ChatGPT is impressive at condensing all of the perspectives on the internet, allowing it to more effectively present an “ethics of abortion” that includes many perspectives and remains impartial. In this way, ChatGPT always seems to take a “mile-high view.”

AI “Hallucination”

[Figure 1: a screenshot of ChatGPT’s response to a prompt asking it to write a positive review of the Fyre Festival.]

[Figure: the first half of a screenshot of ChatGPT’s response about the sources Bill Joy cited in the article.]

“Smartness,” Appropriate Use, and the Student-Teacher Relationship

Some limitations of ChatGPT discussed at the event include that it may occasionally generate incorrect information, may occasionally produce harmful instructions or biased content, and has limited knowledge of the world after 2021. When the audience considered the question “What do we mean by ‘smartness’ or ‘intelligence’?”, the discussion turned to the fact that ChatGPT can mimic human cognitive processes but cannot critically engage with material in the way humans can. People should therefore recognize that ChatGPT, while it has access to a wide variety of information, cannot “think” or “be smart” in the way that humans can.

The next subject concerned whether and when ChatGPT should be used. The moderator presented the idea of deskilling, the loss of professional skills due to the advancement of technology. ChatGPT certainly presents a risk of deskilling when it comes to writing, but given the link between writing and thinking, especially in more complex subjects like ethics, it could present a risk of moral deskilling as well. Moral deskilling is the loss of skill at making moral decisions due to lack of experience and practice. The ethical nature of the prompts was meant to emphasize the importance of moral decision making and to ask whether ChatGPT can substitute for it or whether it will impede our human ability to make moral decisions.

In the discussion portion, a university staff member mentioned that ChatGPT can be particularly helpful on the operations side for task-oriented writing. On the academic side, however, several people noted that whether ChatGPT is really beneficial becomes more controversial. One must differentiate between cases of efficiency, where ChatGPT can aid processes, and cases of debate about pressing issues. Another participant suggested that using ChatGPT to get started, or as a “work buddy” that helps you figure out what steps to take next, could be a good use of the tool. A concern was raised about ChatGPT’s impact on younger students: “college students might not need that much work on summarizing, but they had to learn how to do that before.” Learning the emotional skills of navigating hard questions or facing a blank page is important and serves as a foundation for educational advancement. Another participant mentioned that ChatGPT customizes responses to its users, a warning that we should approach it with caution, since the information we put into ChatGPT could increase “groupthink” and “isms” (racism, sexism, etc.).

Lastly, the audience considered how ChatGPT has changed the relationship between students and professors. The moderator suggested that an answer to this question may depend on one’s understanding of the purpose of higher education. Some have argued that in contemporary society, higher education is meant to help one secure a job and provide the potential for a higher salary and economic stability; this is an instrumental view of education. Defenders of a more liberal arts approach may counter that higher education is meant to provide a space for well-rounded learning, self-reflection, and self-motivated intellectual engagement; this is the intrinsic view of education. One’s position on higher education’s purpose will shape how one judges AI’s potential to assist or limit in this environment. Take, for example, students who struggle with tasks such as writing. If someone holds the view that higher education is aimed at job security and earning potential, then teaching students how to use AI technologies like ChatGPT, including skills such as effective prompt writing, will be beneficial. On the other hand, for those who hold that learning and self-improvement are the purpose, the use of ChatGPT may be seen as decreasing a student’s capacity to write or think critically on their own.

Audience members mentioned that ChatGPT has created a sense of distrust between students and faculty. A student observed that when students see a syllabus that forbids any ChatGPT or AI use at all, it gives the impression that professors assume students will use ChatGPT rather than giving them the benefit of the doubt. Professors, on the other hand, do see students using ChatGPT and feel they cannot trust everyone, because they have seen students “cheating” and must address it. One professor weighed in, saying that she has always had to think about her policies on cheating on exams or homework. She explained that there are ways of incentivizing cheating, such as giving a take-home exam with easy questions or assigning a ton of busy work that cannot be completed in a short amount of time. There are also ways of disincentivizing it, such as including a policy on accommodating emergencies and discussing options. ChatGPT introduces new challenges, but she goes through the same process of thinking about cheating at large.

Altogether, the event showed that it may be more difficult than one might initially realize to differentiate between human-written and ChatGPT-written responses. Even so, indicators such as flow, word choice, efficient summarizing, and format can help expose a response written by ChatGPT. When it comes to ethical decision-making, it remains important to flex our “moral muscles” to combat potential moral deskilling and AI hallucinations. At this time, ChatGPT does not possess true agency or critical engagement. Our perception of its “smartness” largely comes from how human it sounds, but recognizing ChatGPT’s limitations remains essential for proper use. While ChatGPT may be beneficial for task-oriented writing, it encroaches upon the academic setting, which, from an intrinsic view of education, is meant to prompt learning, growth, and personal engagement. The introduction of AI into the educational landscape creates a tension between students and faculty that must be addressed. Overall, this event highlighted the human capacity to think critically, to be ethically minded, and to be aware of the challenges that AI brings to society and to educational institutions more specifically.


Feb 27, 2024