
Guidelines for the Ethical Use of Generative AI (e.g., ChatGPT) on Campus

ChatGPT: Optimizing Language Models for Dialogue. (Richard Drew/Associated Press)


Nnenna Uche ’23, Sean Grame ’24, Callie O’Neill ’23, & Kailyn Pedersen ’23


Nnenna Uche, Sean Grame, Callie O’Neill, & Kailyn Pedersen are all 2022-23 Hackworth Fellows and members of the Campus Ethics team at the Markkula Center for Applied Ethics. Views are their own.

Over the past months, there has been an influx of innovative generative artificial intelligence programs with remarkable capabilities. Most notably, ChatGPT has changed the way the average person uses the internet. While search engines like Google provide seemingly infinite amounts of information, ChatGPT and other AI-driven tools can complete complex tasks within seconds. ChatGPT can research and plan vacation itineraries, rewrite resumes, draft emails, and even write substantial papers on almost any topic. While this innovation is exciting, it is also daunting when considering its impact on academia. Now that students can ask an AI engine to write papers and discussion posts, complete math assignments, and even write complex code for computer science courses, how can professors ensure academic integrity? More importantly, how can students ethically incorporate the resources that AI provides without using these systems to cheat?

The Campus Ethics team, a group of Hackworth Fellows from the Markkula Center for Applied Ethics, has studied this dilemma and developed a guide to help students use ChatGPT ethically. This guide outlines the strengths and weaknesses of ChatGPT, aiming to inform students accurately about the pros and cons of its use. From there, students can reflect on the recommendations proposed by the Campus Ethics team in order to make ethical choices when using AI tools such as ChatGPT. As technology continues to advance rapidly, the Markkula Center for Applied Ethics aims to provide students, faculty, and staff with resources for navigating both the ethical dilemmas and the exciting innovation that result.

"It’s not so much the tools themselves but the underlying deception that’s the problem.”

~Irina Raicu, director of the Internet Ethics Program at the Markkula Center for Applied Ethics at Santa Clara University.

What is ChatGPT?

In the fall of 2022, a beta version of ChatGPT launched, allowing users to test the key features of the new generative AI application. ChatGPT is an artificial intelligence “chatbot” capable of producing high-level responses. It can generate comprehensive answers to questions ranging from “What are fun ideas for my sister’s birthday?” to more complex questions such as “Why do people laugh at jokes?” In academia, it is capable of writing a poem, drafting an essay to a specified word count, and even solving physics problems. We acknowledge that these technological advances are astounding and deserve to be celebrated. However, we believe that ChatGPT can only be a useful tool if used correctly. As technology advances, artificial intelligence will evolve, and the risks associated with it will grow. From our student perspective, we outline a number of those risks, along with suggested tips for use, rooted specifically in academia. The following are our suggested guidelines, from an ethical perspective, on how to use the current ChatGPT technology.

Some examples are as follows:

Sydney has been assigned a draft of a paper for her philosophy class. The assignment is due by midnight. She’s heard of this new software called ChatGPT, and it sounds almost too good to be true. Her friend told her that it will write papers for you on any topic instantly, and this is tempting. When making her decision, Sydney should carefully consider:

  • What is at stake in this decision?
  • Is there anyone who will be affected by this decision, directly or indirectly?
  • Are there duties or obligations to adhere to?

Overall, we suggest that a useful approach in this scenario would be to ask ChatGPT to compile a summary of empirical research. In our view, ChatGPT’s best use is for offering clarification and assistance with research or the collection of comprehensive data.

Using ChatGPT can help a student brainstorm topics and frameworks for research papers. Writing a comprehensive paper can be overwhelming, especially when working against the clock. ChatGPT can be useful for finding a few articles with which to begin background research. In the same way a calculator is useful for handling large computations, ChatGPT can be used to narrow down a topic and offer suggestions on areas of focus.

Weaknesses of Generative AI

Generative AI is Inaccurate 

Although it can function as a useful tool, in its current form ChatGPT has many weaknesses, as do many other generative AI platforms. These include typographical and grammatical errors that would be obvious to any professor reading the material. Therefore, copying information directly from ChatGPT offers no real advantage.

Cheating isn’t the only concern. On its website, ChatGPT acknowledges that “occasionally” it provides “incorrect information,” and admits it tends to produce “longer answers” in an effort to look more comprehensive. 

Instead, asking ChatGPT to “Explain ____ in simple terms” is one example of how it can be used to synthesize information productively.

Generative AI has Implicit Bias & Contributes to Moral Deskilling

Michelle is taking an ethnic studies class for her diversity requirement. She has to write a 10-page paper about prejudices in the health care system. She knows she should have started much earlier in the quarter, but she put it off because she didn’t know where to start with research, and now the deadline is just a few days away. She remembers her roommate telling her how she used ChatGPT to write most of her paper and how it saved her hours of time. Michelle doesn’t consider herself a cheater, but neither does her roommate. She decides to use ChatGPT to produce a first draft and then manually change the structure to make the paper her own. She does this by asking, “write me an essay about prejudices in the health care system.” She is pretty satisfied with what she sees. She fixes the typos, changes some wording, and then turns it in.

In this case, the largest concern we want to focus on is the bias that can seep into the answers generated by ChatGPT. For questions concerning personal opinions on a controversial topic, such as prejudices in our society, it is not beneficial to ask generative AI, because it will simply provide answers that reflect the most popular views found online. ChatGPT, like other generative AI systems, relies on pattern recognition aligned with the most widely repeated ideas. In other words, it has built-in assumptions that reproduce the most widely held biases when generating information.

If Michelle had simply asked, “Find a few articles about prejudices in the health care system,” to start some background research, and then drafted her own paper, this would have been a much more appropriate use of ChatGPT.

As students seeking an education, we believe one of the most important aspects of that education is the practice of finding our moral compass and applying what we learn to real-world contexts, so that we can contribute to society in productive ways that support the common good. To do this, it is necessary to learn, and to demonstrate our knowledge through means such as tests, projects, and papers.

So, if we suddenly start turning to ChatGPT to handle the nuances of academia, we may lose sight of the very purpose of a college education in the first place. We believe that we must be thoughtful when using tools such as AI, so as not to “deskill” ourselves, morally or otherwise. As Shannon Vallor of the University of Edinburgh (formerly of Santa Clara University) writes, “Even if intelligent machines could somehow direct all human interactions to produce the most just, harmonious, and compassionate outcomes possible, we would be diminished as creatures were we utterly helpless to act justly and compassionately without their assistance.”

Overview of Ethical Concerns Related to ChatGPT

A few suggestions regarding the ethical use of ChatGPT: 

  1. NEVER directly copy any words used by ChatGPT or any generative AI.
  2. Always be wary of the blatant biases that generative AI systems may harbor.
  3. Do not rely on ChatGPT for accurate information; utilize a variety of reliable sources when researching important topics.
  4. Treat ChatGPT as an additional learning tool, not a vehicle to avoid honestly completing academic work.
  5. Whenever using ChatGPT, be sure to double-check all information against other sources to ensure accuracy.
  6. Be specific and concise when interacting with ChatGPT as its responses will only be as strong as the prompts.
  7. Before using ChatGPT, remember your own capabilities and the value gained through problem-solving.  
  8. Before you use ChatGPT, ask yourself whether your professor would approve of the way you are using it, and whether you consider it consistent with academic integrity.

We propose a few questions for students to think about as they assess their use of ChatGPT and whether or not it is ethical. We pose these questions from a virtue perspective, which holds that our actions should be consistent with certain ideal virtues that provide for the full development of our humanity. It asks of any action, “What kind of person will I become if I do this?” or “Is this action consistent with my acting at my best?”

  • Does using ChatGPT contribute to the common good?
  • If I use ChatGPT to help in this case, what other shortcuts would I use?
  • Am I cheating myself of the opportunity to learn in other ways?
  • Am I being the very best person I can be?
  • Does using ChatGPT take away the value I would gain from problem-solving?

Although these questions offer valuable opportunities to assess your decision making, we recognize that they most likely don’t come to mind for most students at first. We encourage you to think about them in order to become more aware of how easy it can be to slip into cheating.

For additional questions to guide ethical decision making, please refer to the Markkula Center’s Framework for Ethical Decision Making.

Through this resource, we aim to highlight the weaknesses of generative AI. We recognize that ChatGPT can be used to do a great deal of synthesizing and summarizing. Although this may seem to be a great use of the application, we hope to persuade students to steer away from this type of learning. We also want to highlight that ChatGPT can actually circumvent intellectual formation rather than support it, because of its well-documented inaccuracies and implicit biases. Overall, we want students to understand that ChatGPT can be effective if they use it as an aid rather than a first resort.

“The only thing we can do for the future is to do the right thing now.”

~Wendell Berry

 

May 22, 2023

