
Cura Personalis and Generative AI in Education


Caring for the whole person in the time of chatbots

Irina Raicu

Irina Raicu is the director of the Internet Ethics program (@IEthics) at the Markkula Center for Applied Ethics. Views are her own.

In early October, as part of Santa Clara University’s Grand Reunion weekend, I gave a talk about AI in education. The following blog post is based on my notes for that talk. No generative AI was used in brainstorming, drafting, or polishing it. I owe much gratitude, however, to my colleague David DeCosse, Director of the Religious and Catholic Ethics and Campus Ethics Programs, whose input greatly shaped this—together with the insights and observations shared by the participants in two recent meetings of Santa Clara University’s Tech Ethics Faculty Group, which addressed the impact of generative AI on their students and their teaching practices.

To get us started, here is a concise definition of cura personalis from the website of Regis University: “Latin phrase meaning ‘care for the person,’ cura personalis is having concern and care for the personal development of the whole person. This implies a dedication to promoting human dignity and care for the mind, body and spirit of the person.” To this, Santa Clara University’s Office of Student Life adds that “[c]aring for the personal development of the entire person (emotional, mental, spiritual, physical) is a community effort including support from families, peers, faculty, and staff.”

When we talk about generative AI, we often relate it to the mind. So it’s worth focusing on AI’s impact on the other parts of cura personalis—dignity, spirit, body, and wholeness—as they play out in the higher education context.

Generative AI has played a significant role in education for a year now. ChatGPT was released to the public on Nov. 30, 2022—and college students were already using it long before most instructors became aware of it.

Here is how one data scientist, Colin Fraser, has described what tools like ChatGPT are and do: language models are programmed to “record empirical relationships between word frequencies over a historical corpus of text, and use those empirical relationships to create random sequences of words that have similar statistical properties to the training data.” Fraser adds, “The only thing anchoring the output of a language model to the truth is the truth’s relationship to word frequencies in the training data, and nothing guarantees that relationship to be solid.”
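
To make that description concrete, here is a deliberately tiny, hypothetical sketch (written for this post, not drawn from any real system) of the statistical idea Fraser describes: a toy “bigram” model that records which words follow which in a training text, then samples new word sequences from those recorded frequencies. ChatGPT is built very differently, as a large neural network trained on an enormous corpus, but the underlying point holds: the output is anchored to word patterns in the training data, not to truth.

```python
import random
from collections import defaultdict

# Toy illustration only: record "empirical relationships between word
# frequencies" in a tiny corpus, then generate random sequences with
# similar statistical properties. Nothing here checks whether the
# output is true; it only resembles the training text.
training_text = "the cat sat on the mat the dog sat on the rug".split()

# Record which words follow which, and how often.
following = defaultdict(list)
for current_word, next_word in zip(training_text, training_text[1:]):
    following[current_word].append(next_word)

def generate(start_word, length=8):
    """Sample a word sequence whose statistics mirror the training text."""
    words = [start_word]
    for _ in range(length - 1):
        options = following.get(words[-1])
        if not options:  # no recorded continuation; stop early
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug" (fluent-sounding, but never fact-checked)
```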

Unfortunately, many of the people using such models don’t understand that limitation—especially since the outputs of those models are often accurate, and often impressive. The authoritative tone of those outputs (which is a design choice) also leads many to overestimate their relationship to “truth.”

Surprisingly, the company that developed ChatGPT claims to have been surprised by its use in the education context. Two months after the public release of ChatGPT, OpenAI blogged, “We are engaging with educators in the U.S. to learn what they are seeing in their classrooms and to discuss ChatGPT’s capabilities and limitations…. These are important conversations to have as part of our mission is to deploy large language models safely, in direct contact with affected communities.” 

The impact on the affected communities of students, educators, administrators, parents, etc., however, goes well beyond the classroom.

So what changes might students encounter in the short term or in the long term as a result of the incorporation of AI tools in higher education?

Generative AI might impact students’ interactions with educational institutions themselves, especially in the admissions process. Chatbots might offer information about admissions to potential applicants, or help applicants navigate that process. Chatbot conversations might be used to help prepare students for interviews. Might chatbot interviews eventually replace interviews with admissions officers? In the meantime, there is little doubt that tools like ChatGPT are already being used to partially or fully draft application essays.

Note that some of those uses might make the admissions process less stressful for students—or more so (either way impacting both their physical and mental well-being). In terms of questions related to spirit and dignity, would interactions with a chatbot be an adequate substitute for interactions with a person capable of caring? On the other hand, for students with fewer resources and sources of help in managing applications, might AI tools be better than nothing, or more accessible than other options they might have otherwise?

In the classroom, in courses across disciplines, generative AI is of course already impacting students involved in various types of school projects. Some students are using chatbots to conduct research, for example; in this context, a key concern right now is the accuracy (or lack thereof) of generated results. A second, related question is what will happen to accuracy as online information streams become increasingly polluted with AI-generated misinformation and what Kate Crawford has called “hallucitations”—i.e., citations to nonexistent sources, or to content that doesn’t exist but is ascribed to real sources.

But there are also other types of impacts on school projects. Students have noted, for example, that chatbots can help them identify research topics, draft summaries of readings, or work through the pre-writing part of essays. Of course, LLMs can also draft entire essays. All of these uses might be viewed as helpful learning tools (setting aside the issues of plagiarism and lack of originality and accuracy in the context of full drafts), but they might also lead to deskilling—or the loss of the opportunity for students to develop and improve skills in the first place.

As it turns out, the process of summarizing, for example, helps people remember. And prewriting and writing help people decide what they want to say. Simply being handed a summary or pre-written text doesn’t have the same effects on the mind.

As with its uses in the application process, using generative AI for classes might help reduce students’ stress levels (which, of course, would impact both body and mind). For some people, though, both research and writing are also related to spirit—to the desire for self-expression. Will students who feel that they express themselves through writing reject tools like ChatGPT? Will “prompt engineering” (writing and refining the guidance/requests for LLM outputs) become a different kind of self-expression? Will the interplay with chatbots lead some to develop different, related skills?

In the context of education, of course, self-expression is partly shaped and limited by the fact that it is ultimately graded. So it’s important to note that generative AI also has an impact on grading. Some teachers have been testing its usefulness for that purpose. Moreover, back in 2019, long before ChatGPT became a recognizable term, a Motherboard article warned that AI systems “often called automated essay scoring engines” were already “either the primary or secondary grader on standardized tests [for high school students] in at least 21 states”—and in 18 of those, according to a survey conducted by the publication, “only a small percentage of students’ essays… [would] be randomly selected for a human grader to double check the machine’s work.”

The article noted concerns from experts that such tools amplify grading biases, but also pointed out a more basic issue: a professor and students at MIT had “developed the Basic Automatic B.S. Essay Language (BABEL) Generator, a program that patched together strings of sophisticated words and sentences into meaningless gibberish essays,” and had demonstrated that “the nonsense essays consistently received high, sometimes perfect, scores when run through several different scoring engines.” Motherboard then replicated that result. So flawed grading algorithms were leading to perverse incentives, rewarding the opposite of good writing. They also functioned in complete opposition to what cura personalis entails.

It's harder to fault students for using AI to draft essays if the education system is willing to use AI to grade them.

Of course, since the advent of ChatGPT, we are now also seeing the impact of related edtech tools that claim to be able to determine whether student-submitted writing is in fact AI-generated. Some instructors are turning to such tools—which are themselves flawed and have been criticized in a wave of articles detailing false accusations of plagiarism (in one extreme case, a professor threatened to fail half of the students in his class because he had run their essays through ChatGPT and took its analysis as truth).

The use of algorithmic assessment therefore impacts mind, body, and spirit: the mind by creating incentives that miseducate students (at least when it comes to writing essays); the body by compounding the stress of false accusations; and the spirit by diminishing student dignity and treating students as data points, not whole persons.

A related dimension of generative AI is its impact on creativity. Students are creators, too, of course. Some instructors and students view generative AI as a tool that can help them express themselves creatively: in one SCU engineering class that I attended, for example, a student noted that he can’t draw, but he really enjoys the ability to prompt an image-generating model and tweak the results to reflect what he would have drawn, had he been able to—images that would not have existed outside his mind otherwise. Others mentioned using generative AI tools that help with music composition.

But AI-generated writing, at least, has been criticized for being clichéd and verbose (likely as a result of a combination of the data it’s trained on, the probabilistic nature of its output, and the human feedback used in fine-tuning; as OpenAI noted in its blog announcing the launch of ChatGPT, “trainers prefer longer answers that look more comprehensive”). The impact of generative AI on student creativity therefore seems to vary from medium to medium, and with the ways in which students (and instructors!) use the generative AI tools.

That’s why it’s deeply important to assess different uses carefully.

As my colleague David DeCosse put it bluntly, among the various uses discussed above, some “strike at the heart of personhood and what we're trying finally to do with a Jesuit education in fostering that dignity and agency for the sake of the good.”

We need to reject the ones that do, and continue to assess the others.

Fostering agency within students does mean that it’s important for universities like Santa Clara to prepare them to function in a world that includes AI in a lot of contexts. They need to understand how to use it; what its limitations are; where it belongs, and where it doesn’t. They also need to understand how it impacts them even when they’re not the ones actively using it—when it’s being used on them. And they need to understand that they are not powerless in their interactions with the technology—that, even if they are not the ones building it (though some of them will be), they have a role to play in shaping it, not just as consumers but as citizens.

So what do we know so far about how generative AI is being used in Santa Clara University courses? The following are not results of comprehensive research but insights gleaned from two meetings of the Tech Ethics Faculty Group at Santa Clara—a self-selected community of faculty from various departments, who get together quarterly to discuss ethical issues related to technology (sometimes related to particular faculty projects, other times simply prompted by new research or social trends). Once in the Fall of 2022 and once this Spring, we asked the group to discuss how ChatGPT was impacting their courses.

As of now, some faculty require students to use it. Some have banned it from their classrooms. Some allow it to be used as long as the use is clearly disclosed and the extent of use can be documented. Some mentioned taking the opportunity to use chatbots as a means to challenge students to develop their own critical thinking skills, and their own comfort level with and understanding of the technology.

There is a lot of experimentation; it’s not yet clear what works.

The use of generative AI plays out differently in graduate courses vs. undergraduate/introductory courses in which students are supposed to learn key basic skills (it’s more problematic in the latter). It also plays out differently in different disciplines: ChatGPT might be helpful to students in an accounting class, for example, helping to clarify concepts, but it might be harmful in a writing-intensive class.

Many instructors are discussing generative AI as part of a broader conversation about academic integrity. The new tools also create new questions of equity. It used to be, for example, that rich students could pay others to write essays or assignments for them—now all students can ask an AI tool to do that… But “democratizing” plagiarism might play out differently for different students, too: paid versions of tools like ChatGPT do more than the free ones, and sophisticated users of LLMs will get different results than those who know less about the tools. Professor Ethan Mollick, who teaches at Wharton, recently tweeted, “If you are a teacher who is confident that AI does a bad job on your assignments based on a few very ‘ChatGPT’ answers, I'll bet that you are only catching the folks bad at prompting.”

Of course, there is also the fact that entering a prompt into a chatbot feels different from reaching out to pay someone for an essay. Plagiarism might be encouraged by these tools in a different way, with its impact on the spirit somewhat camouflaged by the technology.

And what about cura personalis for faculty and staff?

Most of us are currently struggling to understand these new tools, to respond to student usage, and to determine whether/how to use them ourselves, both ethically and effectively. We are trying to convey to students what the limitations and usefulness of these tools are—even as we are still learning about them ourselves. We are engaging with students in conversations, as co-learners. In a way, this struggle to address the impact of AI in education might help students view their instructors as whole persons, too.

Cura personalis prompts us to consider the “care for the mind, body and spirit” of all of those involved in the world of higher education. The impact of AI tools on the human spirit, in particular, is not often discussed.

When I asked my colleague David DeCosse what he might say about what spirit involves, he mentioned “longing for meaning, mystery in life, and orientation to something true.” Longing for meaning is obviously something very different from searching for information or from statistical analysis of information. Mystery in life is a reminder of all the knowledge we still don’t have, and of the fact that not all that matters is quantifiable (think AI versus “love that surpasses all understanding”). And orientation to something true is (as we are discovering through the use of large language models) quite distinct from an analysis of word frequencies in large training datasets, packaged to sound credible.

The notion of cura personalis, therefore, helps clarify the distinction between personhood and AI: personhood implicates embodiment, dignity and agency, and spirit, not purely a synthesis and analysis of data—however massive the datasets are.

Part of the work of educating the whole person in the time of generative AI should be to help students understand that, too.

Illustration: XK Studio & Google DeepMind / Better Images of AI / AI Lands / CC-BY 4.0

Nov 30, 2023