
Ethics and AI

Julian Dreiman ’21 is a 2020-21 Hackworth Fellow at the Markkula Center for Applied Ethics. Views are his own.

A synopsis of each survey project is provided below and includes summary infographics and links to the full reports.

 

Student Opinion of AI & Topics Related to AI Ethics

Artificial Intelligence (AI) has changed, and is still changing, almost every facet of how the world works. From the automation of formerly mundane tasks, to predictive policing, to content promotion, to job applications, AI is omnipresent and having an unprecedented impact. Over the course of the 2020-2021 academic year I gathered Santa Clara students’ opinions on AI and its potential ethical concerns. In a sixteen-question survey I asked about four topics: fairness and equity, social impact, moral status and intelligence, and use cases and settings. Several students also chose to share their greatest concerns about AI in an optional free response section.

One clear takeaway from these data is that students are aware of the pressing ethical concerns that AI poses. For example, students understand that some level of preferential treatment is needed to ensure an equitable outcome, and, furthermore, that equitable AI is a preferable outcome to mere equality. Moreover, in the free response section, where students were given space to share their greatest concern regarding AI, many expressed worries about increased inequality, institutionalized bias, and institutionalized prejudice. Students also expressed concern that AI would lead to increased job loss and mass poverty. In addition to these important, albeit highly publicized, AI ethics concerns, students voiced a more nuanced set of concerns.

One of these areas is human interaction. In the free response section, students raised concerns about the loss of meaningful human interaction and a detachment from human values resulting from AI. These sentiments are supported by the data, which show that students are wary of AI being used for tasks commonly associated with human interaction and interpersonal skills. For example, a significant portion of students believe that AI should not be used to judge teachers’ effectiveness and quality, signaling that these are important tasks that should remain under human purview. Moreover, students expressed concern that many people do not understand how AI truly works, that people’s expectations of AI are too high, and that people place too much trust in AI.

Focusing on questions of moral responsibility and status, students shared contrasting views. On one hand, students believed that if AI reached human-level intelligence, it should not be granted a moral status similar to that of humans. On the other hand, students broadly believed that AI models should be morally responsible for their outputs. (This contrast, however, may have resulted from poorly worded questions or from a misunderstanding of the volitional capabilities of AI; I assume that students believe someone should be held morally responsible for AI outputs, but perhaps not that AI itself has the traits necessary for moral responsibility.) Lastly, students seemed generally optimistic about AI’s future impact on society and have a clear notion of where and how AI should and should not be used.

The chart below contains a summary of findings. The full data can be viewed here: 

 Summary of Findings: Student Opinion of AI and Topics Related to AI Ethics

 


Student Opinion of Personal Data Collection

More than ever before, the topics of digital identity and personal data are at the forefront of people’s minds. Over the course of the 2020-2021 academic year I gathered Santa Clara students’ opinions on the collection, sale, and usage of personal data. With sixteen survey questions and a free response section, I aimed to measure whether students think digital privacy should be a human right, what role the federal government should or should not play in the personal data marketplace, how comfortable students are with the ways their personal data is currently collected, and what steps they take to protect their personal data online.

A total of 87 students completed the survey, representing all three colleges and all four years of undergraduate study. The data presented in this document are a selection of important questions and answers organized into five categories. Each category includes a visual representation of survey responses followed by a written analysis and explanation.

The first clear takeaway from these data is that students place great importance on digital privacy and the protection of their personal data. A vast majority of students believe that digital privacy should be a human right alongside the rights to life and freedom. This sentiment is echoed by students’ free responses to the question, “What is your greatest concern regarding the collection of personal data?” Many students touched on the idea of governmental overreach or of being punished for what one says online. That said, one respondent noted that they were unconcerned with the government collecting their personal data so long as it was used for important national security measures.

Related to the importance of digital privacy is the privacy paradox: while students acknowledge their concerns with the current personal data marketplace, only a minority take substantial action to protect their personal data. In their free responses, students noted that they are concerned about corporations buying their personal data (respondents were most commonly concerned about location data). Much concern was also voiced about being the target of personalized ads and even personalized misinformation. More practically, there were concerns about data leaks, hacking, identity theft, financial crimes, and being induced into purchasing more goods than necessary.

Another interesting conclusion drawn from these data is that students hold mixed opinions about what the federal government ought to do regarding personal data and, simultaneously, about how trustworthy the federal government is in matters of digital privacy. On one hand, a large majority of students believe that the federal government should regulate the personal data marketplace. A similarly large majority also wish the federal government would do more to protect their personal privacy online. At the same time, those same students do not trust the federal government in matters of personal data, the internet, and digital privacy, and believe the federal government should not have the right to surveil citizens’ digital activity while at home.

The chart below contains a summary of findings. The full data can be viewed here: 

Summary of Findings: Student Opinion of Personal Data Collection

