Markkula Center for Applied Ethics

Teaching a Course on AI Ethics as Part of Engineering/CS Curricula

Brian Patrick Green

Brian Patrick Green is the director of Technology Ethics at the Markkula Center for Applied Ethics. Views are his own.

There are many ways to integrate ethical education into the software engineering/CS curriculum, for example via analysis of case studies, discussions of essays on ethics, ethics modules, integration with senior design projects, and entire courses on ethics. The School of Engineering at Santa Clara University incorporates several of these means of teaching ethics, and I teach a 2-unit elective course on AI ethics as part of these efforts.

My course covers 16 topics in AI ethics, including safety & reliability, transparency, good and bad uses, environmental effects, bias & fairness, impact on employment, inequality, the automation of ethics, moral de-skilling, robot consciousness and rights, AGI and superintelligence, dependency, AI addiction, psycho-social effects, and the effects of AI on the human spirit. Given the broad range of topics, over the course of a ten-week quarter, some issues receive quite a bit of attention and others less. But student feedback suggests that the course has quite an impact.

At Santa Clara University, this is an MA-level course, but senior undergrads can also join. Since the course is an elective, the students who take it are already interested in the subject matter. Much of class is set up so that students can pursue their interests through asking questions, talking to each other, and researching for papers and presentations. As a subject, ethics is particularly well-suited for group discussion: in fact, such discussions can be vital for illuminating ethical issues and tensions.

One of the benefits of offering an entire course on ethics is that it emphasizes to engineers and computer scientists that ethics truly is its own discipline (and not a subject merely subservient to rare engineering cases), while also being deeply connected to computer science and engineering in its application. The opportunity to go over multiple real-life cases reinforces, again and again, that these ethical issues are not theoretical: they are reality, every day.

Reading books on business and AI also highlights some key ethical issues for the students. By assigning Kai-Fu Lee’s book AI Superpowers and Peter Thiel’s book Zero to One, I invite my students to probe at the assumptions underlying parts of Silicon Valley’s culture, as well as the computing cultures in other parts of the world, and consider their strengths and weaknesses. This deep analysis of the mentalities of the global computing industry, which form so much of the ethical decision-making in the industry, would not be likely to happen outside a dedicated ethics course.

In my AI ethics course, students also get to write two essays and do one group presentation on topics of their choice, either chosen from a list or in consultation with the instructor. These deep dives into specific subjects, combined with the application of ethical tools learned in class, give the students practical experiences in thinking about and assessing ethical issues in computing and AI. While the experience is obviously not the same as being in a corporation and actually making real-life choices, it is remarkable how adept some students become at this ethical analysis in just a few weeks of intensive study.

When considering how to construct an engineering and/or computer science curriculum that emphasizes ethics, having an entire course on ethics is extraordinarily helpful to students in the long run. Ethical thinking really is a skill, and the more we use it, the better we become at it. Sometimes people refer to ethical thinking as being like a muscle that we can exercise, or like a skill such as bird watching: the more you practice it, the more birds (ethical issues) you will see. (For an unpacking of that second simile, see “Overview of Ethics in Technology Practice.”)

As a last point, the learning objectives for my course are adapted from a longer list of suggested learning outcomes currently available on the Markkula Center website. [1] The ones I focus on include:

1. Identifying ethical issues in AI & ML work, applications, and/or use cases

2. Applying specific concepts of normative ethics (such as duties, virtues, justice, risk, harm, etc.) to AI & ML contexts

3. Identifying the relevant moral stakeholders in AI & ML scenarios

4. Identifying some of the important moral values, interests, and conflicts at stake in particular scenarios

5. Applying one or several general frameworks for ethical decision-making in the context of AI & ML projects

6. Identifying and explaining fundamental ethical concerns in AI & ML (e.g. privacy, security, fairness, transparency, accountability, safety, control, deception, trust, etc.)

7. Recognizing established professional codes of computer ethics

8. Predicting and describing ethically based objections or concerns about AI & ML from the perspectives of a diverse range of stakeholders inside and outside of AI & ML

While course modules or other curricular activities certainly help to achieve some of these objectives, an entire course on ethics in AI/ML or CS simply allows for more. In fact, a series of courses on the more applied and technical side of the subject might be required if the goal is to fulfill all of the learning outcomes listed on our website.

One of the delights and challenges of teaching technology ethics is that both the technology and the ethical contexts are constantly changing. The field is always new and in need of more and better thinking. Teaching students useful ethical concepts and tools, and inviting them to use their judgment and add their voices to the conversation, can help prepare them for the complex world that they will enter—a world in which they will need to make good ethical decisions, explain them, and advocate for them.


[1] “Embedding Ethics into Computing Curricula: Resources and Suggestions,” Markkula Center for Applied Ethics website, N.D., available at:

May 7, 2021