Videos from "The Character of AI: A Technology Ethics Conference"
On July 21, 2022, the Markkula Center for Applied Ethics hosted a day-long conference on AI ethics. This event was distinct from typical academic conferences: each session brought into conversation academic philosophers, graduate students, and technology practitioners. Posted below are video replays of the panels discussing AI and character as well as AI and diversity, equity, and inclusion. The speakers engaged and challenged each other in dynamic ways, all with the overarching goal of conceptualizing AI that enables and encourages the enrichment and cultivation of human moral intelligence.
The conference’s keynote address was delivered by Prof. Shannon Vallor, the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI) at the University of Edinburgh, Scotland.
Note: Video of the keynote presentation will be added to this page soon.
The conference featured three panel discussions:
Panel 1: AI and Character
What is the “character” of AI? What role does AI have in shaping character? What virtues might be required by those who make and control AI? What virtues—or vices—might be evoked or cultivated in those who are influenced by AI tools increasingly being deployed in our society?
Panel Discussion Held July 21, 2022
The panel was moderated by Brian Green and featured the following panelists:
- Kirk Bresniker, Chief Architect of Hewlett Packard Labs and an HPE Fellow and Vice President
Panel 2: AI and Diversity, Equity, and Inclusion
What is the relationship between AI and DEI, and what should it be? For years, the tech industry has been plagued by a lack of diversity, which has led to numerous major problems not only for the workforce and the companies themselves, but also for their products and for society.
Panel Discussion Held July 21, 2022
This panel, moderated by Susan Kennedy, considered the problem and possible solutions and featured the following panelists:
Panel 3: The Meaning of “Human-in-the-Loop”
The final panel was not recorded. Moderated by Prof. John Sullins, it considered some of the problems associated with human interactions with AI and featured the following panelists:
- Ed Bayes (Everyday Robots)
- Assistant Prof. Susan Kennedy (Santa Clara University)
- Prof. Patrick Lin (Cal Poly, San Luis Obispo)
The day concluded with a conversation and reflections from some of the students who had attended the Summer Institute in Technology Ethics, which had preceded the conference at Santa Clara University.
Speakers and Moderators
Shannon Vallor is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI) at the University of Edinburgh, where she is also appointed in Philosophy and directs EFI’s Centre for Technomoral Futures. Professor Vallor’s research explores how new technologies, especially AI, robotics, and data science, reshape human moral character, habits, and practices. Her work includes advising policymakers and industry on the ethical design and use of AI, and she is a former Visiting Researcher and AI Ethicist at Google. Professor Vallor currently serves as Chair of Scotland’s Data Delivery Group. In addition to her many articles and published educational modules on the ethics of data, robotics, and artificial intelligence, she is the author of the book Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press, 2016), the forthcoming Lessons from the AI Mirror: Rebuilding Our Humanity in an Age of Machine Thinking, and editor of the forthcoming Oxford Handbook of Philosophy of Technology (Oxford University Press, 2022).
Ed Bayes is Head of Policy at Everyday Robots, a project born from X, the moonshot factory, where he leads initiatives ranging from trust and safety to the future of work. His background spans policy, law, design, and engineering; he previously advised the UK Treasury and the Mayor of London on tech and business policy, and has founded ‘AI for social good’ startups.
Marion Boulicault is a Distinguished Postdoctoral Scholar in Ethics and Technology at MIT. She takes an intersectional feminist approach to examining how social variables, such as race, gender, and sexuality, are operationalized within AI and other technological systems.
Kirk Bresniker is Chief Architect of Hewlett Packard Labs and an HPE Fellow and Vice President. Prior to joining Labs to drive The Machine Research and Advanced Development program, he was Vice President and Chief Technologist in the HP Servers Global Business Unit. He currently holds 32 US and 10 foreign patents and is a Senior member of the IEEE (where he co-chairs the Systems and Architectures chapter of the IEEE International Roadmap for Devices and Systems), as well as a founding member of the IEEE Industrial Advisory Board and the Technical Ethics Advisory Board at the Markkula Center for Applied Ethics.
Claudia Passos Ferreira is an Assistant Professor of Bioethics at NYU. She is currently investigating what theories of consciousness can tell us about infants, animals, and AI consciousness; she has also worked on moral development and empathy, and is interested in exploring what the philosophy and science of consciousness tell us about whether an AI system can be sentient.
Brian Patrick Green is the director of technology ethics at the Markkula Center for Applied Ethics. He teaches AI ethics, and previously taught other engineering ethics courses, in Santa Clara University’s Graduate School of Engineering. His academic background is in ethics, religion, social theory, and genetics. Green is author of the book Space Ethics (Rowman & Littlefield, 2021), co-author of the Ethics in Technology Practice corporate technology ethics resources (2018), co-editor of the book Religious Transhumanism and Its Critics (Lexington, 2022), and co-editor of a special issue of the Journal of Moral Theology on AI and moral theology (2022).
Matthew Kuan Johnson is a Research Fellow in the Faculty of Philosophy at the University of Oxford. He explores the surprising ways in which empathy can hinder efforts to make AI more inclusive, and is developing an account of the 'burdened virtues' that facilitate DEI priorities but involve a significant cost to the bearer.
Susan Kennedy is Assistant Professor of Philosophy at Santa Clara University. Before joining SCU, she was a Postdoctoral Fellow at Harvard University, where she worked with an interdisciplinary team to integrate ethical reasoning into the computer science curriculum. Her research focuses on the ethical, social, and political impacts of emerging technologies; she is especially interested in artificial wombs and reproductive technology, as well as the use of AI in health care.
Patrick Lin is the director of the Ethics + Emerging Sciences Group at Cal Poly, San Luis Obispo, where he is also a philosophy professor. As relevant to AI ethics, he is a member of the 100 Year Study on Artificial Intelligence, Center for a New American Security’s Task Force on AI & National Security, and Stanford Law’s Center for Internet and Society. Previous affiliations include the United Nations Institute for Disarmament Research, World Economic Forum, US Naval Academy, Stanford Engineering, and others. He has published extensively on a full range of issues in technology ethics.
Dr. Tina M. Park is the Head of Inclusive Research & Design at the Partnership on AI. She works with impacted communities on equity-driven research frameworks and methodologies to support the responsible development of AI and machine learning technologies. Building on PAI’s Methods for Inclusion project, this initiative aims to research, design, and pilot inclusive practices developed in collaboration with community-based, academic, policy, and corporate partners.
Christopher Quintana is a PhD student in Philosophy at Villanova University. He draws on Aristotelian social and moral philosophy to critically examine both our relationship to, and the nature of, design environments where algorithms are often embedded.
Irina Raicu is the director of the Internet Ethics program at the Center. She is a Certified Information Privacy Professional, and her work addresses issues ranging from privacy and data ethics to social media’s impact on society, from the digital divide to the ethics of encryption, and the ethics of AI (she is a member of the Partnership on AI's Working Group on Fair, Transparent, and Accountable AI). Her writing has appeared in publications including The Atlantic, USA Today, MarketWatch, Slate, the San Francisco Chronicle, and Recode, and she has authored and co-authored a variety of teaching materials, including the Ethics in Technology Practice compendium.
Erick Ramirez is Associate Professor of Philosophy at Santa Clara University. He is interested in all things moral psychology (moral judgment, sentimentalism, emotion, and psychopathology), and his current research centers on the ethics of virtual reality. He is especially interested in the use of VR for experiments, empathy enhancement, and behavioral modification, and is developing virtual reality modules of classic thought experiments. He is the author of The Ethics of Virtual and Augmented Reality: Building Worlds, which was published in 2021.
John P. Sullins is Professor of Philosophy at Sonoma State University and the director of programming for the university’s Center for Ethics, Law, and Society (CELS). He has authored publications on, among other topics, the ethics of autonomous weapons systems, self-driving cars, affective robotics, and the design of autonomous ethical agents. He is involved in industry and government consultation involving ethical practices in technology design, and is the co-author of Great Philosophical Objections to Artificial Intelligence: The History and Legacy of the AI Wars (Bloomsbury Press, 2021).
This project was made possible through the support of a grant from Templeton World Charity Foundation, Inc. (funder DOI 501100011730). The opinions expressed in this event are those of the speakers and do not necessarily reflect the views of Templeton World Charity Foundation, Inc.