Julian Dreiman ’21 is a 2020-21 Hackworth Fellow at the Markkula Center for Applied Ethics. Views are his own.
This academic year I have worked with Dr. Brian Green as the Technology Ethics Hackworth Fellow. Entering the fellowship with a background in Artificial Intelligence (AI) ethics, I was interested in diving more deeply into AI and its philosophical and practical challenges. I worked on two major projects: (a) gauging students' opinions on issues relating to AI ethics and mass personal data collection, and (b) researching care robots for older adults and the question of deception.
Questions in AI ethics cannot be understood without first addressing questions of data (the gasoline that powers AI's engine). To that end, I wrote a survey whose first section focused on student opinion of personal data collection. The data from the survey show that many students are concerned about the amount of personal data being collected and the fact that much of that collection occurs without their explicit knowledge or consent. Furthermore, students expressed a desire for the federal government to have greater involvement in regulating personal data collection, but at the same time reported low levels of trust in the federal government when it comes to matters of digital privacy and personal data.

After gauging students' opinions of personal data, I asked for their opinions on questions of AI ethics. Students had strong and clear opinions about how AI should be developed. For example, an overwhelming majority argued that AI models should strive to create equity over equality, and that AI should not be implemented in settings that require human, interpersonal skills, such as grading student papers or evaluating teachers' quality and effectiveness. Interestingly, students agreed that AI will generally help, not hurt, society over the next twenty years.
Understanding how the student body thinks about personal data collection and AI ethics is critical on many fronts. First, university students are at a formative stage in their lives, and asking these sorts of questions will ideally assist them in developing a strong moral compass. Second, and relatedly, students today will most likely be in the workforce longer than any previous generation, and it is therefore imperative that they are aware of certain ethical challenges before confronting them in the real world. Third and finally, students today are voters tomorrow and will shape how public policy affects all members of society. Being clear in their convictions and opinions about these difficult topics will assist them when it comes time to vote on important laws and regulations.
In addition to surveying the student body and analyzing the results, Dr. Green and I researched the effects that AI and robotics will have on older adult caregiving and aging. A caregiving void already exists: many older adults do not receive adequate care at adequate intervals. Even more troubling is the coming "silver tsunami" of baby boomers who will soon flood the caregiving market with demand. To address the current void and preempt its growth, many corporations and startups have begun to create robots (carebots) to care for the elderly. Dr. Green and I specifically researched how carebots and their creators ought to address the question of deception. We asked, "Should a carebot ever deceive an older adult?" and "If so, what circumstances necessitate it doing so?" The conclusion we reached is that carebots should only rarely use deception, and only if doing so causes more good than harm and the deception concerns a medically non-important part of the older adult's care.
The question of deception and older adult care robots is paramount because carebots will, without a doubt, be part of most caregiving routines in the future. What makes this question interesting is that a technology such as a carebot is, in and of itself, neither bad nor dangerous; it is how people choose to use that technology that can result in negative consequences. Looking into difficult questions of technology and aging before carebots become commonplace can help us anticipate potential issues and correct them before it is too late. Lastly, as technology frees us from more time-consuming tasks, it will create more free time and energy to devote to other important and creative tasks. I believe that carebots can help older adults age with more dignity and allow their families to spend more quality time with them.
My time as a Hackworth Fellow has given me the time and opportunity to explore complex and interesting questions that I believe will have an important impact on all members of society. Technology pervades all aspects of life, and working to better understand it helps ensure that it generates the most positive outcomes for the most people possible.
About Julian Dreiman
"My academic interests lie at the intersection of technology, specifically artificial intelligence, and society. Through my Hackworth Fellowship, I hope to better understand how AI affects our daily lives and how to create better AI models to protect personal privacy and promote prosperity and equity for all. What excites me about technology is the immense impact that it has on every individual and the potential it holds for promoting human flourishing. When unregulated and under-analyzed, however, technology can also bring great harm to individuals and society. After my time at SCU, I hope to pursue graduate studies in philosophy, with stops along the way to work in business ethics and as a ski patroller.
I enjoy trail running, rock climbing, backpacking, and skiing. I also love to cook and host dinner parties for friends and family. I’m an avid reader of The New York Times and a variety of books, including those by one of my favorite authors, David Sedaris."