Markkula Center for Applied Ethics

The Ethics of AI Applications for Mental Health Care


Thomas G. Plante, PhD, ABPP

ismagilov/Getty Images

Thomas Plante (@ThomasPlante), Augustin Cardinal Bea, SJ professor of psychology at Santa Clara University, is a faculty scholar with the Markkula Center for Applied Ethics and an adjunct clinical professor of psychiatry at Stanford University School of Medicine. Views are his own.

Technological advances and innovations, along with their implications for daily life in our society, can be head-spinning and jaw-dropping. For those of us of a certain age, the development and implementation of these technological advances are simply unimaginable. As my wife and I often say to each other, “It isn’t our world any longer.” Here in Silicon Valley, it seems that everyone is somehow involved with technology in one way or another, and these changes occur very quickly. Artificial intelligence, in particular, seems to be a new and game-changing application in so many areas of our lives. As a clinical psychologist in practice, as well as a psychology professor who teaches courses on the diagnosis and treatment of psychopathology, I have been especially interested in and intrigued by the application of artificial intelligence to mental health-related treatment.

There are now countless applications readily available to help users with anxiety, depression, stress, and many other mental health challenges. These new applications include Wysa, Woebot, Elomia, Youper, Koko, Replika, and many more. Do these new applications work as well as, or perhaps even better than, traditional in-person psychotherapy provided by trained and licensed mental health professionals? Are there both positive and negative unintended consequences of using these new applications? What are the ethical issues involved with these new services?

Research is in its infancy since these services are generally brand new, and thus more time is needed to conduct, process, and publish the much-needed large-scale randomized clinical trials in peer-reviewed professional outlets to examine their effectiveness. That being said, there are several critically important ethical issues to consider now when reflecting on this brave new world of artificial intelligence-directed psychotherapy.

First, talented and enthusiastic engineers may work on these new products and services to treat mental health conditions without adequate consultation from licensed mental health professionals or from their professional organizations (e.g., the American Psychological Association). As chair of our university’s Institutional Review Board (IRB) for the past decade, I have seen the board closely examine the ethical issues associated with all research conducted here at Santa Clara University. In this role, I have noticed that too often well-meaning and motivated computer scientists and engineers develop products and services without appropriate professional consultation, and they need to be frequently reminded that providing any type of mental health service requires adequate and appropriate training, experience, and licensing in order to protect the public from potential harm. For example, you certainly would not want someone to perform surgery, pilot an aircraft, or represent you in court without adequate training, experience, and licensure. Likewise, you do not want to be treated for anxiety, depression, substance abuse, suicidality, or other psychological, psychiatric, relationship, or behavioral problems by someone without the competence to do so, secured through adequate training, experience, and licensure. This issue is easily resolved by having appropriate mental health professionals participate in research and development as consultants alongside the technology experts.

Second, mental health services need to maintain strict confidentiality. All client information must be protected according to well-established standards set by professional ethics codes and by both state and federal law. Knowing exactly where one’s highly personal information is going, who has access to it, how it is stored, and how it might be used for unrelated and profit-making purposes is very important. Users should receive full informed consent, in language that is understandable to them, before services are agreed to and provided. This involves much more than simply clicking a box saying that you consent to use a product without understanding what you are really consenting to, as we so often do with other software applications and downloads.

Third, do mental health applications using artificial intelligence actually work, and for what types of problems and users? The preliminary research currently available suggests that these applications might be helpful for mild to moderate symptoms of common problems such as stress, anxiety, and depression, but they may not be helpful or appropriate for more severe symptoms or more significant psychopathology (e.g., schizophrenia, bipolar disorder, personality disorders, addictions). Additionally, it is important to avoid slick marketing that promises treatment success without adequate evidence of outcomes, especially when the health and well-being of consumers is at stake.

Although I am highlighting potential problems with artificial intelligence-based mental health applications, to be fair, there is a critical need for mental health services that is not being met today. We are in the middle of a mental illness tsunami, with a dramatically increasing share of the population struggling with mental health problems. In fact, the United States Surgeon General recently issued an unprecedented advisory1 about the remarkable increase in anxiety, depression, suicidality, and substance abuse in the population, and most especially among youth. There are not enough licensed mental health professionals available to meet the greatly increased need for psychotherapy services. Additionally, professional in-person mental health services can be very expensive and not realistically affordable for many, even if they have health insurance to cover at least some of the costs. Finally, in-person psychotherapy is often inconvenient, involving the time and costs associated with travel, parking, and taking time off from work and family duties.

These demands on time and money are often a further obstacle to clients receiving the care that they need. Mental health applications using artificial intelligence could therefore become a boon, treating more people in a more affordable and convenient way. There is likely a place for these new technologies, but ethically we need to be sure that adequate research is conducted to examine their effectiveness, for what types of problems, and with what types of users. We must assure the public that their mental health concerns will be treated with services based on solid empirical evidence and best clinical practices, not on marketing hype or the love of all things shiny, new, and techie. Mental health challenges can be a matter of life and death. We should treat them with this in mind, moving forward with careful thought and humility, with evidence-based best practices, and with the health and welfare of those who struggle as our top priority.

1Office of the Surgeon General (2021). Protecting Youth Mental Health: The US Surgeon General’s Advisory. Washington, DC: Author. 

 

Feb 6, 2023