Markkula Center for Applied Ethics

Crisis Data: An Ethics Case Study

"Depression please cut to the chase." by darcyadelaide is marked with CC BY 2.0.

An AI Ethics Case Study

Irina Raicu

"Depression please cut to the chase." by darcyadelaide is marked with CC BY 2.0.

In January 2022, Politico published an article about Crisis Text Line, a nonprofit that offers support via text message to people going through mental health crises. For years, the nonprofit had been building a database of the messages exchanged, using that data to triage incoming requests for help and to train its volunteers to better manage difficult conversations with people in great distress. In a 2020 report, the nonprofit (which launched in 2013) stated that “[b]y implementing data science tools and machine learning from day one, [it had] created the largest mental health dataset in the world.” A report section titled “Data Philosophy” added, “we share data to support smarter research, policy, and community organizing. Unlike other large-scale data sets on mental health and crisis, our data has incredible volume, velocity, and variety.”

As Politico reported, in 2018 the nonprofit also launched a for-profit spinoff called Loris.ai, which planned to use Crisis Text Line data (which the organization said was anonymized) to develop insights that would then be incorporated into customer-support software products. A portion of the profits from that software was to be shared with Crisis Text Line.

The Politico article sparked a great deal of criticism of that data-sharing agreement. Some critics were concerned that the data might still be traceable back to individuals who could then be stigmatized or otherwise harmed by being “outed” as dealing with severe mental health issues. Others argued that even anonymized data should not be used in ways that the people who texted in would not have anticipated—in other words, for purposes distinct from helping them directly. When the organization responded that its data-sharing agreement was disclosed to users (whose first text is answered by an automated reply that reads “By texting further with us, you agree to our Terms” and links to a 50-page agreement), critics questioned whether the mere act of users following through, under such circumstances, could be deemed to be “actual meaningful, emotional, fully understood consent.”

Some Crisis Text Line volunteers were deeply concerned by this secondary use of the data the nonprofit had collected, and raised those concerns both internally and externally. After a petition was organized demanding an end to the data-sharing agreement, other volunteers expressed shock that they had not even been aware of the for-profit effort.

A few days after the Politico article was published, Crisis Text Line announced that it was ending the data-sharing agreement with Loris.ai. In a subsequent personal blog post responding to the controversy, researcher danah boyd, who had been a founding board member of CTL and had served as its board chair for some time, explained her thinking and her actions regarding the controversial arrangement. “Since my peers are asking for this to be a case study in tech ethics, I am going into significant detail,” she wrote. 

Part of the post highlights one of the questions that arose early in the development of the organization: “could we construct our training so that all counselors got to learn from the knowledge developed by those who came before them? This would mean using texter data for a purpose that went beyond the care and support of that individual.” boyd writes,

Yes, the Terms of Service allowed this, but this is not just a legal question; it’s an ethical question. Given the trade-offs, I made a judgment call early on that not only was using texter data to strengthen training of counselors without their explicit consent ethical, but that to not do this would be unethical. Our mission is clear: help people in crisis. To do this, we need to help our counselors better serve texters. We needed to help counselors learn and grow and develop skills with which they can help others.

The post continues, discussing additional challenges related to scaling access to the service, triage of incoming texts, the need for funding, and the desire to facilitate important research. After noting that she struggled with the question of sharing data with the for-profit entity, boyd states that she ultimately did vote in favor of it. She adds, “Knowing what I know now, I would not have.”

The blog post ends with a call for input: “I also have some honest questions,” boyd writes, “for all of you who are frustrated, angry, disappointed, or simply unsure about us.” Among those questions: “What is the best way to balance the implicit consent of users in crisis with other potentially beneficial uses of data which they likely will not have intentionally consented to but which can help them or others?” She also asks, “Is there any structure in which lessons learned from a non-profit service provider can be transferred to a for-profit entity? Also, how might this work with partner organizations, foundations, government agencies, sponsors, or subsidiaries, and are the answers different?”

Discussion questions

Before answering these questions, please review the Markkula Center for Applied Ethics’ Framework for Ethical Decision-Making, which details the ethical lenses discussed below.

  • Who are the stakeholders involved in this case?
  • Consider the case through the lenses of rights, justice, utilitarianism, the common good, virtue, and care ethics; what aspects of the ethical landscape do they highlight?
  • What would you say in answer to the questions posed by danah boyd, quoted above?

 

Mar 15, 2022
Internet Ethics Stories