Google Updates Suicide, Self-Harm Safeguards in Gemini as AI Lawsuits Mount
Alphabet’s Google announced it will direct Gemini chatbot users to a support hotline if the conversation indicates a “potential crisis related to suicide or self-harm.”
Guadalupe Hayes-Mota, director of the bioethics program at Santa Clara University, wants to see proof that AI chatbot developers are using clinically validated guidelines for interactions where mental health care is an issue. “Who’s actually making the decision when the crisis pops up for the individual, and how is that being done?” he asked in comments quoted by KQED.
Apr 7, 2026