Maybe AI Chatbots Won't Replace Ombuds
June 23, 2025
A new Stanford University computer science study critically examined the growing use of artificial intelligence for mental health therapy. (A 2024 poll, for example, found that 55% of young adults are comfortable talking about mental health concerns with a confidential AI chatbot.) The Stanford researchers, however, found that AI chatbots displayed dangerous tendencies to express stigma, encourage delusions, and respond inappropriately in critical moments. This seems to confirm anecdotal reports about AI chatbots distorting reality.
The Stanford research began by examining the literature for what constitutes best practices for mental health therapists. The team concluded that therapists should ideally provide empathy, avoid stigma, give tasks to complete between sessions, discourage self-harm, and form an "alliance" with the patient. (These are remarkably similar to the work of an Ombuds with a visitor.) After conducting several experiments, the researchers found that AI chatbots: "1) express stigma toward those with mental health conditions and 2) respond inappropriately to certain common (and critical) conditions in naturalistic therapy settings—e.g., LLMs encourage clients' delusional thinking, likely due to their sycophancy." While they did not rule out a role for AI chatbots in therapy, they discouraged the replacement of human therapists. (arXiv:2504.18412; YouGov Survey; NY Times.)

Similar research out of USC: https://aclanthology.org/2025.findings-naacl.430.pdf