The Stanford research began by examining the literature for what constitutes best practices for mental health therapists. The team concluded that therapists should ideally provide empathy, avoid stigma, give tasks to complete between sessions, discourage self-harm, and form an "alliance" with the patient. (These are remarkably similar to the work of an Ombuds with a visitor.) After conducting several experiments, the researchers found that AI chatbots: "1) express stigma toward those with mental health conditions and 2) respond inappropriately to certain common (and critical) conditions in naturalistic therapy settings—e.g., LLMs encourage clients' delusional thinking, likely due to their sycophancy." While they did not rule out a role for AI chatbots in therapy, they discouraged replacing human therapists. (arXiv:2504.18412; YouGov Survey; NY Times.)

Similar research out of USC: https://aclanthology.org/2025.findings-naacl.430.pdf