ChatGPT for mental health support: A systematic scoping review of human-computer interaction implications
Abstract
Large language models (LLMs) such as ChatGPT are increasingly used in mental health contexts, yet design-oriented syntheses remain limited. We conducted a PRISMA-aligned systematic scoping review focused specifically on ChatGPT (GPT-3.5/4+) in mental health-related use, including "in the wild" adoption by laypeople, training applications, and clinical-adjacent pilots. Searches of five databases (November 2022 to August 2025), supplemented by citation tracking and gray-literature screening, yielded 34 studies spanning randomized and non-randomized experiments, pilot trials, surveys/interviews, simulations, digital ethnography, and structured editorials. The evidence supports adjunct, not replacement, roles. In education and supervision, one randomized trial and a comparative supervision study show skill gains when practice is scaffolded with rubrics and human oversight, and expert raters judged trainee case conceptualizations acceptable. In clinical and clinical-adjacent contexts, positive signals include quality-of-life improvement in a small inpatient pilot, short-term anxiety reduction when the model provides empathetic feedback, and a clinical RCT (outside psychiatry) showing reduced anxiety and depression with a ChatGPT adjunct. Studies of public self-help use document appropriation of ChatGPT as a "digital therapist," with identified risks including privacy concerns, boundary violations, and over-reliance. Safety-critical tasks remain unreliable, showing under-identification of suicide risk, performance degradation as case complexity increases, and cultural-fit gaps. From these findings we derive human-computer interaction requirements: clear scope-of-use messaging, prompt scaffolding, human-in-the-loop oversight, privacy-preserving defaults, and explicit escalation and hand-off pathways.
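To make the derived requirements concrete, the following is a minimal Python sketch of how they could wrap a single chat turn. It is an illustrative assumption, not an implementation from any reviewed study: all names (ScopedChat, CRISIS_TERMS, stub_model) are hypothetical, and the keyword screen stands in for a real risk classifier, since the review finds that the model itself under-identifies suicide risk and therefore cannot be the escalation mechanism.

```python
# Hypothetical sketch of the review's HCI requirements as guardrails around
# one LLM chat turn. Names and thresholds are illustrative assumptions.

from dataclasses import dataclass, field

SCOPE_MESSAGE = (
    "I am an AI assistant, not a therapist. I can offer general "
    "information and coping strategies, but not diagnosis or treatment."
)

# Crude keyword screen standing in for a dedicated risk classifier;
# escalation runs BEFORE the model, never relying on the model itself.
CRISIS_TERMS = ("suicide", "kill myself", "self-harm", "end my life")


@dataclass
class ScopedChat:
    model_fn: callable                     # injected LLM call (stubbed below)
    history: list = field(default_factory=list)

    def respond(self, user_text: str) -> str:
        # Explicit escalation/hand-off pathway: checked first, deterministically.
        if any(term in user_text.lower() for term in CRISIS_TERMS):
            return ("It sounds like you may be in crisis. Please contact a "
                    "crisis line or emergency services; a human can help now.")

        # Privacy-preserving default: keep only a short rolling window and
        # never persist the transcript beyond the session object.
        self.history = self.history[-6:]
        self.history.append(("user", user_text))

        # Prompt scaffolding: pin scope-of-use and empathy instructions so the
        # model stays in an adjunct, non-clinical role.
        prompt = (
            f"System: {SCOPE_MESSAGE} Respond with empathy; do not diagnose.\n"
            + "\n".join(f"{role}: {text}" for role, text in self.history)
        )
        reply = self.model_fn(prompt)
        self.history.append(("assistant", reply))

        # A deployed system would add a human-in-the-loop hook here, e.g.
        # queueing flagged exchanges for clinician review.

        # Clear scope-of-use messaging accompanies every response.
        return f"{SCOPE_MESSAGE}\n\n{reply}"


def stub_model(prompt: str) -> str:
    """Placeholder for an actual LLM call."""
    return "That sounds difficult. Would a brief grounding exercise help?"


if __name__ == "__main__":
    chat = ScopedChat(model_fn=stub_model)
    print(chat.respond("I've been feeling anxious all week."))
```

The design choice worth noting is that the safety-critical paths (crisis escalation, scope messaging, transcript retention) sit outside the model call entirely, reflecting the review's finding that safety-critical judgments cannot be delegated to the LLM.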