Psychological First Aid by AI: Proof-of-Concept and Comparative Performance of ChatGPT-4 and Gemini in Different Disaster Scenarios

Jun Tat Tan, Rick Kye Gan, Carlos Alsua, Mark Peterson, Ricardo Úbeda Sales, Ann Zee Gan, José Antonio Cernuda Martínez, Pedro Arcos González

Journal of Clinical Psychology, published 2025-05-09. DOI: 10.1002/jclp.23808
Citations: 0
Abstract
Objective: This study aimed to evaluate the performance and proof-of-concept of psychological first aid (PFA) provided by two AI chatbots, ChatGPT-4 and Gemini.
Methods: A mixed-method cross-sectional analysis was conducted using validated PFA scenarios from the Institute for Disaster Mental Health. Five scenarios representing different disaster contexts were selected. Data were collected by prompting both chatbots to perform PFA based on these scenarios. Quantitative performance was assessed using the PFA principles of Look, Listen, and Link, with scores assigned using IFRC's PFA scoring template. Qualitative analysis involved content analysis for AI hallucination, coding responses, and thematic analysis to identify key subthemes and themes.
Results: ChatGPT-4 outperformed Gemini, achieving an overall score of 90% (CI: 86%-93%) compared to Gemini's 73% (CI: 67%-79%), a statistically significant difference (p = 0.01). In the Look domain, ChatGPT-4 scored higher (p = 0.02), while both performed equally in the Listen and Link domains. The content analysis of AI hallucinations revealed a relative frequency of 18.4% (CI: 12%-25%) for ChatGPT-4 versus 50.0% (CI: 26.6%-71.3%) for Gemini (p < 0.01). Five themes emerged from the qualitative analysis: Look, Listen, Link, Professionalism, and Mental Health and Psychosocial Support.
Conclusion: ChatGPT-4 demonstrated superior performance in providing PFA compared to Gemini. While AI chatbots show potential as supportive tools for PFA providers, concerns regarding AI hallucinations highlight the need for cautious implementation. Further research is necessary to enhance the reliability and safety of AI-assisted PFA, particularly by eliminating hallucinations, and to integrate the current advances in voice-based chatbot functionality.
Journal Introduction:
Founded in 1945, the Journal of Clinical Psychology is a peer-reviewed forum devoted to research, assessment, and practice. Published eight times a year, the Journal includes research studies; articles on contemporary professional issues; single case research; brief reports (including dissertations in brief); notes from the field; and news and notes. In addition to papers on psychopathology, psychodiagnostics, and the psychotherapeutic process, the journal welcomes articles focusing on psychotherapy effectiveness research, psychological assessment and treatment matching, clinical outcomes, clinical health psychology, and behavioral medicine.