Exploring the Credibility of Large Language Models for Mental Health Support: Protocol for a Scoping Review.

Dipak Gautam, Philipp Kellmeyer
{"title":"Exploring the Credibility of Large Language Models for Mental Health Support: Protocol for a Scoping Review.","authors":"Dipak Gautam, Philipp Kellmeyer","doi":"10.2196/62865","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>The rapid evolution of large language models (LLMs), such as Bidirectional Encoder Representations from Transformers (BERT; Google) and GPT (OpenAI), has introduced significant advancements in natural language processing. These models are increasingly integrated into various applications, including mental health support. However, the credibility of LLMs in providing reliable and explainable mental health information and support remains underexplored.</p><p><strong>Objective: </strong>This scoping review systematically maps the factors influencing the credibility of LLMs in mental health support, including reliability, explainability, and ethical considerations. The review is expected to offer critical insights for practitioners, researchers, and policy makers, guiding future research and policy development. These findings will contribute to the responsible integration of LLMs into mental health care, with a focus on maintaining ethical standards and user trust.</p><p><strong>Methods: </strong>This review follows PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines and the Joanna Briggs Institute (JBI) methodology. Eligibility criteria include studies that apply transformer-based generative language models in mental health support, such as BERT and GPT. Sources include PsycINFO, MEDLINE via PubMed, Web of Science, IEEE Xplore, and ACM Digital Library. A systematic search of studies from 2019 onward will be conducted and updated until October 2024. Data will be synthesized qualitatively. The Population, Concept, and Context framework will guide the inclusion criteria. Two independent reviewers will screen and extract data, resolving discrepancies through discussion. Data will be synthesized and presented descriptively.</p><p><strong>Results: </strong>As of September 2024, this study is currently in progress, with the systematic search completed and the screening phase ongoing. We expect to complete data extraction by early November 2024 and synthesis by late November 2024.</p><p><strong>Conclusions: </strong>This scoping review will map the current evidence on the credibility of LLMs in mental health support. It will identify factors influencing the reliability, explainability, and ethical considerations of these models, providing insights for practitioners, researchers, policy makers, and users. 
These findings will fill a critical gap in the literature and inform future research, practice, and policy development, ensuring the responsible integration of LLMs in mental health services.</p><p><strong>International registered report identifier (irrid): </strong>DERR1-10.2196/62865.</p>","PeriodicalId":14755,"journal":{"name":"JMIR Research Protocols","volume":"14 ","pages":"e62865"},"PeriodicalIF":1.4000,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11822324/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"JMIR Research Protocols","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2196/62865","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}

Abstract

Background: The rapid evolution of large language models (LLMs), such as Bidirectional Encoder Representations from Transformers (BERT; Google) and GPT (OpenAI), has introduced significant advancements in natural language processing. These models are increasingly integrated into various applications, including mental health support. However, the credibility of LLMs in providing reliable and explainable mental health information and support remains underexplored.

Objective: This scoping review systematically maps the factors influencing the credibility of LLMs in mental health support, including reliability, explainability, and ethical considerations. The review is expected to offer critical insights for practitioners, researchers, and policy makers, guiding future research and policy development. These findings will contribute to the responsible integration of LLMs into mental health care, with a focus on maintaining ethical standards and user trust.

Methods: This review follows the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines and the Joanna Briggs Institute (JBI) methodology. Eligible studies are those that apply transformer-based generative language models, such as BERT and GPT, in mental health support. Sources include PsycINFO, MEDLINE via PubMed, Web of Science, IEEE Xplore, and ACM Digital Library. A systematic search of studies published from 2019 onward will be conducted and updated until October 2024. The Population, Concept, and Context framework will guide the inclusion criteria. Two independent reviewers will screen studies and extract data, resolving discrepancies through discussion. Data will be synthesized qualitatively and presented descriptively.
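The abstract does not reproduce the protocol's full search strategy. As a minimal, hypothetical sketch only, the snippet below shows how a date-restricted MEDLINE/PubMed search of this kind could be run against the NCBI E-utilities API; the Boolean terms and parameter choices are illustrative assumptions, not the protocol's actual search string.

```python
# Hypothetical sketch of a date-restricted PubMed (MEDLINE) search via the
# NCBI E-utilities esearch endpoint, mirroring the protocol's 2019-to-October-2024
# window. The Boolean query is illustrative only and is NOT the protocol's
# actual search strategy.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Illustrative query: transformer-based language models AND mental health support
query = (
    '("large language model*" OR GPT OR BERT OR "generative language model*") '
    'AND ("mental health" OR psychiatr* OR "psychological support")'
)

params = {
    "db": "pubmed",
    "term": query,
    "datetype": "pdat",    # filter on publication date
    "mindate": "2019",     # studies from 2019 onward
    "maxdate": "2024/10",  # search updated until October 2024
    "retmode": "json",
    "retmax": 200,
}

resp = requests.get(ESEARCH_URL, params=params, timeout=30)
resp.raise_for_status()
result = resp.json()["esearchresult"]

print(f"Records found: {result['count']}")
print("First PMIDs:", result["idlist"][:10])
```

In practice, equivalent queries would be adapted to the syntax of each of the five named databases and the combined results deduplicated before the two-reviewer screening described above.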

Results: As of September 2024, this study is currently in progress, with the systematic search completed and the screening phase ongoing. We expect to complete data extraction by early November 2024 and synthesis by late November 2024.

Conclusions: This scoping review will map the current evidence on the credibility of LLMs in mental health support. It will identify factors influencing the reliability, explainability, and ethical considerations of these models, providing insights for practitioners, researchers, policy makers, and users. These findings will fill a critical gap in the literature and inform future research, practice, and policy development, ensuring the responsible integration of LLMs in mental health services.

International Registered Report Identifier (IRRID): DERR1-10.2196/62865.
