Role of large language models in mental health research: an international survey of researchers' practices and perspectives

IF 4.9 · PSYCHIATRY
Jake Linardon, Mariel Messer, Cleo Anderson, Claudia Liu, Zoe McClure, Hannah K Jarman, Simon B Goldberg, John Torous
{"title":"大语言模型在心理健康研究中的作用:研究人员的实践和观点的国际调查。","authors":"Jake Linardon, Mariel Messer, Cleo Anderson, Claudia Liu, Zoe McClure, Hannah K Jarman, Simon B Goldberg, John Torous","doi":"10.1136/bmjment-2025-301787","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Large language models (LLMs) offer significant potential to streamline research workflows and enhance productivity. However, limited data exist on the extent of their adoption within the mental health research community.</p><p><strong>Objective: </strong>We examined how LLMs are being used in mental health research, the types of tasks they support, barriers to their adoption and broader attitudes towards their integration.</p><p><strong>Methods: </strong>714 mental health researchers from 42 countries and various career stages (from PhD student, to early career researcher, to Professor) completed a survey assessing LLM-related practices and perspectives.</p><p><strong>Findings: </strong>496 (69.5%) reported using LLMs to assist with research, with 94% indicating use of ChatGPT. The most common applications were for proofreading written work (69%) and refining or generating code (49%). LLM use was more prevalent among early career researchers. Common challenges reported by users included inaccurate responses (78%), ethical concerns (48%) and biased outputs (27%). However, many users indicated that LLMs improved efficiency (73%) and output quality (44%). Reasons for non-use were concerns with ethical issues (53%) and accuracy of outputs (50%). Most agreed that they wanted more training on responsible use (77%), that researchers should be required to disclose use of LLMs in manuscripts (79%) and that they were concerned about LLMs affecting how their work is evaluated (60%).</p><p><strong>Conclusion: </strong>While LLM use is widespread in mental health research, key barriers and implementation challenges remain.</p><p><strong>Clinical implications: </strong>LLMs may streamline mental health research processes, but clear guidelines are needed to support their ethical and transparent use across the research lifecycle.</p>","PeriodicalId":72434,"journal":{"name":"BMJ mental health","volume":"28 1","pages":""},"PeriodicalIF":4.9000,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12164621/pdf/","citationCount":"0","resultStr":"{\"title\":\"Role of large language models in mental health research: an international survey of researchers' practices and perspectives.\",\"authors\":\"Jake Linardon, Mariel Messer, Cleo Anderson, Claudia Liu, Zoe McClure, Hannah K Jarman, Simon B Goldberg, John Torous\",\"doi\":\"10.1136/bmjment-2025-301787\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Large language models (LLMs) offer significant potential to streamline research workflows and enhance productivity. 
However, limited data exist on the extent of their adoption within the mental health research community.</p><p><strong>Objective: </strong>We examined how LLMs are being used in mental health research, the types of tasks they support, barriers to their adoption and broader attitudes towards their integration.</p><p><strong>Methods: </strong>714 mental health researchers from 42 countries and various career stages (from PhD student, to early career researcher, to Professor) completed a survey assessing LLM-related practices and perspectives.</p><p><strong>Findings: </strong>496 (69.5%) reported using LLMs to assist with research, with 94% indicating use of ChatGPT. The most common applications were for proofreading written work (69%) and refining or generating code (49%). LLM use was more prevalent among early career researchers. Common challenges reported by users included inaccurate responses (78%), ethical concerns (48%) and biased outputs (27%). However, many users indicated that LLMs improved efficiency (73%) and output quality (44%). Reasons for non-use were concerns with ethical issues (53%) and accuracy of outputs (50%). Most agreed that they wanted more training on responsible use (77%), that researchers should be required to disclose use of LLMs in manuscripts (79%) and that they were concerned about LLMs affecting how their work is evaluated (60%).</p><p><strong>Conclusion: </strong>While LLM use is widespread in mental health research, key barriers and implementation challenges remain.</p><p><strong>Clinical implications: </strong>LLMs may streamline mental health research processes, but clear guidelines are needed to support their ethical and transparent use across the research lifecycle.</p>\",\"PeriodicalId\":72434,\"journal\":{\"name\":\"BMJ mental health\",\"volume\":\"28 1\",\"pages\":\"\"},\"PeriodicalIF\":4.9000,\"publicationDate\":\"2025-06-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12164621/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"BMJ mental health\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1136/bmjment-2025-301787\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"PSYCHIATRY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"BMJ mental health","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1136/bmjment-2025-301787","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"PSYCHIATRY","Score":null,"Total":0}
Citations: 0

Abstract

Background: Large language models (LLMs) offer significant potential to streamline research workflows and enhance productivity. However, limited data exist on the extent of their adoption within the mental health research community.

Objective: We examined how LLMs are being used in mental health research, the types of tasks they support, barriers to their adoption and broader attitudes towards their integration.

Methods: In total, 714 mental health researchers from 42 countries and across career stages (from PhD student to early career researcher to professor) completed a survey assessing LLM-related practices and perspectives.

Findings: Of these, 496 (69.5%) reported using LLMs to assist with research, with 94% indicating use of ChatGPT. The most common applications were proofreading written work (69%) and refining or generating code (49%). LLM use was more prevalent among early career researchers. Common challenges reported by users included inaccurate responses (78%), ethical concerns (48%) and biased outputs (27%). Nevertheless, many users indicated that LLMs improved their efficiency (73%) and output quality (44%). Reasons for non-use were concerns about ethical issues (53%) and the accuracy of outputs (50%). Most respondents agreed that they wanted more training on responsible use (77%), that researchers should be required to disclose use of LLMs in manuscripts (79%), and that they were concerned about LLMs affecting how their work is evaluated (60%).

Conclusion: While LLM use is widespread in mental health research, key barriers and implementation challenges remain.

Clinical implications: LLMs may streamline mental health research processes, but clear guidelines are needed to support their ethical and transparent use across the research lifecycle.
