Detection of suicidality from medical text using privacy-preserving large language models.

IF 8.7 | Tier 1 (Medicine) | Q1 (Psychiatry)
Isabella Catharina Wiest, Falk Gerrik Verhees, Dyke Ferber, Jiefu Zhu, Michael Bauer, Ute Lewitzka, Andrea Pfennig, Pavol Mikolas, Jakob Nikolas Kather
{"title":"Detection of suicidality from medical text using privacy-preserving large language models.","authors":"Isabella Catharina Wiest, Falk Gerrik Verhees, Dyke Ferber, Jiefu Zhu, Michael Bauer, Ute Lewitzka, Andrea Pfennig, Pavol Mikolas, Jakob Nikolas Kather","doi":"10.1192/bjp.2024.134","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Attempts to use artificial intelligence (AI) in psychiatric disorders show moderate success, highlighting the potential of incorporating information from clinical assessments to improve the models. This study focuses on using large language models (LLMs) to detect suicide risk from medical text in psychiatric care.</p><p><strong>Aims: </strong>To extract information about suicidality status from the admission notes in electronic health records (EHRs) using privacy-sensitive, locally hosted LLMs, specifically evaluating the efficacy of Llama-2 models.</p><p><strong>Method: </strong>We compared the performance of several variants of the open source LLM Llama-2 in extracting suicidality status from 100 psychiatric reports against a ground truth defined by human experts, assessing accuracy, sensitivity, specificity and F1 score across different prompting strategies.</p><p><strong>Results: </strong>A German fine-tuned Llama-2 model showed the highest accuracy (87.5%), sensitivity (83.0%) and specificity (91.8%) in identifying suicidality, with significant improvements in sensitivity and specificity across various prompt designs.</p><p><strong>Conclusions: </strong>The study demonstrates the capability of LLMs, particularly Llama-2, in accurately extracting information on suicidality from psychiatric records while preserving data privacy. This suggests their application in surveillance systems for psychiatric emergencies and improving the clinical management of suicidality by improving systematic quality control and research.</p>","PeriodicalId":9259,"journal":{"name":"British Journal of Psychiatry","volume":null,"pages":null},"PeriodicalIF":8.7000,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"British Journal of Psychiatry","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1192/bjp.2024.134","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHIATRY","Score":null,"Total":0}
Citations: 0

Abstract

Background: Attempts to use artificial intelligence (AI) in psychiatric disorders show moderate success, highlighting the potential of incorporating information from clinical assessments to improve the models. This study focuses on using large language models (LLMs) to detect suicide risk from medical text in psychiatric care.

Aims: To extract information about suicidality status from the admission notes in electronic health records (EHRs) using privacy-sensitive, locally hosted LLMs, specifically evaluating the efficacy of Llama-2 models.
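To picture the extraction step described here, the sketch below asks a locally hosted Llama-2 chat model to read an admission note and return a yes/no judgement on documented suicidality, so that no clinical text leaves the local machine. This is a minimal illustration only: the model path, prompt wording and binary label set are assumptions, not the authors' exact setup or the fine-tuned German variant they evaluated.

```python
# Minimal sketch of local, privacy-preserving extraction with a Llama-2 chat model.
# MODEL_ID, the prompt text and the yes/no label scheme are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"  # any locally stored Llama-2 variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

def classify_suicidality(admission_note: str) -> str:
    """Ask the local model whether the note documents current suicidality."""
    prompt = (
        "[INST] You are a clinical documentation assistant. "
        "Read the admission note and answer with exactly one word, "
        "'yes' or 'no': does the note describe current suicidality?\n\n"
        f"Note:\n{admission_note} [/INST]"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=5, do_sample=False)
    answer = tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return "yes" if "yes" in answer.lower() else "no"
```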

Method: We compared the performance of several variants of the open source LLM Llama-2 in extracting suicidality status from 100 psychiatric reports against a ground truth defined by human experts, assessing accuracy, sensitivity, specificity and F1 score across different prompting strategies.
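To make the evaluation concrete, the sketch below scores a batch of model outputs against expert ground-truth labels using the metrics named in the abstract (accuracy, sensitivity, specificity, F1). The example labels are invented for illustration; only the metric definitions are taken from the paper.

```python
# Scoring model predictions against the expert-annotated ground truth.
# The label lists are hypothetical; the metrics match those reported in the abstract.
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

ground_truth = ["yes", "no", "yes", "no", "no"]   # expert labels (hypothetical)
predictions  = ["yes", "no", "no",  "no", "no"]   # model outputs for the same notes

# With labels=["no", "yes"], ravel() yields TN, FP, FN, TP in that order.
tn, fp, fn, tp = confusion_matrix(ground_truth, predictions, labels=["no", "yes"]).ravel()

accuracy    = accuracy_score(ground_truth, predictions)
sensitivity = tp / (tp + fn)   # recall for the "suicidality present" class
specificity = tn / (tn + fp)
f1          = f1_score(ground_truth, predictions, pos_label="yes")

print(f"accuracy={accuracy:.3f} sensitivity={sensitivity:.3f} "
      f"specificity={specificity:.3f} F1={f1:.3f}")
```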

Results: A German fine-tuned Llama-2 model showed the highest accuracy (87.5%), sensitivity (83.0%) and specificity (91.8%) in identifying suicidality, with significant improvements in sensitivity and specificity across various prompt designs.

Conclusions: The study demonstrates the capability of LLMs, particularly Llama-2, in accurately extracting information on suicidality from psychiatric records while preserving data privacy. This suggests their application in surveillance systems for psychiatric emergencies and in improving the clinical management of suicidality through stronger systematic quality control and research.

Source journal

British Journal of Psychiatry (Medicine - Psychiatry)
CiteScore: 13.70
Self-citation rate: 1.90%
Articles per year: 184
Review time: 4-8 weeks
Journal description: The British Journal of Psychiatry (BJPsych) is a renowned international journal that undergoes rigorous peer review. It covers various branches of psychiatry, with a specific focus on the clinical aspects of each topic. Published monthly by the Royal College of Psychiatrists, this journal is dedicated to enhancing the prevention, investigation, diagnosis, treatment, and care of mental illness worldwide. It also strives to promote global mental health. In addition to featuring authoritative original research articles from across the globe, the journal includes editorials, review articles, commentaries on contentious issues, a comprehensive book review section, and a dynamic correspondence column. BJPsych is an essential source of information for psychiatrists, clinical psychologists, and other professionals interested in mental health.