Automated sepsis prediction from unstructured electronic health records using natural language processing: a retrospective cohort study.

Impact factor 4.4 · Q1, Health Care Sciences & Services
Lipi Mishra, Sowmya Muchukunte Ramaswamy, Broderick Ivan McCallum-Hee, Keaton Wright, Riley Croxford, Sunil Belur Nagaraj, Matthew Anstey
BMJ Health & Care Informatics, vol. 32, no. 1. Published 2025-09-14. DOI: 10.1136/bmjhci-2024-101354. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12434735/pdf/
Citations: 0

Abstract

Objective: Artificial intelligence (AI) holds promise for predicting sepsis. However, challenges remain in integrating AI, natural language processing (NLP) and free text data to enhance sepsis diagnosis at emergency department (ED) triage. This study aimed to evaluate the effectiveness of AI in improving sepsis diagnosis.

Methods: This retrospective cohort study analysed data from 134 266 patients admitted to the ED and subsequently hospitalised between 1 January 2016 and 31 December 2021. The data set comprised 10 variables and free-text triage comments, which underwent tokenisation and processing using a bag-of-words model. We evaluated four traditional NLP classifier models, including logistic regression, LightGBM, random forest and neural network. We also evaluated the performance of the BERT classifier. We used area under precision-recall curve (AUPRC) and area under the curve (AUC) as performance metrics.
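The bag-of-words step described above can be sketched as a standard text-classification pipeline. This is an illustrative reconstruction, not the authors' code: the toy triage comments, labels, and hyperparameters below are hypothetical, using scikit-learn's `CountVectorizer` for tokenisation and word counts feeding a random forest, the best-performing traditional classifier in the study.

```python
# Hypothetical sketch of a bag-of-words + random forest pipeline for
# triage free text (illustrative toy data, not study data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

triage_comments = [
    "fever rigors confusion low bp",
    "ankle sprain after fall",
    "productive cough temp 39 tachycardic",
    "minor laceration left hand",
]
sepsis_label = [1, 0, 1, 0]  # 1 = sepsis during admission (toy labels)

model = make_pipeline(
    CountVectorizer(),  # tokenisation + bag-of-words count matrix
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(triage_comments, sepsis_label)

# Probability of the positive (sepsis) class for a new triage note
probs = model.predict_proba(["elderly patient febrile hypotensive"])[:, 1]
```

In practice the study also combined the text features with 10 structured variables; a sketch of that would concatenate the count matrix with the tabular columns before fitting.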

Results: Random forest exhibited superior predictive performance with an AUPRC of 0.789 (95% CI: 0.7668 to 0.8018) and an AUC of 0.80 (95% CI: 0.7842 to 0.8173). Using raw text, the BERT model achieved an AUPRC of 0.7542 (95% CI: 0.7418 to 0.7741) and AUC of 0.7735 (95% CI: 0.7628 to 0.8017) for sepsis prediction. Key variables included ED treatment time, patient age, arrival-to-treatment time, Australasian Triage Scale and visit type.
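The two reported metrics can be computed directly from predicted probabilities; a minimal sketch with scikit-learn, using synthetic labels and scores rather than study data:

```python
# AUPRC (area under the precision-recall curve) and AUC (area under the
# ROC curve) on synthetic predictions - not study data.
from sklearn.metrics import average_precision_score, roc_auc_score

y_true = [0, 0, 1, 1, 0, 1]                 # 1 = sepsis
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]   # predicted probabilities

auprc = average_precision_score(y_true, y_score)
auc = roc_auc_score(y_true, y_score)
```

AUPRC is the more informative of the two when positives (sepsis cases) are rare, which is why the study reports both.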

Discussion: This study demonstrates the utility of AI, particularly random forest and BERT classifiers, for early sepsis detection in EDs using free-text patient concerns.

Conclusion: Incorporating free text into machine learning improved diagnosis and identified missed cases, enhancing sepsis prediction in the ED with an AI-powered clinical decision support system. Large, prospective studies are needed to validate these findings.


Source journal metrics: CiteScore 6.10 · self-citation rate 4.90% · 40 articles published · review time 18 weeks