Development and External Validation of a Detection Model to Retrospectively Identify Patients With Acute Respiratory Distress Syndrome.

Impact Factor 7.7 | CAS Tier 1 (Medicine) | JCR Q1, Critical Care Medicine
Elizabeth Levy, Dru Claar, Ivan Co, Barry D Fuchs, Jennifer Ginestra, Rachel Kohn, Jakob I McSparron, Bhavik Patel, Gary E Weissman, Meeta Prasad Kerlin, Michael W Sjoding
{"title":"Development and External Validation of a Detection Model to Retrospectively Identify Patients With Acute Respiratory Distress Syndrome.","authors":"Elizabeth Levy, Dru Claar, Ivan Co, Barry D Fuchs, Jennifer Ginestra, Rachel Kohn, Jakob I McSparron, Bhavik Patel, Gary E Weissman, Meeta Prasad Kerlin, Michael W Sjoding","doi":"10.1097/CCM.0000000000006662","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>The aim of this study was to develop and externally validate a machine-learning model that retrospectively identifies patients with acute respiratory distress syndrome (acute respiratory distress syndrome [ARDS]) using electronic health record (EHR) data.</p><p><strong>Design: </strong>In this retrospective cohort study, ARDS was identified via physician-adjudication in three cohorts of patients with hypoxemic respiratory failure (training, internal validation, and external validation). Machine-learning models were trained to classify ARDS using vital signs, respiratory support, laboratory data, medications, chest radiology reports, and clinical notes. The best-performing models were assessed and internally and externally validated using the area under receiver-operating curve (AUROC), area under precision-recall curve, integrated calibration index (ICI), sensitivity, specificity, positive predictive value (PPV), and ARDS timing.</p><p><strong>Patients: </strong>Patients with hypoxemic respiratory failure undergoing mechanical ventilation within two distinct health systems.</p><p><strong>Interventions: </strong>None.</p><p><strong>Measurements and main results: </strong>There were 1,845 patients in the training cohort, 556 in the internal validation cohort, and 199 in the external validation cohort. ARDS prevalence was 19%, 17%, and 31%, respectively. Regularized logistic regression models analyzing structured data (EHR model) and structured data and radiology reports (EHR-radiology model) had the best performance. During internal and external validation, the EHR-radiology model had AUROC of 0.91 (95% CI, 0.88-0.93) and 0.88 (95% CI, 0.87-0.93), respectively. Externally, the ICI was 0.13 (95% CI, 0.08-0.18). At a specified model threshold, sensitivity and specificity were 80% (95% CI, 75%-98%), PPV was 64% (95% CI, 58%-71%), and the model identified patients with a median of 2.2 hours (interquartile range 0.2-18.6) after meeting Berlin ARDS criteria.</p><p><strong>Conclusions: </strong>Machine-learning models analyzing EHR data can retrospectively identify patients with ARDS across different institutions.</p>","PeriodicalId":10765,"journal":{"name":"Critical Care Medicine","volume":" ","pages":""},"PeriodicalIF":7.7000,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Critical Care Medicine","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1097/CCM.0000000000006662","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CRITICAL CARE MEDICINE","Score":null,"Total":0}
引用次数: 0

Abstract

Objective: The aim of this study was to develop and externally validate a machine-learning model that retrospectively identifies patients with acute respiratory distress syndrome (ARDS) using electronic health record (EHR) data.

Design: In this retrospective cohort study, ARDS was identified via physician adjudication in three cohorts of patients with hypoxemic respiratory failure (training, internal validation, and external validation). Machine-learning models were trained to classify ARDS using vital signs, respiratory support, laboratory data, medications, chest radiology reports, and clinical notes. The best-performing models were then assessed during internal and external validation using the area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve, integrated calibration index (ICI), sensitivity, specificity, positive predictive value (PPV), and timing of ARDS identification.
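The abstract does not include implementation details, but a minimal sketch of the modeling approach it describes, regularized logistic regression over structured EHR variables combined with chest radiology report text, might look like the following. The feature names, preprocessing choices, and hyperparameters are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of an "EHR-radiology" classifier: L2-regularized logistic
# regression on structured EHR features plus TF-IDF features from chest
# radiology reports. Column names and settings are assumptions for illustration.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

STRUCTURED_COLS = ["spo2_min", "fio2_max", "peep_max", "lactate_max"]  # assumed features
TEXT_COL = "radiology_report_text"                                     # assumed column name

preprocess = ColumnTransformer([
    # Impute and scale structured vitals / respiratory-support / laboratory variables.
    ("structured", Pipeline([
        ("impute", SimpleImputer(strategy="median")),
        ("scale", StandardScaler()),
    ]), STRUCTURED_COLS),
    # Bag-of-words (unigram + bigram) representation of chest radiology reports.
    ("reports", TfidfVectorizer(ngram_range=(1, 2), min_df=5), TEXT_COL),
])

# Regularized logistic regression; C controls the strength of the L2 penalty.
ehr_radiology_model = Pipeline([
    ("features", preprocess),
    ("clf", LogisticRegression(penalty="l2", C=1.0, max_iter=5000)),
])

# df_train is assumed to hold one row per patient with the columns above plus a
# physician-adjudicated 0/1 `ards` label:
# ehr_radiology_model.fit(df_train[STRUCTURED_COLS + [TEXT_COL]], df_train["ards"])
```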

Patients: Patients with hypoxemic respiratory failure undergoing mechanical ventilation within two distinct health systems.

Interventions: None.

Measurements and main results: There were 1,845 patients in the training cohort, 556 in the internal validation cohort, and 199 in the external validation cohort; ARDS prevalence was 19%, 17%, and 31%, respectively. Regularized logistic regression models analyzing structured data alone (EHR model) and structured data plus radiology reports (EHR-radiology model) performed best. During internal and external validation, the EHR-radiology model had an AUROC of 0.91 (95% CI, 0.88-0.93) and 0.88 (95% CI, 0.87-0.93), respectively. In external validation, the ICI was 0.13 (95% CI, 0.08-0.18). At a specified model threshold, sensitivity and specificity were 80% (95% CI, 75%-98%), PPV was 64% (95% CI, 58%-71%), and the model identified patients a median of 2.2 hours (interquartile range, 0.2-18.6 hr) after they met Berlin ARDS criteria.
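The validation metrics reported above (AUROC, area under the precision-recall curve, ICI, and threshold-based sensitivity, specificity, and PPV) can be computed from out-of-sample predicted probabilities along the following lines. The lowess-based ICI estimate and the illustrative 0.5 threshold are assumptions; the study's exact calibration procedure and operating threshold may differ.

```python
# Hedged sketch of the discrimination, calibration, and threshold metrics
# described in the abstract, computed from held-out predicted probabilities.
import numpy as np
from sklearn.metrics import average_precision_score, confusion_matrix, roc_auc_score
from statsmodels.nonparametric.smoothers_lowess import lowess

def validation_metrics(y_true, y_prob, threshold=0.5):
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)

    auroc = roc_auc_score(y_true, y_prob)            # area under the ROC curve
    auprc = average_precision_score(y_true, y_prob)  # area under precision-recall curve

    # Integrated calibration index: mean |smoothed observed risk - predicted risk|,
    # here using a lowess-smoothed calibration curve (one common choice).
    smoothed = lowess(y_true, y_prob, frac=0.75, return_sorted=False)
    ici = float(np.mean(np.abs(smoothed - y_prob)))

    # Classification metrics at a fixed probability threshold (0.5 is assumed).
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "AUROC": auroc,
        "AUPRC": auprc,
        "ICI": ici,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
    }
```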

Conclusions: Machine-learning models analyzing EHR data can retrospectively identify patients with ARDS across different institutions.

Source Journal

Critical Care Medicine (Medicine - Critical Care Medicine)
CiteScore: 16.30
Self-citation rate: 5.70%
Articles per year: 728
Time to first review: 2 months
About the journal: Critical Care Medicine is the premier peer-reviewed, scientific publication in critical care medicine. Directed to those specialists who treat patients in the ICU and CCU, including chest physicians, surgeons, pediatricians, pharmacists/pharmacologists, anesthesiologists, critical care nurses, and other healthcare professionals, Critical Care Medicine covers all aspects of acute and emergency care for the critically ill or injured patient. Each issue presents critical care practitioners with clinical breakthroughs that lead to better patient care, the latest news on promising research, and advances in equipment and techniques.