George S Chen, Terry Lee, Jennifer L Y Tsang, Alexandra Binnie, Anne McCarthy, Juthaporn Cowan, Patrick Archambault, Francois Lellouche, Alexis F Turgeon, Jennifer Yoon, Francois Lamontagne, Allison McGeer, Josh Douglas, Peter Daley, Robert Fowler, David M Maslove, Brent W Winston, Todd C Lee, Karen C Tran, Matthew P Cheng, Donald C Vinh, John H Boyd, Keith R Walley, Joel Singer, John C Marshall, James A Russell
{"title":"Machine Learning Accurately Predicts Need for Critical Care Support in Patients Admitted to Hospital for Community-Acquired Pneumonia.","authors":"George S Chen, Terry Lee, Jennifer L Y Tsang, Alexandra Binnie, Anne McCarthy, Juthaporn Cowan, Patrick Archambault, Francois Lellouche, Alexis F Turgeon, Jennifer Yoon, Francois Lamontagne, Allison McGeer, Josh Douglas, Peter Daley, Robert Fowler, David M Maslove, Brent W Winston, Todd C Lee, Karen C Tran, Matthew P Cheng, Donald C Vinh, John H Boyd, Keith R Walley, Joel Singer, John C Marshall, James A Russell","doi":"10.1097/CCE.0000000000001262","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>Hospitalized community-acquired pneumonia (CAP) patients are admitted for ventilation, vasopressors, and renal replacement therapy (RRT). This study aimed to develop a machine learning (ML) model that predicts the need for such interventions and compare its accuracy to that of logistic regression (LR).</p><p><strong>Design: </strong>This retrospective observational study trained separate models using random-forest classifier (RFC), support vector machines (SVMs), Extreme Gradient Boosting (XGBoost), and multilayer perceptron (MLP) to predict three endpoints: eventual use of invasive ventilation, vasopressors, and RRT during hospitalization. RFC-based models were overall most accurate in a derivation COVID-19 CAP cohort and were validated in one COVID-19 CAP and two non-COVID-19 CAP cohorts.</p><p><strong>Setting: </strong>This study is part of the Community-Acquired Pneumonia: Toward InnoVAtive Treatment (CAPTIVATE) Research program.</p><p><strong>Patients: </strong>Two thousand four hundred twenty COVID-19 and 1909 non-COVID-19 CAP patients over 18 years old hospitalized and not needing invasive ventilation, vasopressors, and RRT on the day of admission were included.</p><p><strong>Interventions: </strong>None.</p><p><strong>Measurements and main results: </strong>Performance was evaluated with area under the receiver operating characteristic curve (AUROC) and accuracy. RFCs performed better than XGBoost, SVM, and MLP models. For comparison, we evaluated LR models in the same cohorts. AUROC was very high ranging from 0.74 to 0.95 in predicting ventilation, vasopressors, and RRT use in our derivation and validation cohorts. ML used and variables such as Fio2, Glasgow Coma Scale, and mean arterial pressure to predict ventilator, vasopressor use, creatinine, and potassium to predict RRT use. LR was less accurate than ML, with AUROC ranging 0.66 to 0.8.</p><p><strong>Conclusions: </strong>A ML algorithm more accurately predicts need of invasive ventilation, vasopressors, or RRT in hospitalized non-COVID-19 CAP and COVID-19 patients than regression models and could augment clinician judgment for triage and care of hospitalized CAP patients.</p>","PeriodicalId":93957,"journal":{"name":"Critical care explorations","volume":"7 6","pages":"e1262"},"PeriodicalIF":0.0000,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Critical care explorations","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1097/CCE.0000000000001262","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/6/1 0:00:00","PubModel":"eCollection","JCR":"Q4","JCRName":"Medicine","Score":null,"Total":0}
引用次数: 0
Abstract
Objectives: Patients hospitalized with community-acquired pneumonia (CAP) may deteriorate and require invasive ventilation, vasopressors, or renal replacement therapy (RRT). This study aimed to develop a machine learning (ML) model that predicts the need for such interventions and to compare its accuracy with that of logistic regression (LR).
Design: In this retrospective observational study, we trained separate models using random forest classifiers (RFCs), support vector machines (SVMs), extreme gradient boosting (XGBoost), and multilayer perceptrons (MLPs) to predict three endpoints: eventual use of invasive ventilation, vasopressors, and RRT during hospitalization. RFC-based models were overall the most accurate in a derivation COVID-19 CAP cohort and were validated in one COVID-19 CAP and two non-COVID-19 CAP cohorts.
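A minimal sketch of this kind of modeling setup (not the study's code) is shown below, using scikit-learn and XGBoost; the synthetic data and all parameter choices are illustrative stand-ins for the admission-day variables and tuning used in the study.

```python
# Hypothetical sketch: train the four classifier families described above on one
# binary endpoint (e.g., eventual invasive ventilation). Synthetic data stand in
# for the real admission-day predictors; none of this reflects the CAPTIVATE data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Synthetic, imbalanced stand-in for admission-day predictors and the outcome label.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# One model per classifier family; in the study a separate set was fit per endpoint.
models = {
    "RFC": RandomForestClassifier(n_estimators=500, random_state=42),
    "SVM": SVC(probability=True, random_state=42),
    "XGBoost": XGBClassifier(eval_metric="logloss", random_state=42),
    "MLP": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=42),
}
for name, model in models.items():
    model.fit(X_train, y_train)
```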
Setting: This study is part of the Community-Acquired Pneumonia: Toward InnoVAtive Treatment (CAPTIVATE) Research program.
Patients: Two thousand four hundred twenty COVID-19 CAP patients and 1909 non-COVID-19 CAP patients, all over 18 years old, who were hospitalized and did not need invasive ventilation, vasopressors, or RRT on the day of admission were included.
Interventions: None.
Measurements and main results: Performance was evaluated with area under the receiver operating characteristic curve (AUROC) and accuracy. RFC models performed better than XGBoost, SVM, and MLP models. For comparison, we evaluated LR models in the same cohorts. AUROC was high, ranging from 0.74 to 0.95, for predicting ventilation, vasopressor, and RRT use in our derivation and validation cohorts. The ML models used variables such as Fio2, Glasgow Coma Scale, and mean arterial pressure to predict ventilator and vasopressor use, and creatinine and potassium to predict RRT use. LR was less accurate than ML, with AUROC ranging from 0.66 to 0.80.
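Continuing the hypothetical sketch above, AUROC and accuracy on held-out data could be compared against an LR baseline along these lines; the numbers produced on synthetic data carry no relation to the results reported here.

```python
# Hypothetical continuation of the earlier sketch: add a logistic regression
# baseline and compare AUROC and accuracy on the held-out split.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score

models["LR"] = LogisticRegression(max_iter=1000).fit(X_train, y_train)

for name, model in models.items():
    proba = model.predict_proba(X_test)[:, 1]  # predicted probability of the endpoint
    preds = model.predict(X_test)
    print(f"{name}: AUROC={roc_auc_score(y_test, proba):.2f}, "
          f"accuracy={accuracy_score(y_test, preds):.2f}")
```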
Conclusions: An ML algorithm predicted the need for invasive ventilation, vasopressors, or RRT in hospitalized COVID-19 and non-COVID-19 CAP patients more accurately than regression models and could augment clinician judgment for triage and care of hospitalized CAP patients.