SMART: Development and Application of a Multimodal Multi-organ Trauma Screening Model for Abdominal Injuries in Emergency Settings

Yaning Wang, Jingfeng Zhang, Mingyang Li, Zheng Miao, Jing Wang, Kan He, Qi Yang, Lei Zhang, Lin Mu, Huimao Zhang

Academic Radiology, published 2024-12-16. DOI: 10.1016/j.acra.2024.11.056 (https://doi.org/10.1016/j.acra.2024.11.056)
Abstract
Rationale and objectives: Effective trauma care in emergency departments necessitates rapid diagnosis by interdisciplinary teams using various medical data. This study constructed a multimodal diagnostic model for abdominal trauma using deep learning on non-contrast computed tomography (CT) and unstructured text data, enhancing the speed and accuracy of solid organ assessments.
Materials and methods: Data were collected from patients undergoing abdominal CT scans. The SMART model (Screening for Multi-organ Assessment in Rapid Trauma) classifies trauma using text data (SMART_GPT), non-contrast CT scans (SMART_Image), or both. SMART_GPT uses the GPT-4 embedding API for text feature extraction, whereas SMART_Image incorporates nnU-Net and DenseNet121 for segmentation and classification. A composite model was developed by integrating multimodal data via logistic regression of SMART_GPT, SMART_Image, and patient demographics (age and gender).
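The abstract gives no implementation details, but the fusion step it describes — combining the SMART_GPT and SMART_Image outputs with age and gender via logistic regression — can be sketched as follows. This is a minimal illustration using scikit-learn; the variable names, feature encodings, and synthetic data are assumptions for demonstration, not taken from the paper.

```python
# Sketch of a late-fusion composite model, assuming each branch emits a
# per-patient trauma probability that is combined with demographics via
# logistic regression. All data below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200  # hypothetical number of training cases

smart_gpt_score = rng.uniform(0, 1, n)    # output of the text-based branch
smart_image_score = rng.uniform(0, 1, n)  # output of the CT-based branch
age = rng.integers(18, 90, n)             # age in years
gender = rng.integers(0, 2, n)            # binary encoding, e.g. 0 = female
y = rng.integers(0, 2, n)                 # abdominal trauma label (synthetic)

X = np.column_stack([smart_gpt_score, smart_image_score, age, gender])

fusion = LogisticRegression(max_iter=1000)
fusion.fit(X, y)

# Fused trauma probability for a new patient (illustrative feature values):
p = fusion.predict_proba([[0.72, 0.64, 45, 1]])[0, 1]
print(f"fused trauma probability: {p:.3f}")
```

A logistic-regression fusion layer of this kind keeps the combined model interpretable: each branch's contribution is a single learned coefficient, which suits a screening setting where clinicians may need to audit why a case was flagged.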
Results: This study included 2638 patients (459 positive and 2179 negative abdominal trauma cases). A trauma-focused test set included 1006 patients with 1632 consecutive real-world data points. SMART_GPT achieved a sensitivity of 81.3% and an area under the receiver operating characteristic curve (AUC) of 0.88 based on unstructured text data. SMART_Image exhibited a sensitivity of 87.5% and an AUC of 0.81 on non-contrast CT data, with the average sensitivity exceeding 90% at the organ level. The integrated SMART model achieved a sensitivity of 93.8% and an AUC of 0.88. In emergency department simulations, SMART reduced waiting times by over 64.24%.
Conclusion: SMART provides rapid, objective trauma diagnostics, improving emergency care efficiency, reducing patient wait times, and enabling multimodal screening in diverse emergency contexts.
About the Journal
Academic Radiology publishes original reports of clinical and laboratory investigations in diagnostic imaging, the diagnostic use of radioactive isotopes, computed tomography, positron emission tomography, magnetic resonance imaging, ultrasound, digital subtraction angiography, image-guided interventions and related techniques. It also includes brief technical reports describing original observations, techniques, and instrumental developments; state-of-the-art reports on clinical issues, new technology and other topics of current medical importance; meta-analyses; scientific studies and opinions on radiologic education; and letters to the Editor.