Sebastiaan Hermans, Zixuan Hu, Robyn L Ball, Hui Ming Lin, Luciano M Prevedello, Ferco H Berger, Ibrahim Yusuf, Jeffrey D Rudie, Maryam Vazirabad, Adam E Flanders, George Shih, John Mongan, Savvas Nicolaou, Brett S Marinelli, Melissa A Davis, Kirti Magudia, Ervin Sejdić, Errol Colak
{"title":"RSNA 2023 腹部创伤人工智能挑战回顾与结果分析。","authors":"Sebastiaan Hermans, Zixuan Hu, Robyn L Ball, Hui Ming Lin, Luciano M Prevedello, Ferco H Berger, Ibrahim Yusuf, Jeffrey D Rudie, Maryam Vazirabad, Adam E Flanders, George Shih, John Mongan, Savvas Nicolaou, Brett S Marinelli, Melissa A Davis, Kirti Magudia, Ervin Sejdić, Errol Colak","doi":"10.1148/ryai.240334","DOIUrl":null,"url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To evaluate the performance of the winning machine learning (ML) models from the 2023 RSNA Abdominal Trauma Detection Artificial Intelligence Challenge. Materials and Methods The competition was hosted on Kaggle and took place between July 26, 2023, to October 15, 2023. The multicenter competition dataset consisted of 4,274 abdominal trauma CT scans in which solid organs (liver, spleen and kidneys) were annotated as healthy, low-grade or high-grade injury. Studies were labeled as positive or negative for the presence of bowel/mesenteric injury and active extravasation. In this study, performances of the 8 award-winning models were retrospectively assessed and compared using various metrics, including the area under the receiver operating characteristic curve (AUC), for each injury category. The reported mean values of these metrics were calculated by averaging the performance across all models for each specified injury type. Results The models exhibited strong performance in detecting solid organ injuries, particularly high-grade injuries. For binary detection of injuries, the models demonstrated mean AUC values of 0.92 (range:0.91-0.94) for liver, 0.91 (range:0.87-0.93) for splenic, and 0.94 (range:0.93-0.95) for kidney injuries. The models achieved mean AUC values of 0.98 (range:0.96-0.98) for high-grade liver, 0.98 (range:0.97-0.99) for high-grade splenic, and 0.98 (range:0.97-0.98) for high-grade kidney injuries. For the detection of bowel/mesenteric injuries and active extravasation, the models demonstrated mean AUC values of 0.85 (range:0.74-0.73) and 0.85 (range:0.79-0.89) respectively. Conclusion The award-winning models from the AI challenge demonstrated strong performance in the detection of traumatic abdominal injuries on CT scans, particularly high-grade injuries. These models may serve as a performance baseline for future investigations and algorithms. 
©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240334"},"PeriodicalIF":8.1000,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"RSNA 2023 Abdominal Trauma AI Challenge Review and Outcomes Analysis.\",\"authors\":\"Sebastiaan Hermans, Zixuan Hu, Robyn L Ball, Hui Ming Lin, Luciano M Prevedello, Ferco H Berger, Ibrahim Yusuf, Jeffrey D Rudie, Maryam Vazirabad, Adam E Flanders, George Shih, John Mongan, Savvas Nicolaou, Brett S Marinelli, Melissa A Davis, Kirti Magudia, Ervin Sejdić, Errol Colak\",\"doi\":\"10.1148/ryai.240334\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p><i>\\\"Just Accepted\\\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To evaluate the performance of the winning machine learning (ML) models from the 2023 RSNA Abdominal Trauma Detection Artificial Intelligence Challenge. Materials and Methods The competition was hosted on Kaggle and took place between July 26, 2023, to October 15, 2023. The multicenter competition dataset consisted of 4,274 abdominal trauma CT scans in which solid organs (liver, spleen and kidneys) were annotated as healthy, low-grade or high-grade injury. Studies were labeled as positive or negative for the presence of bowel/mesenteric injury and active extravasation. In this study, performances of the 8 award-winning models were retrospectively assessed and compared using various metrics, including the area under the receiver operating characteristic curve (AUC), for each injury category. The reported mean values of these metrics were calculated by averaging the performance across all models for each specified injury type. Results The models exhibited strong performance in detecting solid organ injuries, particularly high-grade injuries. For binary detection of injuries, the models demonstrated mean AUC values of 0.92 (range:0.91-0.94) for liver, 0.91 (range:0.87-0.93) for splenic, and 0.94 (range:0.93-0.95) for kidney injuries. The models achieved mean AUC values of 0.98 (range:0.96-0.98) for high-grade liver, 0.98 (range:0.97-0.99) for high-grade splenic, and 0.98 (range:0.97-0.98) for high-grade kidney injuries. For the detection of bowel/mesenteric injuries and active extravasation, the models demonstrated mean AUC values of 0.85 (range:0.74-0.73) and 0.85 (range:0.79-0.89) respectively. Conclusion The award-winning models from the AI challenge demonstrated strong performance in the detection of traumatic abdominal injuries on CT scans, particularly high-grade injuries. These models may serve as a performance baseline for future investigations and algorithms. 
©RSNA, 2024.</p>\",\"PeriodicalId\":29787,\"journal\":{\"name\":\"Radiology-Artificial Intelligence\",\"volume\":\" \",\"pages\":\"e240334\"},\"PeriodicalIF\":8.1000,\"publicationDate\":\"2024-11-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Radiology-Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1148/ryai.240334\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Radiology-Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1148/ryai.240334","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}