Le Nguyen Binh, Nguyen Thanh Nhu, Vu Pham Thao Vy, Do Le Hoang Son, Truong Nguyen Khanh Hung, Nguyen Bach, Hoang Quoc Huy, Le Van Tuan, Nguyen Quoc Khanh Le, Jiunn-Horng Kang
{"title":"基于 AO/OTA 分类检测小儿前臂远端骨折的多类深度学习模型","authors":"Le Nguyen Binh, Nguyen Thanh Nhu, Vu Pham Thao Vy, Do Le Hoang Son, Truong Nguyen Khanh Hung, Nguyen Bach, Hoang Quoc Huy, Le Van Tuan, Nguyen Quoc Khanh Le, Jiunn-Horng Kang","doi":"10.1007/s10278-024-00968-4","DOIUrl":null,"url":null,"abstract":"<p>Common pediatric distal forearm fractures necessitate precise detection. To support prompt treatment planning by clinicians, our study aimed to create a multi-class convolutional neural network (CNN) model for pediatric distal forearm fractures, guided by the AO Foundation/Orthopaedic Trauma Association (AO/ATO) classification system for pediatric fractures. The GRAZPEDWRI-DX dataset (2008–2018) of wrist X-ray images was used. We labeled images into four fracture classes (FRM, FUM, FRE, and FUE with F, fracture; R, radius; U, ulna; M, metaphysis; and E, epiphysis) based on the pediatric AO/ATO classification. We performed multi-class classification by training a YOLOv4-based CNN object detection model with 7006 images from 1809 patients (80% for training and 20% for validation). An 88-image test set from 34 patients was used to evaluate the model performance, which was then compared to the diagnosis performances of two readers—an orthopedist and a radiologist. The overall mean average precision levels on the validation set in four classes of the model were 0.97, 0.92, 0.95, and 0.94, respectively. On the test set, the model’s performance included sensitivities of 0.86, 0.71, 0.88, and 0.89; specificities of 0.88, 0.94, 0.97, and 0.98; and area under the curve (AUC) values of 0.87, 0.83, 0.93, and 0.94, respectively. The best performance among the three readers belonged to the radiologist, with a mean AUC of 0.922, followed by our model (0.892) and the orthopedist (0.830). Therefore, using the AO/OTA concept, our multi-class fracture detection model excelled in identifying pediatric distal forearm fractures.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":"11 1","pages":""},"PeriodicalIF":2.9000,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multi-Class Deep Learning Model for Detecting Pediatric Distal Forearm Fractures Based on the AO/OTA Classification\",\"authors\":\"Le Nguyen Binh, Nguyen Thanh Nhu, Vu Pham Thao Vy, Do Le Hoang Son, Truong Nguyen Khanh Hung, Nguyen Bach, Hoang Quoc Huy, Le Van Tuan, Nguyen Quoc Khanh Le, Jiunn-Horng Kang\",\"doi\":\"10.1007/s10278-024-00968-4\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Common pediatric distal forearm fractures necessitate precise detection. To support prompt treatment planning by clinicians, our study aimed to create a multi-class convolutional neural network (CNN) model for pediatric distal forearm fractures, guided by the AO Foundation/Orthopaedic Trauma Association (AO/ATO) classification system for pediatric fractures. The GRAZPEDWRI-DX dataset (2008–2018) of wrist X-ray images was used. We labeled images into four fracture classes (FRM, FUM, FRE, and FUE with F, fracture; R, radius; U, ulna; M, metaphysis; and E, epiphysis) based on the pediatric AO/ATO classification. We performed multi-class classification by training a YOLOv4-based CNN object detection model with 7006 images from 1809 patients (80% for training and 20% for validation). 
An 88-image test set from 34 patients was used to evaluate the model performance, which was then compared to the diagnosis performances of two readers—an orthopedist and a radiologist. The overall mean average precision levels on the validation set in four classes of the model were 0.97, 0.92, 0.95, and 0.94, respectively. On the test set, the model’s performance included sensitivities of 0.86, 0.71, 0.88, and 0.89; specificities of 0.88, 0.94, 0.97, and 0.98; and area under the curve (AUC) values of 0.87, 0.83, 0.93, and 0.94, respectively. The best performance among the three readers belonged to the radiologist, with a mean AUC of 0.922, followed by our model (0.892) and the orthopedist (0.830). Therefore, using the AO/OTA concept, our multi-class fracture detection model excelled in identifying pediatric distal forearm fractures.</p>\",\"PeriodicalId\":50214,\"journal\":{\"name\":\"Journal of Digital Imaging\",\"volume\":\"11 1\",\"pages\":\"\"},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2024-02-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Digital Imaging\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1007/s10278-024-00968-4\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Digital Imaging","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1007/s10278-024-00968-4","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Multi-Class Deep Learning Model for Detecting Pediatric Distal Forearm Fractures Based on the AO/OTA Classification
Pediatric distal forearm fractures are common and require precise detection. To support prompt treatment planning by clinicians, our study aimed to create a multi-class convolutional neural network (CNN) model for pediatric distal forearm fractures, guided by the AO Foundation/Orthopaedic Trauma Association (AO/OTA) classification system for pediatric fractures. The GRAZPEDWRI-DX dataset (2008–2018) of wrist X-ray images was used. We labeled images into four fracture classes (FRM, FUM, FRE, and FUE, with F, fracture; R, radius; U, ulna; M, metaphysis; and E, epiphysis) based on the pediatric AO/OTA classification. We performed multi-class classification by training a YOLOv4-based CNN object detection model with 7006 images from 1809 patients (80% for training and 20% for validation). An 88-image test set from 34 patients was used to evaluate the model performance, which was then compared with the diagnostic performance of two readers, an orthopedist and a radiologist. The mean average precision levels on the validation set for the four classes of the model were 0.97, 0.92, 0.95, and 0.94, respectively. On the test set, the model achieved sensitivities of 0.86, 0.71, 0.88, and 0.89; specificities of 0.88, 0.94, 0.97, and 0.98; and area under the curve (AUC) values of 0.87, 0.83, 0.93, and 0.94, respectively. The best performance among the three readers belonged to the radiologist, with a mean AUC of 0.922, followed by our model (0.892) and the orthopedist (0.830). Therefore, using the AO/OTA concept, our multi-class fracture detection model excelled in identifying pediatric distal forearm fractures.
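The abstract reports per-class sensitivity, specificity, and AUC for the four AO/OTA-based fracture classes. Below is a minimal, hedged sketch of how such one-vs-rest metrics could be computed from per-image ground-truth labels and model confidence scores; the class names follow the abstract's FRM/FUM/FRE/FUE scheme, while the function name, threshold, and placeholder arrays are illustrative assumptions and not taken from the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Fracture classes from the pediatric AO/OTA-based labeling scheme.
CLASSES = ["FRM", "FUM", "FRE", "FUE"]

def per_class_metrics(y_true, y_score, threshold=0.5):
    """Compute one-vs-rest sensitivity, specificity, and AUC per class.

    y_true  : (n_images, n_classes) binary ground-truth matrix
    y_score : (n_images, n_classes) model confidence scores in [0, 1]
    """
    metrics = {}
    for i, name in enumerate(CLASSES):
        truth = y_true[:, i]
        pred = (y_score[:, i] >= threshold).astype(int)
        tn, fp, fn, tp = confusion_matrix(truth, pred, labels=[0, 1]).ravel()
        metrics[name] = {
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
            "auc": roc_auc_score(truth, y_score[:, i]),
        }
    return metrics

if __name__ == "__main__":
    # Placeholder data for illustration only (not the study's test set).
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=(88, 4))
    y_score = np.clip(y_true * 0.6 + rng.random((88, 4)) * 0.5, 0, 1)
    for cls, m in per_class_metrics(y_true, y_score).items():
        print(cls, {k: round(v, 3) for k, v in m.items()})
```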
Journal Introduction
The Journal of Digital Imaging (JDI) is the official peer-reviewed journal of the Society for Imaging Informatics in Medicine (SIIM). JDI’s goal is to enhance the exchange of knowledge encompassed by the general topic of Imaging Informatics in Medicine such as research and practice in clinical, engineering, and information technologies and techniques in all medical imaging environments. JDI topics are of interest to researchers, developers, educators, physicians, and imaging informatics professionals.
Suggested Topics
PACS and component systems; imaging informatics for the enterprise; image-enabled electronic medical records; RIS and HIS; digital image acquisition; image processing; image data compression; 3D, visualization, and multimedia; speech recognition; computer-aided diagnosis; facilities design; imaging vocabularies and ontologies; Transforming the Radiological Interpretation Process (TRIP™); DICOM and other standards; workflow and process modeling and simulation; quality assurance; archive integrity and security; teleradiology; digital mammography; and radiological informatics education.