{"title":"Intra-Individual Reproducibility of Automated Abdominal Organ Segmentation—Performance of TotalSegmentator Compared to Human Readers and an Independent nnU-Net Model","authors":"Lorraine Abel, Jakob Wasserthal, Manfred T. Meyer, Jan Vosshenrich, Shan Yang, Ricardo Donners, Markus Obmann, Daniel Boll, Elmar Merkle, Hanns-Christian Breit, Martin Segeroth","doi":"10.1007/s10278-024-01265-w","DOIUrl":"https://doi.org/10.1007/s10278-024-01265-w","url":null,"abstract":"<p>The purpose of this study is to assess the segmentation reproducibility of an artificial intelligence-based algorithm, TotalSegmentator, across 34 anatomical structures using multiphasic abdominal CT scans comparing unenhanced, arterial, and portal venous phases in the same patients. A total of 1252 multiphasic abdominal CT scans acquired at our institution between January 1, 2012, and December 31, 2022, were retrospectively included. TotalSegmentator was used to derive volumetric measurements of 34 abdominal organs and structures from a total of 3756 CT series. Reproducibility was evaluated across three contrast phases per CT and compared to two human readers and an independent nnU-Net trained on the BTCV dataset. Relative deviation in segmented volumes and absolute volume deviations (AVD) were reported. Volume deviation within 5% was considered reproducible. Thus, non-inferiority testing was conducted using a 5% margin. Twenty-nine out of 34 structures had volume deviations within 5% and were considered reproducible. Volume deviations for the adrenal glands, gallbladder, spleen, and duodenum were above 5%. The highest reproducibility was observed for bones (− 0.58% [95% CI: − 0.58, − 0.57]) and muscles (− 0.33% [− 0.35, − 0.32]). Among abdominal organs, volume deviation was 1.67% (1.60, 1.74). TotalSegmentator outperformed the reproducibility of the nnU-Net trained on the BTCV dataset with an AVD of 6.50% (6.41, 6.59) vs. 
10.03% (9.86, 10.20; <i>p</i> < 0.0001), most notably in cases with pathologic findings. Similarly, TotalSegmentator’s AVD between different contrast phases was superior to the interreader AVD for the same contrast phase (<i>p</i> = 0.036). TotalSegmentator demonstrated high intra-individual reproducibility for most abdominal structures in multiphasic abdominal CT scans. Although reproducibility was lower in pathologic cases, it outperformed both human readers and an nnU-Net trained on the BTCV dataset.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
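The study's 5% reproducibility margin lends itself to a simple check: compute each phase's relative deviation from the cross-phase mean organ volume and test it against the margin. A minimal sketch in Python, where the liver volumes are hypothetical stand-ins for what TotalSegmentator would measure on the three CT series:

```python
import numpy as np

def relative_volume_deviation(volumes_ml):
    """Percent deviation of each phase's volume from the cross-phase mean."""
    v = np.asarray(volumes_ml, dtype=float)
    return (v - v.mean()) / v.mean() * 100.0

# Hypothetical liver volumes (mL) for unenhanced, arterial, and portal venous phases.
deviations = relative_volume_deviation([1510.0, 1495.0, 1520.0])

# The study treats a structure as reproducible if deviations stay within a 5% margin.
is_reproducible = bool(np.abs(deviations).max() <= 5.0)
```

By construction the deviations sum to zero; the quantity of interest is their spread across phases.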
{"title":"Teleradiology-Based Referrals for Patients with Gastroenterological Diseases Between Tertiary and Regional Hospitals: A Hospital-to-Hospital Approach","authors":"Kosuke Suzuki, Hiroaki Saito, Yoshika Saito, Akashi Endo, Daichi Togo, Risa Hanada, Rie Iwaya, Toshinori Sato, Kei Niida, Ryuta Suzuki, Junichi Togashi, Satoshi Ito, Yukari Tanaka, Yoshitaka Nawata, Kimihiro Igarashi, Hidetaka Hamamoto, Akihiko Ozaki, Tetsuya Tanimoto, Yasuteru Shimamura, Shunichi Sugawara, Masaki Nakashima, Toru Okuzono, Masato Nakahori, Akimichi Chonan, Tomoki Matsuda","doi":"10.1007/s10278-024-01264-x","DOIUrl":"https://doi.org/10.1007/s10278-024-01264-x","url":null,"abstract":"<p>Teleradiology is recognized for fostering collaboration between regional and tertiary hospitals. However, its application in gastroenterological diseases remains underexplored. This study aimed to assess the effectiveness of teleradiology in improving gastroenterological care. This retrospective study analyzed patients with gastroenterological diseases in a tertiary hospital who were referred from a regional hospital using a cloud-based radiology image-sharing system between July 2020 and June 2023. Our primary focus was to conduct a descriptive statistical analysis evaluating patient characteristics and the referral process, and to analyze the timeframes from referral to transfer and from the start of treatment to discharge, as well as the outcomes. We analyzed 56 patients, with 45 (80.4%) presenting with hepatobiliary pancreatic disease. The most frequent condition was common bile duct stones (17 cases). Forty-nine cases were transferred for inpatient treatment, four underwent endoscopic examinations as outpatients, and two had imaging consultation without subsequent hospital visits. On the referral day, 16 patients were transferred; the remaining 33 (67.3%) were placed on a waiting list starting the subsequent day. 
The median time from referral to admission was 1 day (range: 0–14 days), and the median time from referral to treatment was 2 days (range: 0–14 days). Remote image-sharing systems ensure accurate imaging at referral, preventing care delays. Through collaboration between regional and tertiary hospitals, teleradiology may also be useful for gastroenterological diseases.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Multi-step Integrative Workflow Implementation to Improve Documentation of Point of Care Ultrasound in Medical Intensive Care Unit","authors":"Vishal Deepak, Haroon Ahmed, Joseph Minardi, Sunil Sharma","doi":"10.1007/s10278-024-01260-1","DOIUrl":"https://doi.org/10.1007/s10278-024-01260-1","url":null,"abstract":"<p>Point of care ultrasound (POCUS) provides quick bedside assessment for diagnosing and managing life-threatening conditions in critical care medicine. There has been increasing interest in developing infrastructure to archive images, record clinical interpretation, assess quality, and recoup revenue for POCUS. We present a simple workflow based on systems integration of the electronic medical record, ultrasound machines, and picture archiving and communication system to facilitate POCUS documentation and billing. We tracked the number of POCUS examinations performed before and after the introduction of the structured integration and observed a linear increase over time post-intervention. Our innovative and integrated POCUS workflow is an effective way to document and bill POCUS.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessment of Age-Related Differences in Lower Leg Muscles Quality Using Radiomic Features of Magnetic Resonance Images","authors":"Takuro Shiiba, Suzumi Mori, Takuya Shimozono, Shuji Ito, Kazuki Takano","doi":"10.1007/s10278-024-01268-7","DOIUrl":"https://doi.org/10.1007/s10278-024-01268-7","url":null,"abstract":"<p>Sarcopenia, characterised by a decline in muscle mass and strength, affects the health of the elderly, leading to increased falls, hospitalisation, and mortality rates. Muscle quality, reflecting microscopic and macroscopic muscle changes, is a critical determinant of physical function. To utilise radiomic features extracted from magnetic resonance (MR) images to assess age-related changes in muscle quality, a dataset of 24 adults, divided into older (male/female: 6/6, 66–79 years) and younger (male/female: 6/6, 21–31 years) groups, was used to investigate the radiomics features of the dorsiflexor and plantar flexor muscles of the lower leg that are critical for mobility. MR images were processed using MaZda software for feature extraction. Dimensionality reduction was performed using principal component analysis and recursive feature elimination, followed by classification using machine learning models, such as support vector machine, extreme gradient boosting, and naïve Bayes. A leave-one-out validation test was used to train and test the classifiers, and the area under the receiver operating characteristic curve (AUC) was used to evaluate the classification performance. The analysis revealed significant differences in radiomic feature distributions between age groups, with older adults showing higher complexity and variability in muscle texture. The plantar flexors showed similar or higher AUCs than the dorsiflexors in all models. When the dorsiflexor muscles were combined with the plantar flexor muscles, they tended to have a higher AUC than when they were used alone. 
Radiomic features in lower-leg MR images reflect ageing, especially in the plantar flexor muscles. Radiomic analysis can offer a deeper understanding of age-related muscle quality than traditional muscle mass assessments.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
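Texture descriptors of the kind MaZda extracts can be illustrated with a gray-level co-occurrence matrix (GLCM), one of the classic radiomic feature families. A rough numpy sketch; the quantization scheme, offset, and toy images are illustrative choices, not the authors' exact pipeline:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    scaled = img.astype(float) / max(float(img.max()), 1e-9)
    q = np.minimum((scaled * levels).astype(int), levels - 1)  # quantize gray levels
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1  # count co-occurring level pairs
    return m / m.sum()

def glcm_contrast(p):
    """Contrast: large when co-occurring gray levels differ strongly."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

flat = np.ones((8, 8))                        # homogeneous texture -> zero contrast
checker = np.indices((8, 8)).sum(axis=0) % 2  # maximally varying texture
```

Higher texture complexity, as reported for the older group, shows up as larger contrast-type GLCM features.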
{"title":"Automated ASPECTS Segmentation and Scoring Tool: a Method Tailored for a Colombian Telestroke Network","authors":"Esteban Ortiz, Juan Rivera, Manuel Granja, Nelson Agudelo, Marcela Hernández Hoyos, Antonio Salazar","doi":"10.1007/s10278-024-01258-9","DOIUrl":"https://doi.org/10.1007/s10278-024-01258-9","url":null,"abstract":"<p>To evaluate our two non-machine learning (non-ML)-based algorithmic approaches for detecting early ischemic infarcts on brain CT images of patients with acute ischemic stroke symptoms, tailored to our local population, to be incorporated in our telestroke software. One hundred and thirteen acute stroke patients, excluding hemorrhagic, subacute, and chronic patients, with accessible brain CT images were divided into calibration and test sets. The gold standard was determined through consensus among three neuroradiologists. Four neuroradiologists independently reported Alberta Stroke Program Early CT Scores (ASPECTSs). ASPECTSs were also obtained using a commercial ML solution (CMLS), and our two methods, namely the Mean Hounsfield Unit (HU) relative difference (RELDIF) and the density distribution equivalence test (DDET), which statistically analyze the HUs of each region and its contralateral side. Automated segmentation was perfect for cortical regions, while minimal adjustment was required for basal ganglia regions. For dichotomized-ASPECTSs (ASPECTS < 6) in the test set, the area under the receiver operating characteristic curve (AUC) was 0.85 for the DDET method, 0.84 for the RELDIF approach, 0.64 for the CMLS, and ranged from 0.71–0.89 for the neuroradiologists. The accuracy was 0.85 for the DDET method, 0.88 for the RELDIF approach, and ranged from 0.83–0.96 for the neuroradiologists. Equivalence at a margin of 5% was documented among the DDET, RELDIF, and gold standard on mean ASPECTSs. 
Noninferiority tests of the AUC and accuracy of infarct detection revealed that both DDET and RELDIF performed similarly to the CMLS and to at least one neuroradiologist. The alignment of our methods with the evaluations of the neuroradiologists and the CMLS indicates their potential to serve as supportive tools in clinical settings, facilitating prompt and accurate stroke diagnosis, especially in health care settings, such as Colombia, where neuroradiologists are scarce.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
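The RELDIF idea, comparing each ASPECTS region's mean attenuation with its mirrored contralateral region, reduces to a one-line computation once the regions are segmented. A hedged sketch: the HU values and the decision threshold below are invented for illustration, not the study's calibrated cutoffs:

```python
import numpy as np

def reldif(region_hu, mirror_hu):
    """Relative difference (%) of mean HU between a region and its contralateral
    mirror. Early ischemic change lowers attenuation, driving this value negative."""
    mean_region, mean_mirror = np.mean(region_hu), np.mean(mirror_hu)
    return (mean_region - mean_mirror) / mean_mirror * 100.0

# Hypothetical mean attenuations: a subtly hypodense region vs. its mirror.
score = reldif([30.0, 31.0, 29.0], [34.0, 35.0, 33.0])
affected = score < -5.0  # illustrative threshold, not the paper's
```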
{"title":"Vital Characteristics Cellular Neural Network (VCeNN) for Melanoma Lesion Segmentation: A Biologically Inspired Deep Learning Approach","authors":"Tongxin Yang, Qilin Huang, Fenglin Cai, Jie Li, Li Jiang, Yulong Xia","doi":"10.1007/s10278-024-01257-w","DOIUrl":"https://doi.org/10.1007/s10278-024-01257-w","url":null,"abstract":"<p>Cutaneous melanoma is a highly lethal form of cancer. Developing a medical image segmentation model capable of accurately delineating melanoma lesions with high robustness and generalization presents a formidable challenge. This study draws inspiration from cellular functional characteristics and natural selection, proposing a novel medical segmentation model named the vital characteristics cellular neural network. This model incorporates vital characteristics observed in multicellular organisms, including memory, adaptation, apoptosis, and division. The memory module enables the network to rapidly adapt to input data during the early stages of training, accelerating model convergence. The adaptation module allows neurons to select the appropriate activation function based on varying environmental conditions. The apoptosis module reduces the risk of overfitting by pruning neurons with low activation values. The division module enhances the network’s learning capacity by duplicating neurons with high activation values. Experimental evaluations demonstrate the efficacy of this model in enhancing the performance of neural networks for medical image segmentation. 
The proposed method achieves outstanding results across numerous publicly available datasets, with an F1 score of 0.901, an Intersection over Union of 0.841, and a Dice coefficient of 0.913, indicating its potential to contribute significantly to the field of medical image analysis and to facilitate accurate and efficient segmentation of medical imagery.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
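The reported F1, Intersection over Union, and Dice numbers are standard overlap metrics on binary lesion masks; for pixel-wise binary segmentation, F1 and Dice coincide. A small sketch of how they are computed, with toy masks of our own rather than the paper's data:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|). Equals pixel-wise F1 for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def iou(a, b):
    """Intersection over Union: |A∩B| / |A∪B|."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

pred = np.array([[1, 1], [0, 0]])   # predicted lesion mask
truth = np.array([[1, 0], [0, 0]])  # ground-truth mask
```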
{"title":"Septic Arthritis Modeling Using Sonographic Fusion with Attention and Selective Transformation: a Preliminary Study","authors":"Chung-Ming Lo, Kuo-Lung Lai","doi":"10.1007/s10278-024-01259-8","DOIUrl":"https://doi.org/10.1007/s10278-024-01259-8","url":null,"abstract":"<p>Conventionally diagnosing septic arthritis relies on detecting the causal pathogens in samples of synovial fluid, synovium, or blood. However, isolating these pathogens through cultures takes several days, thus delaying both diagnosis and treatment. Establishing a quantitative classification model from ultrasound images for rapid septic arthritis diagnosis is therefore needed. For the study, a database composed of 342 images of non-septic arthritis and 168 images of septic arthritis produced by grayscale (GS) and power Doppler (PD) ultrasound was constructed. In the proposed architecture of fusion with attention and selective transformation (FAST), both groups of images were combined in a vision transformer (ViT) with the convolutional block attention module, which incorporates spatial, modality, and channel features. Fivefold cross-validation was applied to evaluate the generalization ability. The FAST architecture achieved an accuracy, sensitivity, specificity, and area under the curve (AUC) of 86.33%, 80.66%, 90.25%, and 0.92, respectively. These performances were higher than using conventional ViT (82.14%) and significantly better than using one modality alone (GS 73.88%, PD 72.02%), with the <i>p</i>-value being less than 0.01. Through the integration of multi-modality and the extraction of multiple channel features, the established model provided promising accuracy and AUC in septic arthritis classification. 
The end-to-end learning of ultrasound features can provide both rapid and objective assessment suggestions for future clinical use.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
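The channel-attention half of the convolutional block attention module used in FAST can be sketched in a few lines: average- and max-pooled channel descriptors pass through a shared two-layer MLP, and a sigmoid gate reweights the channels. The weights below are random stand-ins, not trained parameters from the paper:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention for a (C, H, W) feature map."""
    avg = feat.mean(axis=(1, 2))   # (C,) average-pooled channel descriptor
    mx = feat.max(axis=(1, 2))     # (C,) max-pooled channel descriptor
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)        # shared MLP, ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))  # sigmoid gate per channel
    return feat * gate[:, None, None]

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))  # e.g., fused grayscale + power Doppler features
w1 = rng.standard_normal((2, 4))       # bottleneck: C=4 -> C/r=2
w2 = rng.standard_normal((4, 2))
out = channel_attention(feat, w1, w2)
```

Because the gate lies in (0, 1), attention can only attenuate channels, never amplify them.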
{"title":"Investigation of ComBat Harmonization on Radiomic and Deep Features from Multi-Center Abdominal MRI Data","authors":"Wei Jia, Hailong Li, Redha Ali, Krishna P. Shanbhogue, William R. Masch, Anum Aslam, David T. Harris, Scott B. Reeder, Jonathan R. Dillman, Lili He","doi":"10.1007/s10278-024-01253-0","DOIUrl":"https://doi.org/10.1007/s10278-024-01253-0","url":null,"abstract":"<p>ComBat harmonization has been developed to remove non-biological variations for data in multi-center research applying artificial intelligence (AI). We investigated the effectiveness of ComBat harmonization on radiomic and deep features extracted from large, multi-center abdominal MRI data. A retrospective study was conducted on T2-weighted (T2W) abdominal MRI data retrieved from individual patients with suspected or known chronic liver disease at three study sites. MRI data were acquired using systems from three manufacturers and two field strengths. Radiomic features and deep features were extracted using the PyRadiomics pipeline and a Swin Transformer. ComBat was used to harmonize radiomic and deep features across different manufacturers and field strengths. Student’s <i>t</i>-test, ANOVA test, and Cohen’s F score were applied to assess the differences in individual features before and after ComBat harmonization. Between the two field strengths, 76.7%, 52.9%, and 26.7% of radiomic features and 89.0%, 56.5%, and 0.1% of deep features were significantly different for the three manufacturers, respectively. Among the three manufacturers, 90.1% and 75.0% of radiomic features and 89.3% and 84.1% of deep features were significantly different at the two field strengths, respectively. After ComBat harmonization, there were no significant differences in radiomic and deep features among manufacturers or field strengths based on <i>t-</i>tests or ANOVA tests. Reduced Cohen’s F scores were consistently observed after ComBat harmonization. 
ComBat harmonization effectively removes the non-biological variation due to system manufacturer and/or field strength from radiomic and deep features in large multi-center clinical abdominal MRI datasets.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
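The core of what ComBat estimates, per-batch location and scale shifts, can be illustrated with a plain standardization across batches. Real ComBat additionally shrinks the batch estimates with empirical Bayes and can preserve biological covariates, so treat this as a simplified sketch rather than the procedure the authors ran:

```python
import numpy as np

def naive_harmonize(features, batch):
    """Align each batch's feature mean/std to the pooled mean/std.
    A simplified stand-in for ComBat (no empirical Bayes shrinkage,
    no protection of biological covariates)."""
    x = np.asarray(features, dtype=float)
    batch = np.asarray(batch)
    out = x.copy()
    pooled_mean, pooled_std = x.mean(axis=0), x.std(axis=0)
    for b in np.unique(batch):
        idx = batch == b
        bm, bs = x[idx].mean(axis=0), x[idx].std(axis=0)
        out[idx] = (x[idx] - bm) / np.where(bs > 0, bs, 1.0) * pooled_std + pooled_mean
    return out

# Two 'scanners' whose feature values differ by a large non-biological offset.
feats = np.array([[0.0], [2.0], [10.0], [12.0]])
scanner = np.array(["A", "A", "B", "B"])
harmonized = naive_harmonize(feats, scanner)
```

After alignment, the per-scanner means coincide, which is the effect the paper's significance tests verify for true ComBat.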
{"title":"Out-of-Distribution Detection and Radiological Data Monitoring Using Statistical Process Control","authors":"Ghada Zamzmi, Kesavan Venkatesh, Brandon Nelson, Smriti Prathapan, Paul Yi, Berkman Sahiner, Jana G. Delfino","doi":"10.1007/s10278-024-01212-9","DOIUrl":"https://doi.org/10.1007/s10278-024-01212-9","url":null,"abstract":"<p>Machine learning (ML) models often fail with data that deviates from their training distribution. This is a significant concern for ML-enabled devices as data drift may lead to unexpected performance. This work introduces a new framework for out of distribution (OOD) detection and data drift monitoring that combines ML and geometric methods with statistical process control (SPC). We investigated different design choices, including methods for extracting feature representations and drift quantification for OOD detection in individual images and as an approach for input data monitoring. We evaluated the framework for both identifying OOD images and demonstrating the ability to detect shifts in data streams over time. We demonstrated a proof-of-concept via the following tasks: 1) differentiating axial vs. non-axial CT images, 2) differentiating CXR vs. other radiographic imaging modalities, and 3) differentiating adult CXR vs. pediatric CXR. For the identification of individual OOD images, our framework achieved high sensitivity in detecting OOD inputs: 0.980 in CT, 0.984 in CXR, and 0.854 in pediatric CXR. Our framework is also adept at monitoring data streams and identifying the time a drift occurred. In our simulations tracking drift over time, it effectively detected a shift from CXR to non-CXR instantly, a transition from axial to non-axial CT within a few days, and a drift from adult to pediatric CXRs within a day, all while maintaining a low false positive rate. 
Through additional experiments, we demonstrate that the framework is modality-agnostic and independent of the underlying model structure, making it highly customizable for specific applications and broadly applicable across different imaging modalities and deployed ML models.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
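The SPC component can be pictured as an individuals control chart over a scalar drift score per incoming image: estimate limits from an in-distribution baseline, then flag the first observation that falls outside them. The scores below are synthetic; in the paper's framework they would come from learned feature representations and geometric distances rather than being supplied directly:

```python
import numpy as np

def control_limits(baseline, k=3.0):
    """Shewhart-style limits: mean +/- k standard deviations of baseline scores."""
    mu, sigma = np.mean(baseline), np.std(baseline)
    return mu - k * sigma, mu + k * sigma

def first_out_of_control(stream, lo, hi):
    """Index of the first score outside the limits, or None if in control throughout."""
    for t, score in enumerate(stream):
        if score < lo or score > hi:
            return t
    return None

# Hypothetical per-image OOD scores from an in-distribution baseline period.
baseline = [0.10, 0.12, 0.11, 0.09, 0.10, 0.11, 0.10, 0.12, 0.09, 0.11]
lo, hi = control_limits(baseline)
drift_at = first_out_of_control([0.10, 0.11, 0.10, 0.50, 0.52], lo, hi)
```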
{"title":"Deep Learning for Automated Classification of Hip Hardware on Radiographs","authors":"Yuntong Ma, Justin L. Bauer, Acacia H. Yoon, Christopher F. Beaulieu, Luke Yoon, Bao H. Do, Charles X. Fang","doi":"10.1007/s10278-024-01263-y","DOIUrl":"https://doi.org/10.1007/s10278-024-01263-y","url":null,"abstract":"<p>Purpose: To develop a deep learning model for automated classification of orthopedic hardware on pelvic and hip radiographs, which can be clinically implemented to decrease radiologist workload and improve consistency among radiology reports.</p><p>Materials and Methods: Pelvic and hip radiographs from 4279 studies in 1073 patients were retrospectively obtained and reviewed by musculoskeletal radiologists. Two convolutional neural networks, EfficientNet-B4 and NFNet-F3, were trained to perform the image classification task into the following most represented categories: no hardware, total hip arthroplasty (THA), hemiarthroplasty, intramedullary nail, femoral neck cannulated screws, dynamic hip screw, lateral blade/plate, THA with additional femoral fixation, and post-infectious hip. Model performance was assessed on an independent test set of 851 studies from 262 patients and compared to the individual performance of five subspecialty-trained radiologists using leave-one-out analysis against an aggregate gold standard label.</p><p>Results: For multiclass classification, the area under the receiver operating characteristic curve (AUC) for NFNet-F3 was 0.99 or greater for all classes, and for EfficientNet-B4 it was 0.99 or greater for all classes except post-infectious hip, which had an AUC of 0.97. When compared with human observers, the models achieved an accuracy of 97%, which was non-inferior to four out of five radiologists and outperformed one radiologist. 
Cohen’s kappa coefficient for both models ranged from 0.96 to 0.97, indicating excellent inter-reader agreement.</p><p>Conclusion: A deep learning model can be used to classify a range of orthopedic hip hardware with high accuracy and comparable performance to subspecialty-trained radiologists.</p>","PeriodicalId":50214,"journal":{"name":"Journal of Digital Imaging","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
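Cohen's kappa, used here to quantify agreement, corrects the observed agreement rate for the agreement expected by chance from each rater's label frequencies. A self-contained sketch with toy labels; the hardware categories below are stand-ins for illustration:

```python
import numpy as np

def cohens_kappa(y1, y2):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the chance agreement implied by each rater's label frequencies."""
    y1, y2 = np.asarray(y1), np.asarray(y2)
    labels = np.unique(np.concatenate([y1, y2]))
    p_o = np.mean(y1 == y2)
    p_e = sum((y1 == lab).mean() * (y2 == lab).mean() for lab in labels)
    return (p_o - p_e) / (1.0 - p_e)

model = ["THA", "THA", "nail", "nail"]   # hypothetical model outputs
reader = ["THA", "THA", "nail", "THA"]   # hypothetical reference labels
```

With 3 of 4 labels matching and chance agreement of 0.5, kappa here is 0.5, well below the 0.96 to 0.97 reported for the trained models.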