{"title":"Improved Diagnostic Performance Using Dual-Energy CT-Derived Slope Parameter Images in Crohn's Disease.","authors":"Min Hong, Ziying Lin, Hua Zhong, Yan Zhang, Dan Yang, Sihui Zhong, Xiangrong Zhuang, Xin Yue","doi":"10.1007/s10278-024-01330-4","DOIUrl":"https://doi.org/10.1007/s10278-024-01330-4","url":null,"abstract":"<p><p>The objective of this study is to explore the image quality and diagnostic performance of dual-energy CT-derived slope parameter images (SPI) generated by an algorithm based on the slope function in the diagnosis of Crohn's disease (CD). Seventy-six CD patients and 53 disease-free control subjects who underwent dual-energy CT enterography were retrospectively collected. Portal venous phase 120kVp-like and virtual monoenergetic images at 40-100 keV (VMI<sub>40-100</sub>) were reconstructed. SPIs corresponding to the spectral curve between 40 and 100 keV (SPI<sub>40-100</sub>) were generated using Python. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of normal and abnormal intestinal walls were calculated. Image quality, noise, and contrast were independently scored by two radiologists using a 5-point scale. Four radiologists conducted CD diagnosis with three reading models (120kVp-like, 120kVp-like with optimal VMI, and 120kVp-like with SPI<sub>40-100</sub>). The diagnostic performances of the three reading models for diagnosing CD were evaluated using receiver operating characteristic (ROC) curves. The CNR in SPI<sub>40-100</sub> was higher than in the other images (P < 0.05). The subjective evaluation showed no statistical difference between the contrast of SPI<sub>40-100</sub> and VMI<sub>40</sub> (P > 0.05), but the contrast of these two images was higher than that of the other images (P < 0.05). The overall image quality score of VMI<sub>50</sub> was superior to that of the other images (P < 0.05).
The combined model of 120kVp-like with SPI<sub>40-100</sub> yielded the highest diagnostic confidence (cases rated with high confidence: 36, 58, 49, and 47 for radiologists 1, 2, 3, and 4) and the highest efficiency in diagnosing CD (areas under the ROC curve: 0.973, 0.977, 0.982, and 0.991, respectively). SPI<sub>40-100</sub> generated by the slope-function-based algorithm exhibited good image quality. The combined model of 120kVp-like with SPI<sub>40-100</sub> could improve radiologists' diagnostic efficiency and confidence in diagnosing CD.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142635739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
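The abstract above describes generating SPI in Python from the spectral curve between 40 and 100 keV. A minimal sketch of what such a slope computation might look like; the function name and exact normalization are assumptions, since the paper's implementation is not given:

```python
import numpy as np

def slope_parameter_image(vmi_low, vmi_high, kev_low=40.0, kev_high=100.0):
    """Per-voxel slope of the spectral HU curve between two virtual
    monoenergetic images: slope = (HU_low - HU_high) / (keV_high - keV_low).

    Iodine-enhancing tissue attenuates more strongly at low keV, so a
    steeper slope highlights enhancing (e.g., inflamed) bowel wall.
    """
    vmi_low = np.asarray(vmi_low, dtype=float)
    vmi_high = np.asarray(vmi_high, dtype=float)
    return (vmi_low - vmi_high) / (kev_high - kev_low)
```

A voxel measuring 100 HU at 40 keV and 40 HU at 100 keV would map to a slope of 1.0 HU/keV, while non-enhancing tissue with flat spectral behavior maps to 0.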
{"title":"MRI Radiomics-Based Machine Learning to Predict Lymphovascular Invasion of HER2-Positive Breast Cancer.","authors":"Fang Han, Wenfei Li, Yurui Hu, Huiping Wang, Tianyu Liu, Jianlin Wu","doi":"10.1007/s10278-024-01329-x","DOIUrl":"https://doi.org/10.1007/s10278-024-01329-x","url":null,"abstract":"<p><p>This study aims to develop and prospectively validate MRI-based radiomic models to predict lymphovascular invasion (LVI) status in patients with HER2-positive breast cancer. A total of 225 patients with HER2-positive breast cancer who preoperatively underwent breast MRI were selected, forming the training set (n = 99 LVI-positive, n = 126 LVI-negative). A prospective validation cohort included 130 patients with breast cancer from the Affiliated Zhongshan Hospital of Dalian University (n = 57 LVI-positive, n = 73 LVI-negative). A total of 390 radiomic features and eight conventional radiological characteristics were extracted. For feature selection, the LASSO regression model with tenfold cross-validation (CV) was employed to identify features with non-zero coefficients. The conventional radiological (CR) model was determined based on visual morphological (VM) features and the optimal radiomic features correlated with LVI, identified through multivariate logistic analyses. Subsequently, various machine learning (ML) models were developed using algorithms such as support vector machine (SVM), k-nearest neighbor (KNN), gradient boosting machine (GBM), and random forest (RF). The performances of the ML and CR models were then compared. The results show that the AUCs of the CR model in the training and validation sets were 0.81 (95% confidence interval [CI], 0.74-0.86) and 0.82 (95% CI, 0.69-0.89), respectively. The ML model achieved the best performance, with AUCs of 0.96 (95% CI, 0.99-1.00) in the training set and 0.95 (95% CI, 0.89-0.96) in the validation set. There were significant differences between the CR and ML models in predicting LVI status.
Our study demonstrated that the machine learning models, which do not necessarily rely on a priori knowledge of visual morphology, exhibited superior performance in predicting LVI status from pretreatment MRI compared to the CR model.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142633013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
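The feature-selection step described above (LASSO with tenfold CV, keeping features with non-zero coefficients) can be sketched with scikit-learn; the helper name and the standardization step are assumptions, not the authors' code:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def select_radiomic_features(X, y, n_folds=10, random_state=0):
    """Return indices of features whose LASSO coefficients are non-zero,
    with the penalty strength chosen by n_folds-fold cross-validation."""
    X_std = StandardScaler().fit_transform(X)  # LASSO is scale-sensitive
    model = LassoCV(cv=n_folds, random_state=random_state).fit(X_std, y)
    return np.flatnonzero(model.coef_)
```

On data where only one feature drives the target, the selected index set should contain that feature; in a radiomics pipeline the surviving columns would then feed the downstream SVM/KNN/GBM/RF classifiers.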
{"title":"Technical Note: Neural Network Architectures for Self-Supervised Body Part Regression Models with Automated Localized Segmentation Application.","authors":"Michael Fei, Alan B McMillan","doi":"10.1007/s10278-024-01319-z","DOIUrl":"https://doi.org/10.1007/s10278-024-01319-z","url":null,"abstract":"<p><p>The advancement of medical image deep learning necessitates tools that can accurately identify body regions from whole-body scans to serve as an essential pre-processing step for downstream tasks. Typically, these deep learning models rely on labeled data and supervised learning, which is labor-intensive. However, the emergence of self-supervised learning is revolutionizing the field by eliminating the need for labels. The purpose of this study was to compare neural network architectures of self-supervised models that produce a body part regression (BPR) slice score to aid in the development of anatomically localized segmentation models. VGG, ResNet, DenseNet, ConvNeXt, and EfficientNet BPR models were implemented in the MONAI/PyTorch framework. Landmark organs were correlated to slice scores, and mean absolute error (MAE) was calculated between the predicted and actual slices of various organ landmarks. Four localized DynUNet segmentation models (thorax, upper abdomen, lower abdomen, and pelvis) were developed using the BPR slice scores. Dice similarity coefficient (DSC) was compared between the localized and baseline segmentation models. The best-performing BPR model was the EfficientNet architecture, with an overall MAE of 3.18, compared to the VGG baseline model with an MAE of 6.29. The localized segmentation model significantly outperformed the baseline in 16 out of 20 organs, with a DSC of 0.88. Enhanced neural networks such as EfficientNet achieved a large performance increase over baseline architectures in localizing anatomical structures in CT for the BPR task.
Utilizing the BPR slice score is shown to be effective in anatomically localized segmentation tasks, with improved performance.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142634978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
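The BPR evaluation above reduces to a mean absolute error between predicted and ground-truth landmark slice positions; a trivial sketch (function and argument names assumed):

```python
def landmark_mae(predicted, actual):
    """Mean absolute error between predicted and ground-truth
    landmark slice positions (e.g., the slice containing an organ landmark)."""
    if len(predicted) != len(actual):
        raise ValueError("landmark lists must be the same length")
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)
```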
{"title":"Automatic Classification of Focal Liver Lesions Based on Multi-Sequence MRI.","authors":"Mingfang Hu, Shuxin Wang, Mingjie Wu, Ting Zhuang, Xiaoqing Liu, Yuqin Zhang","doi":"10.1007/s10278-024-01326-0","DOIUrl":"https://doi.org/10.1007/s10278-024-01326-0","url":null,"abstract":"<p><p>Accurate and automated diagnosis of focal liver lesions is critical for effective radiological practice and patient treatment planning. This study presents a deep learning model specifically developed for classifying focal liver lesions across eight different MRI sequences, categorizing them into seven distinct classes. The model includes a feature extraction module that derives multi-level representations of the lesions, a feature fusion attention module to integrate contextual information from the various sequences, and an attention-guided data augmentation module to enrich the training dataset. The proposed model achieved a patient-wise classification accuracy of 0.9302 and a lesion-wise accuracy of 0.8592, along with an F1-score of 0.8395, a recall of 0.8296, and a precision of 0.8551. These findings demonstrate the effectiveness of combining multi-sequence MRI with advanced deep learning methodologies, providing a robust tool to support radiologists in accurately classifying liver lesions in clinical settings.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142635735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
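The lesion-wise metrics reported above are related: F1 is the harmonic mean of precision and recall. A quick check (note that the paper's 0.8395 is presumably a macro-average over the seven classes, so it need not equal the harmonic mean of the aggregate precision and recall):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```

Plugging in the reported aggregate precision (0.8551) and recall (0.8296) gives roughly 0.842, close to but not identical with the reported 0.8395.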
{"title":"The Impact of Artificial Intelligence on Radiologists' Reading Time in Bone Age Radiograph Assessment: A Preliminary Retrospective Observational Study.","authors":"Sejin Jeong, Kyunghwa Han, Yaeseul Kang, Eun-Kyung Kim, Kyungchul Song, Shreyas Vasanawala, Hyun Joo Shin","doi":"10.1007/s10278-024-01323-3","DOIUrl":"https://doi.org/10.1007/s10278-024-01323-3","url":null,"abstract":"<p><p>To evaluate the real-world impact of artificial intelligence (AI) on radiologists' reading time during bone age (BA) radiograph assessments. Patients (<19 years old) who underwent left-hand BA radiographs between December 2021 and October 2023 were retrospectively included. A commercial AI software was installed from October 2022. Radiologists' reading times, automatically recorded in the PACS log, were compared between the AI-unaided and AI-aided periods using linear regression tests, and factors affecting reading time were identified. A total of 3643 radiographs (M:F = 1295:2348, mean age 9.12 ± 2.31 years) were included and read by three radiologists, with 2937 radiographs (80.6%) in the AI-aided period. Overall reading times were significantly shorter in the AI-aided period than in the AI-unaided period (mean 17.2 ± 12.9 seconds vs. mean 22.3 ± 14.7 seconds, p < 0.001). Staff reading times significantly decreased in the AI-aided period (mean 15.9 ± 11.4 seconds vs. mean 19.9 ± 13.4 seconds, p < 0.001), while resident reading times increased (mean 38.3 ± 16.4 seconds vs. mean 33.6 ± 15.3 seconds, p = 0.013). The use of AI and years of experience in radiology were significant factors affecting reading time (all p ≤ 0.001). The decrease in reading time with increasing experience was larger when AI was utilized (-1.151 for AI-unaided vs. -1.866 for AI-aided, difference = -0.715, p < 0.001). In terms of AI exposure time, the staff's reading time decreased by 0.62 seconds per month (standard error 0.07, p < 0.001) during the AI-aided period.
The reading time of radiologists for BA assessment was influenced by AI. The time-saving effect of utilizing AI became more pronounced as the radiologists' experience and AI exposure time increased.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142635281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ChatGPT vs Gemini: Comparative Accuracy and Efficiency in CAD-RADS Score Assignment from Radiology Reports.","authors":"Matthew Silbergleit, Adrienn Tóth, Jordan H Chamberlin, Mohamed Hamouda, Dhiraj Baruah, Sydney Derrick, U Joseph Schoepf, Jeremy R Burt, Ismail M Kabakus","doi":"10.1007/s10278-024-01328-y","DOIUrl":"https://doi.org/10.1007/s10278-024-01328-y","url":null,"abstract":"<p><p>This study aimed to evaluate the accuracy and efficiency of ChatGPT-3.5, ChatGPT-4o, Google Gemini, and Google Gemini Advanced in generating CAD-RADS scores based on radiology reports. This retrospective study analyzed the reports of 100 consecutive coronary computed tomography angiography examinations performed between March 15, 2024, and April 1, 2024, at a single tertiary center. Each report containing a radiologist-assigned CAD-RADS score was processed using four large language models (LLMs) without fine-tuning. The findings section of each report was input into the LLMs, and the models were tasked with generating CAD-RADS scores. The accuracy of LLM-generated scores was compared to the radiologist's score. Additionally, the time taken by each model to complete the task was recorded. Statistical analyses included the Mann-Whitney U test and interobserver agreement using unweighted Cohen's Kappa and Krippendorff's Alpha. ChatGPT-4o demonstrated the highest accuracy, correctly assigning CAD-RADS scores in 87% of cases (κ = 0.838, α = 0.886), followed by Gemini Advanced with 82.6% accuracy (κ = 0.784, α = 0.897). ChatGPT-3.5, although the fastest (median time = 5 s), was the least accurate (50.5% accuracy, κ = 0.401, α = 0.787). Gemini exhibited a higher failure rate (12%) than the other models, with Gemini Advanced slightly improving upon its predecessor. ChatGPT-4o outperformed the other LLMs in both accuracy and agreement with radiologist-assigned CAD-RADS scores, though ChatGPT-3.5 was significantly faster.
Despite their potential, current publicly available LLMs require further refinement before being deployed for clinical decision-making in CAD-RADS scoring.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142635736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
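Agreement in the study above is measured with unweighted Cohen's kappa, which corrects raw percent agreement for chance. A self-contained sketch of that statistic (for, e.g., LLM-assigned vs. radiologist-assigned CAD-RADS categories):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa: chance-corrected agreement between two
    categorical raters. 1.0 = perfect agreement, 0.0 = chance level."""
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("raters must score the same non-empty set of cases")
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement under independent raters with these marginals
    expected = sum(counts_a[k] * counts_b.get(k, 0) for k in counts_a) / (n * n)
    if expected == 1.0:  # degenerate case: both raters constant and identical
        return 1.0
    return (observed - expected) / (1.0 - expected)
```

Two raters who agree on half the cases but whose marginals already predict 50% agreement by chance score kappa = 0, which is why kappa can be far below raw accuracy.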
{"title":"Pulmonary CT Registration Network Based on Deformable Cross Attention.","authors":"Meirong Ren, Peng Xue, Huizhong Ji, Zhili Zhang, Enqing Dong","doi":"10.1007/s10278-024-01324-2","DOIUrl":"https://doi.org/10.1007/s10278-024-01324-2","url":null,"abstract":"<p><p>Current Transformer structures utilize the self-attention mechanism to model global contextual relevance within an image, which has influenced medical image registration. However, existing Transformer-based approaches handle large-deformation lung CT registration relatively simply: they focus on single-image feature representation and neglect attention mechanisms that capture cross-image correspondence, which hinders further improvement in registration performance. To address these limitations, we propose a novel cascaded registration method, a Cascaded Swin Deformable Cross Attention Transformer based U-shape structure (SD-CATU), for the challenge of large deformations in lung CT registration. In SD-CATU, we introduce a Cross Attention-based Transformer (CAT) block that incorporates the Shifted Regions Multihead Cross-attention (SR-MCA) mechanism to flexibly exchange feature information and thus reduce computational complexity. In addition, a consistency constraint in the loss function is used to ensure the preservation of topology and inverse consistency of the transformations. Experiments with public lung datasets demonstrate that the cascaded SD-CATU outperforms current state-of-the-art registration methods (Dice similarity coefficient of 93.19% and target registration error of 0.98 mm).
The results further highlight the potential for obtaining excellent registration accuracy while assuring desirable smoothness and consistency in the deformed images.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142633863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
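The Dice similarity coefficient used as the headline metric above (and in the segmentation study earlier in this list) is straightforward to compute from binary masks:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient, 2|A∩B| / (|A| + |B|), for binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:  # both masks empty: define DSC as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom
```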
{"title":"Urine Sediment Detection Algorithm Based on Channel Enhancement and Deformable Convolution.","authors":"Shihao Zhang, Xu Bao, Yun Wang, Feng Lin","doi":"10.1007/s10278-024-01321-5","DOIUrl":"https://doi.org/10.1007/s10278-024-01321-5","url":null,"abstract":"<p><p>Urine sediment detection is a vital method in clinical urine analysis for evaluating an individual's kidney and urinary system health, as well as identifying potential diseases. However, targets of the same category in urine sediment images exhibit diverse shapes, which poses a considerable challenge to the accurate identification of the visible components within the images. We approach urine sediment detection as an object detection task and introduce the specialized YOLOv7-CSD algorithm for this purpose. In particular, we integrate a channel enhancement feature pyramid network (CE-FPN) and selective kernel (SK) into the YOLOv7 model to address the confusion in classification and identification tasks caused by the feature aliasing effect of the feature pyramid network (FPN). Furthermore, we enhance the efficient layer aggregation networks (ELAN) by adding a second channel, enabling the model to acquire a more extensive set of feature information. On top of this, we introduce the deformable convolution v3 (DCNv3) operator, allowing the model to dynamically adjust its receptive field and address the issue of variable shapes.
Tested on the USE dataset and a dataset of urine crystals, YOLOv7-CSD achieves accuracies of 92.8% and 89.6%, respectively.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142635188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Radiomics-Based Diagnosis in Dentomaxillofacial Radiology: A Systematic Review.","authors":"Özge Dönmez Tarakçı, Hatice Cansu Kış, Hakan Amasya, İrem Öztürk, Emre Karahan, Kaan Orhan","doi":"10.1007/s10278-024-01307-3","DOIUrl":"https://doi.org/10.1007/s10278-024-01307-3","url":null,"abstract":"<p><p>Radiomics is a quantitative tool for digital image analysis. This systematic review investigates the scientific literature to evaluate the potential implications of Radiomics analysis in Dentomaxillofacial Radiology (DMFR). Studies on Radiomics applications in DMFR were included if they involved human samples, were in vivo, and (for case reports/series) comprised ≥5 samples; case reports/series with <5 samples, articles not in English, abstracts without full text, and studies published before 2015 were excluded. Fifty-one articles were selected from 3789 records. The QUADAS-2 tool was used for risk-of-bias assessment. The accuracy of predicting dentomaxillofacial pathologies was considered the primary outcome, and the Radiomics modeling type the secondary outcome. A meta-analysis could not be performed due to the lack of information and standardization among the reported accuracies. The reported accuracies ranged between 0.66 and 99.65%. Logistic regression (n = 6) was the most common Radiomics modeling type, followed by Support Vector Machine and Decision Tree (n = 5). Second-order statistics (n = 38) was the most common type of Radiomics application, followed by first-order (n = 26), higher-order (n = 20), and shape-based (n = 15) statistics. Further work is needed to increase standardization in the Radiomics workflow. Quantitative image analysis is an alternative to conventional visual radiographic evaluation. Radiomics systems depend on elements such as imaging modality, feature type, data mining, or statistical method.
Radiomics applications do not justify digital transformation on their own, but the potential of their integration into the digital workflow is considerable.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142634143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impact of Harmonization on MRI Radiomics Feature Variability Across Preprocessing Methods for Parkinson's Disease Motor Subtype Classification.","authors":"Mehdi Panahi, Mahboube Sadat Hosseini","doi":"10.1007/s10278-024-01320-6","DOIUrl":"https://doi.org/10.1007/s10278-024-01320-6","url":null,"abstract":"<p><p>This study aimed to assess the reproducibility of MRI-derived radiomic features across multiple preprocessing methods for classifying Parkinson's disease (PD) motor subtypes and to evaluate the impact of ComBat harmonization on feature stability and machine learning performance. T1-weighted MRI scans from 140 PD patients (70 tremor-dominant and 70 postural instability gait difficulty) and 70 healthy controls were obtained from the Parkinson's Progression Markers Initiative (PPMI) database, acquired using different scanner models. Radiomic features were extracted from 16 brain regions using various preprocessing pipelines. ComBat harmonization was applied using a combined batch variable incorporating both scanner models and preprocessing methods. Intraclass correlation coefficients (ICC) and Kruskal-Wallis tests assessed feature reproducibility before and after harmonization. Feature selection was performed using Linear Support Vector Classifier with L1 regularization. Support vector machine classifiers were used for PD subtype classification. ComBat harmonization significantly improved feature reproducibility across all feature groups. The percentage of features showing excellent robustness (ICC ≥ 0.90) increased from 40.2 to 56.3% after harmonization. First-order statistic features showed the highest robustness, with 71.11% demonstrating excellent ICC after harmonization. The proportion of features significantly affected by preprocessing methods was reduced following harmonization. Classification accuracy improved dramatically, from a range of 34-75% before harmonization to 89-96% after harmonization across all preprocessing methods. 
AUC values similarly increased from 0.28-0.87 to 0.95-0.99 after harmonization. ComBat harmonization significantly enhanced the reproducibility of radiomic features across preprocessing methods and improved PD motor subtype classification performance. This study highlights the importance of harmonization in radiomics research for PD and suggests potential clinical applications in personalized treatment planning.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142635738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
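ComBat, used above to pool scanner models and preprocessing pipelines into one batch variable, removes additive and multiplicative batch effects per feature; real implementations (e.g., neuroCombat) additionally shrink the per-batch estimates with empirical Bayes. A deliberately simplified location-scale sketch of the underlying idea, with the function name assumed:

```python
import numpy as np

def align_batches(features, batches):
    """Map each batch's per-feature mean/std onto the grand mean/std.

    A much-simplified stand-in for ComBat harmonization: it removes
    additive (location) and multiplicative (scale) batch effects but
    omits ComBat's empirical Bayes shrinkage of the batch estimates.
    """
    X = np.asarray(features, dtype=float)
    batch_ids = np.asarray(batches)
    grand_mean = X.mean(axis=0)
    grand_std = X.std(axis=0)
    out = np.empty_like(X)
    for b in np.unique(batch_ids):
        mask = batch_ids == b
        m = X[mask].mean(axis=0)
        s = X[mask].std(axis=0)
        s = np.where(s > 0, s, 1.0)  # guard against constant features in a batch
        out[mask] = (X[mask] - m) / s * grand_std + grand_mean
    return out
```

After alignment, every batch shares the same per-feature mean and spread, which is the property that drives the ICC improvements reported above.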