{"title":"Federated Learning Framework for Brain Tumor Detection Using MRI Images in Non-IID Data Distributions.","authors":"M D Zahin Muntaqim, Tangin Amir Smrity","doi":"10.1007/s10278-025-01484-9","DOIUrl":"https://doi.org/10.1007/s10278-025-01484-9","url":null,"abstract":"<p><p>Brain tumor detection from medical images, especially magnetic resonance imaging (MRI) scans, is a critical task in early diagnosis and treatment planning. Traditional machine learning approaches often rely on centralized data, raising concerns about data privacy, security, and the difficulty of obtaining large annotated datasets. Federated learning (FL) has emerged as a promising solution for training models across decentralized devices while maintaining data privacy. However, challenges remain in dealing with non-IID (not independent and identically distributed) data, which is common in real-world scenarios. In this research, we used a client-server-based federated learning framework for brain tumor detection using MRI images, leveraging VGG19 as the backbone model. To improve clinical relevance and model interpretability, we included explainability techniques, particularly Grad-CAM. We trained our model across four clients with non-IID data distribution to simulate real-world conditions. For performance evaluation, we used a centralized test dataset consisting of 20% of the original data, used collectively to evaluate model performance after the federated learning rounds were completed. Using a separate test dataset ensures that all models are evaluated on the same data, making comparisons fair. Since the test dataset is not part of the FL training process, it does not violate the privacy-preserving nature of FL. 
The experimental results demonstrate that the VGG19 model achieves high test accuracies of 97.18% (FedAvg), 98.24% (FedProx), and 98.45% (Scaffold), outperforming other state-of-the-art models and showcasing the effectiveness of federated learning in handling distributed and non-IID data. Our findings highlight the potential of federated learning to address privacy concerns in medical image analysis while maintaining high performance even in non-IID settings. This approach provides a promising direction for future research in privacy-preserving AI for healthcare applications.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143702620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
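For context on the aggregation strategies compared in the record above: FedAvg, in its standard form, averages client model parameters weighted by each client's sample count, which is exactly what makes the non-IID setting interesting. A minimal sketch follows; the function name and data layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Standard FedAvg server step: average each parameter tensor across
    clients, weighted by the number of local training samples.

    client_weights: one list of np.ndarray parameter tensors per client.
    client_sizes: local sample counts (non-IID shards typically differ).
    """
    total = float(sum(client_sizes))
    averaged = []
    for layer in range(len(client_weights[0])):
        acc = np.zeros_like(client_weights[0][layer], dtype=np.float64)
        for weights, n in zip(client_weights, client_sizes):
            acc += (n / total) * weights[layer]
        averaged.append(acc)
    return averaged
```

FedProx and Scaffold keep essentially this server step but change the client side: FedProx adds a proximal term to the local loss, and Scaffold adds control variates to correct client drift.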
{"title":"Highly Accurate Occupational Pneumoconiosis Staging via Dark Channel Prior-Inspired Lesion Area Enhancement.","authors":"Weiling Li, Tianci Zhou, Ani Dong, Liang Xiong, Qianhao Luo, Ling Mou, Xin Liu","doi":"10.1007/s10278-025-01472-z","DOIUrl":"https://doi.org/10.1007/s10278-025-01472-z","url":null,"abstract":"<p><p>Occupational pneumoconiosis (OP) staging is the core of OP diagnosis. It is essentially an image classification task that assesses a patient's lung condition by analyzing the chest X-ray. To perform artificial intelligence-assisted OP staging, chest X-ray film representational learning and classification are commonly adopted, where a convolutional neural network (CNN) has proven to be very efficient. However, unlike commonly encountered image classification tasks, OP staging relies heavily on the profusion level of opacities, i.e., the OP lesion reflection on the X-ray film. The OP lesions overlap with other tissues in the chest, making the opacities hard to represent with a standard CNN and thus leading to inaccurate staging results. Inspired by the similarity between OP lesions and haze, i.e., both appear as dust suspended in a space, this study proposes a dark channel prior-inspired lesion area enhancement (DCP-LAE)-based OP staging method with high accuracy. Its ideas are twofold: a) enhancing the OP lesion areas with an OP X-ray film restoration method inspired by the dark channel prior-based de-hazing method, and b) implementing multiple feature fusion via a bi-branch network structure to obtain high staging accuracy. 
Experimental results from real OP cases collected in hospitals demonstrate that the DCP-LAE-based OP staging model achieves an accuracy of 83.8%, surpassing existing state-of-the-art models.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143702621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
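The dark channel prior that inspires DCP-LAE comes from single-image de-hazing: in a haze-free natural image, most local patches contain at least one channel with near-zero intensity, so the patch-wise minimum (the "dark channel") lights up in hazy, dust-like regions. Below is a sketch of the classic dark channel computation only; the abstract describes the authors' X-ray adaptation at a high level, so this is not their method.

```python
import numpy as np

def dark_channel(image, patch=3):
    """Dark channel of an H x W x 3 image in [0, 1]: per-pixel minimum
    over channels, then a local minimum filter over a patch x patch window."""
    min_rgb = image.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    out = np.empty_like(min_rgb)
    for i in range(min_rgb.shape[0]):
        for j in range(min_rgb.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```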
{"title":"A Study of Why We Need to Reassess Full Reference Image Quality Assessment with Medical Images.","authors":"Anna Breger, Ander Biguri, Malena Sabaté Landman, Ian Selby, Nicole Amberg, Elisabeth Brunner, Janek Gröhl, Sepideh Hatamikia, Clemens Karner, Lipeng Ning, Sören Dittmer, Michael Roberts, Carola-Bibiane Schönlieb","doi":"10.1007/s10278-025-01462-1","DOIUrl":"https://doi.org/10.1007/s10278-025-01462-1","url":null,"abstract":"<p><p>Image quality assessment (IQA) is indispensable in clinical practice to ensure high standards, as well as in the development stage of machine learning algorithms that operate on medical images. The popular full reference (FR) IQA measures PSNR and SSIM are known to work successfully in many natural imaging tasks, but discrepancies in medical scenarios have been reported in the literature, highlighting the gap between development and actual clinical application. Such inconsistencies are not surprising, as medical images have very different properties than natural images, and PSNR and SSIM have neither been targeted nor properly tested for medical images. This may cause unforeseen problems in clinical applications due to misjudgement of novel methods. This paper provides a structured and comprehensive overview of examples where PSNR and SSIM prove to be unsuitable for the assessment of novel algorithms using different kinds of medical images, including real-world MRI, CT, OCT, X-Ray, digital pathology and photoacoustic imaging data. Improvement is therefore urgently needed, in particular in this era of AI, to increase reliability and explainability in machine learning for medical imaging and beyond. 
Lastly, we provide ideas for future research and suggest guidelines for the use of FR-IQA measures applied to medical images.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143702583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
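Of the two measures this record scrutinizes, PSNR is the simpler: a log-scaled inverse of the mean squared error relative to the image's dynamic range, which is one reason it transfers poorly to modalities whose intensity ranges and noise statistics differ from natural images. A minimal reference implementation for illustration (for SSIM, a windowed implementation such as scikit-image's `structural_similarity` is preferable to hand-rolling):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    test image, both scaled to [0, data_range]."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)
```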
{"title":"Evaluation of a Retrieval-Augmented Generation-Powered Chatbot for Pre-CT Informed Consent: a Prospective Comparative Study.","authors":"Felix Busch, Lukas Kaibel, Hai Nguyen, Tristan Lemke, Sebastian Ziegelmayer, Markus Graf, Alexander W Marka, Lukas Endrös, Philipp Prucker, Daniel Spitzl, Markus Mergen, Marcus R Makowski, Keno K Bressem, Sebastian Petzoldt, Lisa C Adams, Tim Landgraf","doi":"10.1007/s10278-025-01483-w","DOIUrl":"https://doi.org/10.1007/s10278-025-01483-w","url":null,"abstract":"<p><p>This study aims to investigate the feasibility, usability, and effectiveness of a Retrieval-Augmented Generation (RAG)-powered Patient Information Assistant (PIA) chatbot for pre-CT information counseling compared to the standard physician consultation and informed consent process. This prospective comparative study included 86 patients scheduled for CT imaging between November and December 2024. Patients were randomly assigned to either the PIA group (n = 43), who received pre-CT information via the PIA chat app, or the control group (n = 43), which received the standard doctor-led consultation. Patient satisfaction, information clarity and comprehension, and concerns were assessed using six ten-point Likert-scale questions after information counseling with the PIA or the doctor's consultation. Additionally, consultation duration was measured, and PIA group patients were asked about their preference for pre-CT consultation, while two radiologists rated each PIA chat in five categories. Both groups reported similarly high ratings for information clarity (PIA: 8.64 ± 1.69; control: 8.86 ± 1.28; p = 0.82) and overall comprehension (PIA: 8.81 ± 1.40; control: 8.93 ± 1.61; p = 0.35). However, the doctor consultation group showed greater effectiveness in alleviating patient concerns (8.30 ± 2.63 versus 6.46 ± 3.29; p = 0.003). 
The PIA group demonstrated significantly shorter subsequent consultation times (median: 120 s [interquartile range (IQR): 100-140] versus 195 s [IQR: 170-220]; p = 0.04). Both radiologists rated the PIA highly on overall quality, scientific and clinical evidence, clinical usefulness and relevance, consistency, and up-to-dateness. The RAG-powered PIA effectively provided pre-CT information while significantly reducing physician consultation time. While both methods achieved comparable patient satisfaction and comprehension, physicians were more effective at addressing worries or concerns regarding the examination.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143677437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Learning Neural Network Based on PSO for Leukemia Cell Disease Diagnosis from Microscope Images.","authors":"Hamsa Almahdawi, Ayhan Akbas, Javad Rahebi","doi":"10.1007/s10278-025-01474-x","DOIUrl":"https://doi.org/10.1007/s10278-025-01474-x","url":null,"abstract":"<p><p>Leukemia is a type of cancer characterized by the proliferation of abnormal, immature White Blood Cells (WBCs) produced in the bone marrow, which subsequently circulate throughout the body. Prompt leukemia diagnosis is vital in determining the optimal treatment plan, as different types of leukemia require distinct treatments. Early detection is therefore instrumental in facilitating the use of the most effective therapies. The identification of leukemia cells from microscopic images is considered a challenging task due to the complexity of the image features. This paper presents a deep learning neural network approach that utilizes the Particle Swarm Optimization (PSO) method to diagnose leukemia cell disease from microscope images. Initially, deep learning is employed to extract features from the leukemia images, which are then optimized by the PSO method to select the most relevant features for machine learning. Three different machine learning algorithms, namely the Decision Tree (DT), Support Vector Machine (SVM), and K-Nearest Neighbors (K-NN) methods, are utilized to analyze the selected features. With PSO feature selection and GoogLeNet features, the experiments demonstrate accuracies of 97.4%, 92.3%, and 85.9% for the SVM, K-NN, and DT algorithms, respectively. With Ant Colony Optimization (ACO) feature selection and ResNet-50 features, the proposed method achieved accuracies of 100%, 94.9%, and 92.3% for the SVM, K-NN, and DT methods, respectively. 
These findings suggest that the proposed approach is a promising tool for accurate diagnosis of leukemia cell disease using microscopic images.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143672086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated Detection of Hydrocephalus in Pediatric Head Computed Tomography Using VGG 16 CNN Deep Learning Architecture and Based Automated Segmentation Workflow for Ventricular Volume Estimation.","authors":"Hamza Sekkat, Abdellah Khallouqi, Omar El Rhazouani, Abdellah Halimi","doi":"10.1007/s10278-025-01482-x","DOIUrl":"https://doi.org/10.1007/s10278-025-01482-x","url":null,"abstract":"<p><p>Hydrocephalus, particularly congenital hydrocephalus in infants, remains underexplored in deep learning research. While deep learning has been widely applied to medical image analysis, few studies have specifically addressed the automated classification of hydrocephalus. This study proposes a convolutional neural network (CNN) model based on the VGG16 architecture to detect hydrocephalus in infant head CT images. The model integrates an automated method for ventricular volume extraction, applying windowing, histogram equalization, and thresholding techniques to segment the ventricles from surrounding brain structures. Morphological operations refine the segmentation, and contours are extracted for visualization and volume measurement. The dataset consists of 105 head CT scans, each with 60 slices covering the ventricular volume, resulting in 6300 slices. Manual segmentation by three trained radiologists served as the reference standard. The automated method showed a high correlation with manual measurements, with R<sup>2</sup> values ranging from 0.94 to 0.99. The mean absolute percentage error (MAPE) ranged from 3.99 to 11.13%, while the relative root mean square error (RRMSE) ranged from 4.56 to 13.74%. To improve model robustness, the dataset was preprocessed, normalized, and augmented with rotation, shifting, zooming, and flipping. The VGG16-based CNN used pre-trained convolutional layers with additional fully connected layers for classification, predicting hydrocephalus or normal labels. 
Performance evaluation using a multi-split strategy (15 independent splits) achieved a mean accuracy of 90.4% ± 1.2%. This study presents an automated approach for ventricular volume extraction and hydrocephalus detection, offering a promising tool for clinical and research applications with high accuracy and reduced observer bias.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143665814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
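The volume-agreement metrics reported in the record above are standard; for instance, the mean absolute percentage error compares automated ventricular volumes against the radiologists' manual reference. A minimal sketch with assumed variable names (not the authors' code):

```python
import numpy as np

def mape(reference, estimate):
    """Mean absolute percentage error (%) of estimated volumes against
    manual reference volumes."""
    reference = np.asarray(reference, dtype=np.float64)
    estimate = np.asarray(estimate, dtype=np.float64)
    return 100.0 * np.mean(np.abs(estimate - reference) / np.abs(reference))
```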
{"title":"Radiology AI Lab: Evaluation of Radiology Applications with Clinical End-Users.","authors":"Olivier Paalvast, Merlijn Sevenster, Omar Hertgers, Hubrecht de Bliek, Victor Wijn, Vincent Buil, Jaap Knoester, Sandra Vosbergen, Hildo Lamb","doi":"10.1007/s10278-025-01453-2","DOIUrl":"https://doi.org/10.1007/s10278-025-01453-2","url":null,"abstract":"<p><p>Despite the approval of over 200 artificial intelligence (AI) applications for radiology in the European Union, widespread adoption in clinical practice remains limited. Current assessments of AI applications often rely on post-hoc evaluations, lacking the granularity to capture real-time radiologist-AI interactions. The purpose of the study is to realise the Radiology AI lab for real-time, objective measurement of the impact of AI applications on radiologists' workflows. We proposed the user-state sensing framework (USSF) to structure the sensing of radiologist-AI interactions in terms of personal, interactional, and contextual states. Guided by the USSF, a lab was established using three non-invasive biometric measurement techniques: eye-tracking, heart rate monitoring, and facial expression analysis. We conducted a pilot test with four radiologists of varying experience levels, who read ultra-low-dose (ULD) CT cases in (1) standard PACS and (2) manually annotated (to mimic AI) PACS workflows. Interpretation time, eye-tracking metrics, heart rate variability (HRV), and facial expressions were recorded and analysed. The Radiology AI lab was successfully realised as an initial physical iteration of the USSF at a tertiary referral centre. Radiologists participating in the pilot test read 32 ULDCT cases (mean age, 52 years ± 23 (SD); 17 male; 16 cases with abnormalities). Cases were read on average in 4.1 ± 2.2 min (standard PACS) and 3.9 ± 1.9 min (AI-annotated PACS), with no significant difference (p = 0.48). 
Three out of four radiologists showed significant shifts (p < 0.02) in eye-tracking metrics, including saccade duration, saccade quantity, fixation duration, fixation quantity, and pupil diameter, when using the AI-annotated workflow. These changes align with prior findings linking such metrics to increased competency and reduced cognitive load, suggesting a more efficient visual search strategy in AI-assisted interpretation. Although HRV metrics did not correlate with experience, when combined with facial expression analysis, they helped identify key moments during the pilot test. The Radiology AI lab was successfully realised, implementing personal, interactional, and contextual states of the user-state sensing framework, enabling objective analysis of radiologists' workflows, and effectively capturing relevant biometrics. Future work will focus on expanding sensing of the contextual state of the user-state sensing framework, refining baseline determination, and continuing investigation of AI-enabled tools in radiology workflows.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143653020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lumos: Software for Multi-level Multi-reader Comparison of Cardiovascular Magnetic Resonance Late Gadolinium Enhancement Scar Quantification.","authors":"Philine Reisdorf, Jonathan Gavrysh, Clemens Ammann, Maximilian Fenski, Christoph Kolbitsch, Steffen Lange, Anja Hennemuth, Jeanette Schulz-Menger, Thomas Hadler","doi":"10.1007/s10278-025-01437-2","DOIUrl":"https://doi.org/10.1007/s10278-025-01437-2","url":null,"abstract":"<p><p>Cardiovascular magnetic resonance imaging (CMR) offers state-of-the-art myocardial tissue differentiation. The CMR technique late gadolinium enhancement (LGE) currently provides the noninvasive gold standard for the detection of myocardial fibrosis. Typically, thresholding methods are used for fibrotic scar tissue quantification. A major challenge for standardized CMR assessment is large variations in the estimated scar for different methods. To improve quality assurance for LGE scar quantification, a multi-reader comparison tool \"Lumos\" was developed to support quality control for scar quantification methods. The thresholding methods and an exact rasterization approach were implemented, as well as a graphical user interface (GUI) with statistical and case-specific tabs. Twenty LGE cases were considered, half of them including artifacts, and clinical results were computed for eight scar quantification methods. Lumos was successfully implemented as a multi-level multi-reader comparison software, and differences between methods can be seen in the statistical results. Histograms visualize confounding effects of different methods. Connecting the statistical level with the case level allows for backtracking statistical differences to sources of differences in the threshold calculation. Being able to visualize the underlying groundwork for the different methods in the myocardial histogram gives the opportunity to identify causes for different thresholds. 
Lumos showed the differences in the clinical results between cases with artifacts and cases without artifacts. A video demonstration of Lumos is offered as supplementary material 1. Lumos allows for a multi-reader comparison for LGE scar quantification that offers insights into the origin of reader differences.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143653110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI-Based 3D Liver Segmentation and Volumetric Analysis in Living Donor Data.","authors":"Sae Byeol Mun, Sang Tae Choi, Young Jae Kim, Kwang Gi Kim, Won Suk Lee","doi":"10.1007/s10278-025-01468-9","DOIUrl":"https://doi.org/10.1007/s10278-025-01468-9","url":null,"abstract":"<p><p>This study investigated the application of deep learning for 3-dimensional (3D) liver segmentation and volumetric analysis in living donor liver transplantation. Using abdominal computed tomography data from 55 donors, this study aimed to evaluate the liver segmentation performance of various U-Net-based models, including 3D U-Net, RU-Net, DU-Net, and RDU-Net, before and after hepatectomy. Accurate liver volume measurement is critical in liver transplantation to ensure adequate functional recovery and minimize postoperative complications. The models were trained and validated using a fivefold cross-validation approach. Performance metrics such as Dice similarity coefficient (DSC), recall, specificity, precision, and accuracy were used to assess the segmentation results. The highest segmentation accuracy was achieved in preoperative images with a DSC of 95.73 ± 1.08%, while postoperative day 7 images showed the lowest performance with a DSC of 93.14 ± 2.10%. A volumetric analysis conducted to measure hepatic resection and regeneration rates revealed an average liver resection rate of 40.52 ± 8.89% and a regeneration rate of 13.50 ± 8.95% by postoperative day 63. A regression analysis was performed on the volumetric results of the artificial intelligence model's liver resection rate and regeneration rate, and all results were statistically significant at p < 0.0001. 
The results indicate high reliability and clinical applicability of deep learning models in accurately measuring liver volume and assessing regenerative capacity, thus enhancing the management and recovery of liver donors.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143635069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
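The Dice similarity coefficient (DSC) reported throughout the segmentation results above is twice the overlap of the predicted and reference masks divided by their combined size. A minimal sketch for binary masks (illustrative only):

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom
```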
{"title":"Radiomics with Ultrasound Radiofrequency Data for Improving Evaluation of Duchenne Muscular Dystrophy.","authors":"Dong Yan, Qiang Li, Ya-Wen Chuang, Chia-Wei Lin, Jeng-Yi Shieh, Wen-Chin Weng, Po-Hsiang Tsui","doi":"10.1007/s10278-025-01450-5","DOIUrl":"https://doi.org/10.1007/s10278-025-01450-5","url":null,"abstract":"<p><p>Duchenne muscular dystrophy (DMD) is a rare and severe genetic neuromuscular disease, characterized by rapid progression and high mortality, highlighting the need for accurate ambulatory function assessment tools. Ultrasound imaging methods have been widely used for quantitative analysis. Radiomics, which converts medical images into data, combined with machine learning (ML), offers a promising solution. This study is aimed at utilizing radiomics to analyze different stages of data generated during B-mode image processing to evaluate the ambulatory function of DMD patients. The study included 85 participants, categorized into ambulatory and non-ambulatory groups based on their functional status. Ultrasound scans were utilized to capture backscattered radiofrequency data, which were then processed to generate envelope, normalized, and B-mode images. Radiomics analysis involved the manual segmentation of grayscale images and automatic feature extraction using specialized software, followed by feature selection using the maximal relevance and minimal redundancy method. The selected features were input into five ML algorithms, with model evaluation conducted via area under the receiver operating characteristic curve (AUROC). To ensure robustness, both leave-one-out cross-validation and repeated data splitting methods were employed. Additionally, multiple ML models were constructed and tested to assess their performance. The intensity values across all image types increased as walking ability declined, with significant differences observed between the ambulatory and non-ambulatory groups (p < 0.001). 
These groups exhibited similar diagnostic performance levels, with AUROC values below 0.8. However, radiofrequency (RF) images outperformed other types when radiomics was applied, notably achieving an AUROC value of 0.906. Additionally, combining multiple ML algorithms yielded a higher AUROC value of 0.912 using RF images as input. Radiomics analysis of RF data surpasses conventional B-mode imaging and other ultrasound-derived images in evaluating ambulatory function in DMD. Moreover, integrating multiple machine learning models further enhances classification performance. The proposed method in this study offers a promising framework for improving the accuracy and reliability of clinical follow-up evaluations, supporting more effective management of DMD. The code is available at https://github.com/Goldenyan/radiomicsUS .</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143635070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}