Journal of Imaging Informatics in Medicine: Latest Articles

Automatic Segmentation of Ultrasound-Guided Transverse Thoracic Plane Block Using Convolutional Neural Networks.
Journal of imaging informatics in medicine Pub Date : 2025-06-06 DOI: 10.1007/s10278-025-01565-9
Wancheng Liu, Xinwei Ma, Xiaolin Han, Jie Yu, Bowen Zhang, Linjie Liu, Yang Liu, Fengyu Chu, Yucheng Liu, Shijing Wei, Bin Li, Zhenchao Tang, Jingying Jiang, Qiang Wang
Ultrasound-guided transverse thoracic plane (TTP) block has been shown to be highly effective in relieving postoperative pain after a variety of surgeries involving the anterior chest wall. Accurate identification of the target structure on ultrasound images is key to successful TTP block. Nevertheless, the complexity of the anatomy in the targeted block area, coupled with the potential for adverse clinical events, presents considerable challenges, particularly for less experienced anesthesiologists. This study applied deep learning to TTP block and developed a model that performs real-time region segmentation in ultrasound to help clinicians accurately identify the target nerve. Using 2329 images from 155 patients, we successfully segmented the key structures associated with TTP areas and nerve blocks (the transversus thoracis muscle, lungs, and bones), achieving IoU (Intersection over Union) scores of 0.7272, 0.9736, and 0.8244, recall of 0.8305, 0.9896, and 0.9336, and Dice coefficients of 0.8421, 0.9866, and 0.9037, respectively, with accuracy surpassing 97% in identifying the perilous lung regions. The model segments ultrasound video in real time at up to 42.7 fps, meeting the demands of performing nerve blocks under real-time ultrasound guidance in clinical practice. This study introduces TTP-Unet, a deep learning model specifically designed for TTP block, capable of automatically identifying crucial anatomical structures within ultrasound images of TTP block, thereby offering a practicable way to reduce the clinical difficulty of the TTP block technique.
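The segmentation figures quoted above (IoU, Dice, recall) are standard overlap metrics. As a reminder of how they relate, here is a minimal NumPy sketch for a pair of binary masks; this is illustrative only, not the authors' evaluation code:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """IoU, Dice, and recall for binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.logical_and(pred, gt).sum()      # overlapping foreground pixels
    union = np.logical_or(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    iou = tp / union if union else 1.0
    dice = 2 * tp / denom if denom else 1.0
    recall = tp / gt.sum() if gt.sum() else 1.0
    return float(iou), float(dice), float(recall)
```

Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why the per-structure Dice scores above track the IoU scores.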
Citations: 0
Centargo Equipped with Smart Protocols Versus Stellant Injectors: A Comparison of the Impact on Contrast Media Usage, Sustainability, and Workflow Efficiency in CT Scan Suite.
Journal of imaging informatics in medicine Pub Date : 2025-06-04 DOI: 10.1007/s10278-025-01563-x
Yohan Anquetil, Thibaut Leturgez, Jerome Jacquin, Francois Kruta, François Jambon
This study evaluates the impact of multi-use practices in computed tomography (CT) and of MEDRAD® Centargo injectors equipped with Smart Protocols and other functionalities aimed at optimizing contrast media consumption, workflow efficiency, patient care time, and environmental sustainability at the CIMROD center. Data were collected across three measurement periods, assessing contrast volume per patient, distribution of activity across several tasks, injector preparation time, time for iodinated contrast media traceability documentation, and waste production. The Centargo injector reduced contrast volume from a target of 80 mL to an average of 72.98 mL using Smart Protocols, further optimized to 69.95 mL with kV optimization. Compared to the previous injector generation, non-added-value tasks such as injector preparation time decreased by 47.3% (measured with Udubu), or 72.35 s per injected procedure (measured by chronometer). Multi-use implementation reduced waste production by 69% and 74% for Centargo and Stellant multi-use injectors, respectively, versus single-use injectors. Despite the study's limitations, injectors such as Centargo equipped with Smart Protocols and related functionalities enhance operational and financial efficiency along with sustainability in radiology, and contribute to the evolution of clinical practice.
Citations: 0
Best Practices and Checklist for Reviewing Artificial Intelligence-Based Medical Imaging Papers: Classification.
Journal of imaging informatics in medicine Pub Date : 2025-06-04 DOI: 10.1007/s10278-025-01548-w
Timothy L Kline, Felipe Kitamura, Daniel Warren, Ian Pan, Amine M Korchi, Neil Tenenholtz, Linda Moy, Judy Wawira Gichoya, Igor Santos, Kamyar Moradi, Atlas Haddadi Avval, Dana Alkhulaifat, Steven L Blumer, Misha Ysabel Hwang, Kim-Ann Git, Abishek Shroff, Joseph Stember, Elad Walach, George Shih, Steve G Langer
Recent advances in Artificial Intelligence (AI) methodologies and their application to medical imaging have led to an explosion of research programs using AI to produce state-of-the-art classification performance. Ideally, research culminates in dissemination of the findings in peer-reviewed journals. To date, acceptance or rejection criteria are often subjective; reproducible science, however, requires reproducible review. The Machine Learning Education Sub-Committee of the Society for Imaging Informatics in Medicine (SIIM) has identified a knowledge gap and a need to establish guidelines for reviewing these studies. This work, written from the machine learning practitioner's standpoint, follows a similar approach to our previous paper on segmentation. In this series, the committee addresses best practices for AI-based studies and presents the required sections, with examples and discussion of what makes studies cohesive, reproducible, accurate, and self-contained. This entry in the series focuses on image classification. Elements such as dataset curation, data pre-processing, reference standard identification, data partitioning, model architecture, and training are discussed, with sections presented as in a typical manuscript. The content describes the information necessary to ensure a study is of sufficient quality for publication consideration and, compared with other checklists, provides a focused approach for image classification tasks. The goal of this series is to provide resources that not only improve the review process for AI-based medical imaging papers but also establish a standard for the information that should be presented within all components of a research study.
Citations: 0
PARADIM: A Platform to Support Research at the Interface of Data Science and Medical Imaging.
Journal of imaging informatics in medicine Pub Date : 2025-06-03 DOI: 10.1007/s10278-025-01554-y
Yannick Lemaréchal, Gabriel Couture, François Pelletier, Ronan Lefol, Pierre-Luc Asselin, Samuel Ouellet, Jérémie Bernard, Leyla Ebrahimpour, Venkata S K Manem, Johanna Topalis, Balthasar Schachtner, Sébastien Jodogne, Philippe Joubert, Katharina Jeblick, Michael Ingrisch, Philippe Després
This paper describes PARADIM, a digital infrastructure designed to support research at the interface of data science and medical imaging, with a focus on Research Data Management best practices. The platform is built from open-source components and rooted in the FAIR principles through strict compliance with the DICOM standard. It addresses key needs in data curation, governance, privacy, and scalable resource management. Supporting every stage of the data science discovery cycle, the platform offers robust functionality for user identity and access management, data de-identification, storage, annotation, and model training and evaluation. Rich metadata are generated throughout the research lifecycle to ensure the traceability and reproducibility of results. PARADIM hosts several medical image collections and allows the automation of large-scale, computationally intensive pipelines (e.g., automatic segmentation, dose calculations, AI model evaluation). The platform fills a gap at the interface of data science and medical imaging, where digital infrastructures are key to the development, evaluation, and deployment of innovative solutions in the real world.
Citations: 0
A Novel Deep Learning Framework for Nipple Segmentation in Digital Mammography.
Journal of imaging informatics in medicine Pub Date : 2025-06-03 DOI: 10.1007/s10278-025-01567-7
Marcos Rogozinski, Jan Hurtado, Cesar A Sierra-Franco, Carlos R Hall Barbosa, Alberto Raposo
This study introduces a novel methodology to enhance nipple segmentation in digital mammography, a critical component for accurate medical analysis and computer-aided detection systems. The nipple is a key anatomical landmark for multi-view and multi-modality breast image registration, where accurate localization is vital for ensuring image quality and enabling precise registration of anomalies across different mammographic views. The proposed approach significantly outperforms baseline methods, particularly in challenging cases where previous techniques failed: it achieved successful detection in all cases and reached a mean Intersection over Union (mIoU) of 0.63 in instances where the baseline failed entirely. It also yielded nearly a tenfold improvement in Hausdorff distance and consistent gains in overlap-based metrics, with mIoU increasing from 0.7408 to 0.8011 in the craniocaudal (CC) view and from 0.7488 to 0.7767 in the mediolateral oblique (MLO) view. Furthermore, its generalizability suggests potential application to other breast imaging modalities and related domains facing challenges such as class imbalance and high variability in object characteristics.
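The Hausdorff distance reported above measures the worst-case boundary disagreement between a predicted and a ground-truth contour. A short sketch using SciPy's directed Hausdorff routine (illustrative only; the paper's exact distance computation is not given in the abstract):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets,
    e.g. boundary pixels of a predicted vs. ground-truth mask.
    directed_hausdorff is one-sided, so we take the max of both
    directions to get the usual symmetric definition."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
```

Unlike overlap metrics such as mIoU, this distance is sensitive to a single far-off outlier point, which is why the paper reports both families of metrics.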
Citations: 0
Enhanced Vision Transformer with Custom Attention Mechanism for Automated Idiopathic Scoliosis Classification.
Journal of imaging informatics in medicine Pub Date : 2025-06-02 DOI: 10.1007/s10278-025-01564-w
Nevzat Yeşilmen, Çağla Danacı, Merve Parlak Baydoğan, Seda Arslan Tuncer, Ahmet Çınar, Taner Tuncer
Scoliosis, the most common spinal deformity, is a three-dimensional deformity of the spine that causes severe postural disorders in advanced stages. It can lead to various health problems, including pain, respiratory dysfunction, heart problems, mental health disorders, stress, and emotional difficulties. The current gold standard for grading scoliosis and planning treatment is the Cobb angle measured on X-rays, performed by physical medicine and rehabilitation specialists, orthopedists, radiologists, and other clinicians dealing with the musculoskeletal system. Manual calculation of the Cobb angle is subjective and time-consuming, and deep learning-based systems that can evaluate it objectively have recently come into frequent use. In this article, we propose an enhanced ViT that allows doctors to evaluate scoliosis more objectively and efficiently. The proposed model replaces the standard multi-head attention mechanism of the ViT with a custom attention mechanism. A dataset with 7 classes was obtained from a total of 1456 patients at the Elazığ Fethi Sekin City Hospital Physical Medicine and Rehabilitation Clinic. Multiple models were compared against the proposed architecture for scoliosis classification; the improved ViT exhibited the best performance with 95.21% accuracy, surpassing ResNet50, Swin Transformer, and standard ViT models.
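For orientation, the baseline operation that the paper's custom mechanism replaces is scaled dot-product attention, the core of every ViT block. A minimal NumPy sketch (the custom variant itself is not specified in the abstract):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard single-head attention over token embeddings.
    Returns the attended values and the attention weight matrix."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights
```

Multi-head attention runs several such maps in parallel on learned projections of Q, K, and V and concatenates the results; a "custom attention mechanism" typically alters the similarity function or the weighting scheme in this computation.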
Citations: 0
Robust Detection of Out-of-Distribution Shifts in Chest X-ray Imaging.
Journal of imaging informatics in medicine Pub Date : 2025-06-02 DOI: 10.1007/s10278-025-01559-7
Fatemeh Karimi, Farzan Farnia, Kyongtae Tyler Bae
This study addresses the critical challenge of detecting out-of-distribution (OOD) chest X-rays, where subtle view differences between lateral and frontal radiographs can lead to diagnostic errors. We develop a GAN-based framework that learns the inherent feature distribution of frontal views from the MIMIC-CXR dataset through latent-space optimization and Kolmogorov-Smirnov statistical testing. Our approach generates similarity scores to reliably identify OOD cases, achieving 100% precision and 97.5% accuracy in detecting lateral views. The method demonstrates consistent reliability across operating conditions, maintaining accuracy above 92.5% and precision exceeding 93% under varying detection thresholds. These results provide both theoretical insights and practical solutions for OOD detection in medical imaging, demonstrating how GANs can establish feature representations for identifying distributional shifts. By significantly improving model reliability when encountering view-based anomalies, our framework enhances the clinical applicability of deep learning systems, ultimately contributing to improved diagnostic safety and patient outcomes.
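The Kolmogorov-Smirnov stage of a pipeline like this compares the distribution of scores for incoming images against an in-distribution reference. A minimal sketch using SciPy's two-sample KS test; the GAN-based scoring itself is omitted, and the function name and threshold here are illustrative assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp

def flag_ood(reference_scores, test_scores, alpha=0.05):
    """Flag a batch as out-of-distribution when a two-sample
    Kolmogorov-Smirnov test rejects the hypothesis that its scores
    come from the same distribution as the in-distribution reference."""
    stat, p_value = ks_2samp(reference_scores, test_scores)
    return p_value < alpha, stat
```

The KS statistic is the maximum gap between the two empirical CDFs, so the test is non-parametric and needs no assumption about the score distribution's shape.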
Citations: 0
Artificial Intelligence Framework for Automated Facial Asymmetry Detection Using Key-Point Analysis and Neural Networks.
Journal of imaging informatics in medicine Pub Date : 2025-06-02 DOI: 10.1007/s10278-025-01558-8
Shahab Kavousinejad, Yasamin Vazirizadeh, Mohammad Behnaz, Asghar Ebadifar, Hoori Mirmohammadsadeghi
Accurate facial asymmetry assessment is essential in orthodontics, maxillofacial surgery, and plastic surgery. While minor asymmetry is common, severe cases often result from congenital conditions or trauma. Traditional methods struggle to comprehensively quantify the extent and direction of asymmetry. This study developed and compared artificial neural networks (ANN) and Siamese neural networks (SNN) to detect facial asymmetry and determine the direction of deviation (horizontal/vertical). A dataset of 1200 frontal photographs, annotated by three orthodontists, was used. The MediaPipe model facilitated facial landmark detection and midline alignment. Two approaches were employed: (1) extracting features from facial landmarks and using them to train an ANN, and (2) SNN-based comparison of mirrored facial halves. Exploratory data analysis (EDA) was used to quantify facial asymmetry in both vertical and horizontal dimensions. ANN and SNN performance was evaluated using accuracy, recall, and F1-score. The SNN outperformed the ANN, achieving 97% accuracy and strong agreement with expert evaluations (Cohen's kappa: 0.84 for asymmetry detection, 0.73 for horizontal deviation, and 0.80 for vertical asymmetry). The symmetry group showed 96.14% mean similarity versus 83.97% for the asymmetry group, and the SNN's ROC curve yielded an AUC of 0.98, indicating high diagnostic performance. This study demonstrates the potential of AI-driven methods, particularly SNNs, for reliable and objective facial asymmetry assessment in clinical settings. Future research should focus on expanding datasets and refining midline alignment to improve accuracy, especially in cases with vertical eye asymmetry.
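Cohen's kappa, the agreement statistic quoted above, corrects raw agreement between two raters for the agreement expected by chance. A plain-Python sketch of the computation (not the study's code):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each rater's marginal label frequencies.
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

Values of 0.73-0.84, as reported here, are conventionally read as substantial to near-perfect agreement.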
Citations: 0
Performance Comparison of Machine Learning Using Radiomic Features and CNN-Based Deep Learning in Benign and Malignant Classification of Vertebral Compression Fractures Using CT Scans.
Journal of imaging informatics in medicine Pub Date : 2025-06-02 DOI: 10.1007/s10278-025-01553-z
Jong Chan Yeom, So Hyun Park, Young Jae Kim, Tae Ran Ahn, Kwang Gi Kim
Distinguishing benign from malignant vertebral compression fractures (VCFs) is critical for clinical management but remains challenging on contrast-enhanced abdominal CT, which lacks the soft-tissue contrast of MRI. This study evaluates and compares radiomic-feature-based machine learning and convolutional neural network (CNN)-based deep learning models for classifying VCFs on abdominal CT. A retrospective cohort of 447 VCFs (196 benign, 251 malignant) from 286 patients was analyzed. Radiomic features were extracted using PyRadiomics, with Recursive Feature Elimination selecting six key texture-based features (e.g., Run Variance, Dependence Non-Uniformity Normalized), highlighting textural heterogeneity as a malignancy marker. Machine learning models (XGBoost, SVM, KNN, Random Forest) and a 3D CNN were trained on CT data, with performance assessed via precision, recall, F1-score, accuracy, and AUC. The deep learning model achieved marginally superior overall performance, with a statistically significantly higher AUC (77.66% vs. 75.91%, p < 0.05) and better precision, F1-score, and accuracy than the top-performing machine learning model (XGBoost). The deep learning model's attention maps localized diagnostically relevant regions, mimicking radiologists' focus, whereas radiomics lacked spatial interpretability despite offering quantifiable biomarkers. This study underscores the complementary strengths of the two approaches: radiomics provides interpretable features tied to tumor heterogeneity, while deep learning autonomously extracts high-dimensional patterns with spatial explainability. Integrating both approaches could enhance diagnostic accuracy and clinician trust in abdominal CT-based VCF assessment. Limitations include retrospective single-center data and potential selection bias; future multi-center studies with diverse protocols and histopathological validation are warranted to generalize these findings.
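Recursive Feature Elimination, used above to reduce the radiomic feature set to six texture features, repeatedly fits an estimator and drops the weakest features until the target count remains. A scikit-learn sketch on synthetic data; the feature matrix, labels, and choice of logistic regression as the ranking estimator are all stand-ins, not the study's setup:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a radiomic feature matrix (120 lesions x 20 features).
rng = np.random.default_rng(42)
X = rng.normal(size=(120, 20))
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # labels driven by two features

# Keep six features, mirroring the study's selection count.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=6)
selector.fit(X, y)
kept = np.flatnonzero(selector.support_)  # indices of retained features
```

`selector.ranking_` assigns rank 1 to every kept feature, so the selection can be audited feature by feature, which is part of what makes radiomics pipelines interpretable.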
Citations: 0
A Robust Deep Learning Method with Uncertainty Estimation for the Pathological Classification of Renal Cell Carcinoma Based on CT Images.
Journal of imaging informatics in medicine Pub Date : 2025-06-01 Epub Date: 2024-09-23 DOI: 10.1007/s10278-024-01276-7
Ni Yao, Hang Hu, Kaicong Chen, Huan Huang, Chen Zhao, Yuan Guo, Boya Li, Jiaofen Nan, Yanting Li, Chuang Han, Fubao Zhu, Weihua Zhou, Li Tian
This study developed and validated a deep learning-based diagnostic model with uncertainty estimation to aid radiologists in the preoperative differentiation of pathological subtypes of renal cell carcinoma (RCC) on computed tomography (CT) images. Data from 668 consecutive patients with pathologically confirmed RCC were retrospectively collected from Center 1, and the model was trained with fivefold cross-validation to classify RCC subtypes into clear cell RCC (ccRCC), papillary RCC (pRCC), and chromophobe RCC (chRCC). An external validation with 78 patients from Center 2 was conducted to evaluate performance. In the fivefold cross-validation, the area under the receiver operating characteristic curve (AUC) was 0.868 (95% CI, 0.826-0.923) for ccRCC, 0.846 (95% CI, 0.812-0.886) for pRCC, and 0.839 (95% CI, 0.802-0.880) for chRCC. In the external validation set, the AUCs were 0.856 (95% CI, 0.838-0.882), 0.787 (95% CI, 0.757-0.818), and 0.793 (95% CI, 0.758-0.831), respectively. The model demonstrated robust performance in predicting the pathological subtypes of RCC, while the incorporated uncertainty estimation emphasized the importance of understanding model confidence. The proposed approach offers clinicians a dual advantage: accurate RCC subtype predictions complemented by diagnostic confidence metrics, thereby promoting informed decision-making for patients with RCC. (Pages 1323-1333; open access.)
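One common way to turn a classifier's softmax output into a per-case uncertainty score, of the kind this model pairs with its subtype predictions, is predictive entropy. A minimal sketch; the paper's actual uncertainty estimator is not specified in the abstract, so this is one illustrative choice only:

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of a class-probability vector: 0 for a confident
    one-hot prediction, log(n_classes) for a uniform (maximally
    uncertain) one."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())
```

In a triage workflow, cases whose entropy exceeds a chosen threshold would be routed to a radiologist rather than auto-labeled.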
Citations: 0