Computerized Medical Imaging and Graphics: Latest Articles

Meta-learning guidance for robust medical image synthesis: Addressing the real-world misalignment and corruptions
IF 5.4, Q2 (Medicine)
Computerized Medical Imaging and Graphics, Pub Date: 2025-02-01, DOI: 10.1016/j.compmedimag.2025.102506
Jaehun Lee, Daniel Kim, Taehun Kim, Mohammed A. Al-masni, Yoseob Han, Dong-Hyun Kim, Kanghyun Ryu
{"title":"Meta-learning guidance for robust medical image synthesis: Addressing the real-world misalignment and corruptions","authors":"Jaehun Lee ,&nbsp;Daniel Kim ,&nbsp;Taehun Kim ,&nbsp;Mohammed A. Al-masni ,&nbsp;Yoseob Han ,&nbsp;Dong-Hyun Kim ,&nbsp;Kanghyun Ryu","doi":"10.1016/j.compmedimag.2025.102506","DOIUrl":"10.1016/j.compmedimag.2025.102506","url":null,"abstract":"<div><div>Deep learning-based image synthesis for medical imaging is currently an active research topic with various clinically relevant applications. Recently, methods allowing training with misaligned data have started to emerge, yet current solution lack robustness and cannot handle other corruptions in the dataset. In this work, we propose a solution to this problem for training synthesis network for datasets affected by mis-registration, artifacts, and deformations. Our proposed method consists of three key innovations: meta-learning inspired re-weighting scheme to directly decrease the influence of corrupted instances in a mini-batch by assigning lower weights in the loss function, non-local feature-based loss function, and joint training of image synthesis network together with spatial transformer (STN)-based registration networks with specially designed regularization. Efficacy of our method is validated in a controlled synthetic scenario, as well as public dataset with such corruptions. This work introduces a new framework that may be applicable to challenging scenarios and other more difficult datasets.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"121 ","pages":"Article 102506"},"PeriodicalIF":5.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143179160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
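The re-weighting idea described in the abstract above, down-weighting corrupted instances within a mini-batch through the loss, can be illustrated with a minimal PyTorch sketch. The weighting rule below is a simple heuristic stand-in: the paper derives its weights from a meta-learning objective, and the function name and temperature here are illustrative assumptions.

import torch

def reweighted_l1_loss(pred, target, temperature=1.0):
    # Per-sample L1 residual, averaged over all non-batch dimensions.
    per_sample = (pred - target).abs().flatten(1).mean(dim=1)
    # Heuristic: instances with large residuals (likely corrupted) get lower weight.
    # The paper instead learns these weights via meta-learning guidance.
    weights = torch.softmax(-per_sample.detach() / temperature, dim=0)
    return (weights * per_sample).sum()

# Example usage with a random mini-batch of 8 image pairs.
pred = torch.randn(8, 1, 64, 64)
target = torch.randn(8, 1, 64, 64)
loss = reweighted_l1_loss(pred, target)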
Efficient diagnosis of retinal disorders using dual-branch semi-supervised learning (DB-SSL): An enhanced multi-class classification approach
IF 5.4, Q2 (Medicine)
Computerized Medical Imaging and Graphics, Pub Date: 2025-02-01, DOI: 10.1016/j.compmedimag.2025.102494
Muhammad Hammad Malik, Zishuo Wan, Yu Gao, Da-Wei Ding
{"title":"Efficient diagnosis of retinal disorders using dual-branch semi-supervised learning (DB-SSL): An enhanced multi-class classification approach","authors":"Muhammad Hammad Malik ,&nbsp;Zishuo Wan ,&nbsp;Yu Gao ,&nbsp;Da-Wei Ding","doi":"10.1016/j.compmedimag.2025.102494","DOIUrl":"10.1016/j.compmedimag.2025.102494","url":null,"abstract":"<div><div>The early diagnosis of retinal disorders is essential in preventing permanent or partial blindness. Identifying these conditions promptly guarantees early treatment and prevents blindness. However, the challenge lies in accurately diagnosing these conditions, especially with limited labeled data. This study aims to enhance the diagnostic accuracy of retinal disorders using a novel Dual-Branch Semi-Supervised Learning (DB-SSL) approach that leverages both labeled and unlabeled data for multi-class classification of eye diseases. Employing Color Fundus Photography (CFP), our research integrates a Convolutional Neural Network (CNN) that integrates features from two parallel branches. This framework effectively handles the complexity of ocular imaging by utilizing self-training-based semi-supervised learning to explore relationships within unlabeled data. We propose and evaluate six CNN models: ResNet50, DenseNet121, MobileNetV2, EfficientNetB0, SqueezeNet1_0, and a hybrid of ResNet50 and MobileNetV2 on their ability to classify four key eye conditions: cataract, diabetic retinopathy, glaucoma, and normal, using a large, diverse OIH dataset containing 4217 fundus images. Among the evaluated models, ResNet50 emerged as the most accurate, achieving 93.14 % accuracy on unseen data. The model demonstrates robustness with a sensitivity of 93 % and specificity of 98.37 %, along with a precision and F1 Score of 93 % each, and a Cohen’s Kappa of 90.85 %. Additionally, it exhibits an AUC score of 97.75 % nearing perfection. Systematically removing certain components from the ResNet50 model further validates its efficacy. Our findings underscore the potential of advanced CNN architectures combined with semi-supervised learning in enhancing the accuracy of eye disease classification systems, particularly in resource-constrained environments where the procurement of large labeled datasets is challenging and expensive. This approach is well-suited for integration into Clinical Decision Support Systems (CDSS), providing valuable diagnostic assistance in real-world clinical settings.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"121 ","pages":"Article 102494"},"PeriodicalIF":5.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143299555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
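The self-training component mentioned in the abstract above can be sketched as confidence-thresholded pseudo-labeling: predictions on unlabeled images that pass a confidence threshold are reused as training targets. This is a generic PyTorch sketch, not the exact DB-SSL procedure; the threshold value and function name are assumptions.

import torch
import torch.nn.functional as F

def pseudo_label_loss(model, unlabeled_images, threshold=0.95):
    # Generate pseudo-labels from the model's own confident predictions.
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_images), dim=1)
        confidence, pseudo_labels = probs.max(dim=1)
        keep = confidence >= threshold
    if keep.sum() == 0:
        # No confident predictions in this batch: contribute nothing to the loss.
        return torch.zeros((), device=unlabeled_images.device)
    # Train on the retained images as if the pseudo-labels were ground truth.
    logits = model(unlabeled_images[keep])
    return F.cross_entropy(logits, pseudo_labels[keep])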
Interpretable multi-stage attention network to predict cancer subtype, microsatellite instability, TP53 mutation and TMB of endometrial and colorectal cancer
IF 5.4, Q2 (Medicine)
Computerized Medical Imaging and Graphics, Pub Date: 2025-01-30, DOI: 10.1016/j.compmedimag.2025.102499
Ching-Wei Wang, Hikam Muzakky, Yu-Ching Lee, Yu-Pang Chung, Yu-Chi Wang, Mu-Hsien Yu, Chia-Hua Wu, Tai-Kuang Chao
{"title":"Interpretable multi-stage attention network to predict cancer subtype, microsatellite instability, TP53 mutation and TMB of endometrial and colorectal cancer","authors":"Ching-Wei Wang ,&nbsp;Hikam Muzakky ,&nbsp;Yu-Ching Lee ,&nbsp;Yu-Pang Chung ,&nbsp;Yu-Chi Wang ,&nbsp;Mu-Hsien Yu ,&nbsp;Chia-Hua Wu ,&nbsp;Tai-Kuang Chao","doi":"10.1016/j.compmedimag.2025.102499","DOIUrl":"10.1016/j.compmedimag.2025.102499","url":null,"abstract":"<div><div>Mismatch repair deficiency (dMMR), also known as high-grade microsatellite instability (MSI-H), is a well-established biomarker for predicting the immunotherapy response in endometrial cancer (EC) and colorectal cancer (CRC). Tumor mutational burden (TMB) has also emerged as an important quantitative genomic biomarker for assessing the efficacy of immune checkpoint inhibitors. Although next-generation sequencing (NGS) can be used to assess MSI and TMB, the high costs, low sample throughput, and significant DNA requirements make NGS impractical for routine clinical screening. In this study, an interpretable, multi-stage attention deep learning (DL) network is introduced to predict pathological subtypes, MSI, TP53 mutations, and TMB directly from low-cost, routinely used histopathological whole slide images of EC and CRC slides. Experimental results showed that this method consistently outperformed seven state-of-the-art approaches in cancer subtyping and molecular status prediction across EC and CRC datasets. Fisher’s Least Significant Difference test confirmed a strong correlation between model predictions and actual molecular statuses (MSI, TP53, and TMB) (<span><math><mrow><mi>p</mi><mo>&lt;</mo><mn>0</mn><mo>.</mo><mn>001</mn></mrow></math></span>). Furthermore, Kaplan–Meier disease-free survival analysis revealed that CRC patients with model-predicted high TMB had significantly longer disease-free survival than those with low TMB (<span><math><mrow><mi>p</mi><mo>&lt;</mo><mn>0</mn><mo>.</mo><mn>05</mn></mrow></math></span>). These findings demonstrate that the proposed DL-based approach holds significant potential for directly predicting immunotherapy-related pathological diagnoses and molecular statuses from routine WSIs, supporting personalized cancer immunotherapy treatment decisions in EC and CRC.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"121 ","pages":"Article 102499"},"PeriodicalIF":5.4,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143386336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
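For the disease-free survival comparison mentioned in the abstract above, the Kaplan–Meier curve is the product-limit estimate computed from follow-up times and event indicators. A minimal NumPy sketch, without confidence intervals or the log-rank test that would yield the p-value:

import numpy as np

def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.
    times: follow-up time per patient; events: 1 if the event occurred, 0 if censored."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=bool)
    curve, survival = [], 1.0
    for t in np.unique(times[events]):           # distinct event times, ascending
        at_risk = np.sum(times >= t)             # patients still under observation at t
        failures = np.sum((times == t) & events) # events occurring exactly at t
        survival *= 1.0 - failures / at_risk
        curve.append((t, survival))
    return curve

# One curve per predicted-TMB group would then be compared with a log-rank test.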
Adjacent point aided vertebral landmark detection and Cobb angle measurement for automated AIS diagnosis
IF 5.4, Q2 (Medicine)
Computerized Medical Imaging and Graphics, Pub Date: 2025-01-30, DOI: 10.1016/j.compmedimag.2025.102496
Xiaopeng Du, Hongyu Wang, Lihang Jiang, Changlin Lv, Yongming Xi, Huan Yang
{"title":"Adjacent point aided vertebral landmark detection and Cobb angle measurement for automated AIS diagnosis","authors":"Xiaopeng Du ,&nbsp;Hongyu Wang ,&nbsp;Lihang Jiang ,&nbsp;Changlin Lv ,&nbsp;Yongming Xi ,&nbsp;Huan Yang","doi":"10.1016/j.compmedimag.2025.102496","DOIUrl":"10.1016/j.compmedimag.2025.102496","url":null,"abstract":"<div><div>Adolescent Idiopathic Scoliosis (AIS) is a prevalent structural deformity disease of human spine, and accurate assessment of spinal anatomical parameters is essential for clinical diagnosis and treatment planning. In recent years, significant progress has been made in automatic AIS diagnosis based on deep learning methods. However, effectively utilizing spinal structure information to improve the parameter measurement and diagnosis accuracy from spinal X-ray images remains challenging. This paper proposes a novel spine keypoint detection framework to complete the intelligent diagnosis of AIS, with the assistance of spine rigid structure information. Specifically, a deep learning architecture called Landmark and Adjacent offset Detection (LAD-Net) is designed to predict spine centre and corner points as well as their related offset vectors, based on which error-detected landmarks can be effectively corrected via the proposed Adjacent Centre Iterative Correction (ACIC) and Corner Feature Optimization and Fusion (CFOF) modules. Based on the detected spine landmarks, spine key parameters (<em>i.e</em>. Cobb angles) can be computed to finish the AIS Lenke diagnosis. Experimental results demonstrate the superiority of the proposed framework on spine landmark detection and Lenke classification, providing strong support for AIS diagnosis and treatment.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"121 ","pages":"Article 102496"},"PeriodicalIF":5.4,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143179161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
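Once vertebral corner landmarks are detected, a common way to obtain the Cobb angle is to take the largest pairwise difference between vertebral endplate inclinations. The NumPy sketch below illustrates that geometric step only; it is not the paper's LAD-Net pipeline, and the function names and example values are hypothetical.

import numpy as np

def endplate_slope_deg(left_xy, right_xy):
    # Inclination (degrees) of an endplate line through its two corner landmarks.
    dx = right_xy[0] - left_xy[0]
    dy = right_xy[1] - left_xy[1]
    return np.degrees(np.arctan2(dy, dx))

def cobb_angle_deg(slopes_deg):
    # Cobb angle as the largest pairwise difference in endplate inclination;
    # the returned indices identify the two most tilted (end) vertebrae.
    s = np.asarray(slopes_deg, dtype=float)
    diff = np.abs(s[:, None] - s[None, :])
    i, j = np.unravel_index(np.argmax(diff), diff.shape)
    return diff[i, j], (int(i), int(j))

# Example with slopes (degrees) estimated from detected landmarks of six vertebrae.
angle, (upper, lower) = cobb_angle_deg([3.0, 8.5, 15.2, 6.1, -12.4, -18.0])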
Multimodal Cross Global Learnable Attention Network for MR images denoising with arbitrary modal missing
IF 5.4, Q2 (Medicine)
Computerized Medical Imaging and Graphics, Pub Date: 2025-01-30, DOI: 10.1016/j.compmedimag.2025.102497
Mingfu Jiang, Shuai Wang, Ka-Hou Chan, Yue Sun, Yi Xu, Zhuoneng Zhang, Qinquan Gao, Zhifan Gao, Tong Tong, Hing-Chiu Chang, Tao Tan
{"title":"Multimodal Cross Global Learnable Attention Network for MR images denoising with arbitrary modal missing","authors":"Mingfu Jiang ,&nbsp;Shuai Wang ,&nbsp;Ka-Hou Chan ,&nbsp;Yue Sun ,&nbsp;Yi Xu ,&nbsp;Zhuoneng Zhang ,&nbsp;Qinquan Gao ,&nbsp;Zhifan Gao ,&nbsp;Tong Tong ,&nbsp;Hing-Chiu Chang ,&nbsp;Tao Tan","doi":"10.1016/j.compmedimag.2025.102497","DOIUrl":"10.1016/j.compmedimag.2025.102497","url":null,"abstract":"<div><div>Magnetic Resonance Imaging (MRI) generates medical images of multiple sequences, i.e., multimodal, from different contrasts. However, noise will reduce the quality of MR images, and then affect the doctor’s diagnosis of diseases. Existing filtering methods, transform-domain methods, statistical methods and Convolutional Neural Network (CNN) methods main aim to denoise individual sequences of images without considering the relationships between multiple different sequences. They cannot balance the extraction of high-dimensional and low-dimensional features in MR images, and hard to maintain a good balance between preserving image texture details and denoising strength. To overcome these challenges, this work proposes a controllable Multimodal Cross-Global Learnable Attention Network (MMCGLANet) for MR image denoising with Arbitrary Modal Missing. Specifically, Encoder is employed to extract the shallow features of the image which share weight module, and Convolutional Long Short-Term Memory(ConvLSTM) is employed to extract the associated features between different frames within the same modal. Cross Global Learnable Attention Network(CGLANet) is employed to extract and fuse image features between multimodal and within the same modality. In addition, sequence code is employed to label missing modalities, which allows for Arbitrary Modal Missing during model training, validation, and testing. Experimental results demonstrate that our method has achieved good denoising results on different public and real MR image dataset.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"121 ","pages":"Article 102497"},"PeriodicalIF":5.4,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143179162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
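One simple way to realize a "sequence code" for arbitrary modal missing is to pair the stacked input with a binary presence vector and zero out the absent channels, so the network always sees a fixed-size input plus an explicit indicator of what is missing. This is an assumed, minimal PyTorch illustration of that idea, not the MMCGLANet implementation.

import torch

def encode_missing_modalities(volumes, present):
    """volumes: (B, M, H, W) tensor with one channel per MR sequence;
    present: (B, M) boolean mask, True where the sequence was acquired."""
    mask = present.float()[:, :, None, None]
    masked_input = volumes * mask          # absent sequences are zeroed out
    sequence_code = present.float()        # (B, M) code the network can condition on
    return masked_input, sequence_code

# Example: batch of 2 patients, 4 sequences; the second patient is missing two of them.
volumes = torch.randn(2, 4, 128, 128)
present = torch.tensor([[True, True, True, True],
                        [True, False, True, False]])
masked_input, sequence_code = encode_missing_modalities(volumes, present)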
Entity-level multiple instance learning for mesoscopic histopathology images classification with Bayesian collaborative learning and pathological prior transfer
IF 5.4, Q2 (Medicine)
Computerized Medical Imaging and Graphics, Pub Date: 2025-01-27, DOI: 10.1016/j.compmedimag.2025.102495
Qiming He, Yingming Xu, Qiang Huang, Jing Li, Yonghong He, Zhe Wang, Tian Guan
{"title":"Entity-level multiple instance learning for mesoscopic histopathology images classification with Bayesian collaborative learning and pathological prior transfer","authors":"Qiming He ,&nbsp;Yingming Xu ,&nbsp;Qiang Huang ,&nbsp;Jing Li ,&nbsp;Yonghong He ,&nbsp;Zhe Wang ,&nbsp;Tian Guan","doi":"10.1016/j.compmedimag.2025.102495","DOIUrl":"10.1016/j.compmedimag.2025.102495","url":null,"abstract":"<div><h3>Background:</h3><div>Entity-level pathologic structures with independent structures and functions are at a mesoscopic scale between the cell-level and slide-level, containing limited structures thus providing fewer instances for multiple instance learning. This restricts the perception of local pathologic features and their relationships, causing semantic ambiguity and inefficiency of entity embedding.</div></div><div><h3>Method:</h3><div>This study proposes a novel entity-level multiple instance learning. To realize entity-level augmentation, entity component mixup enhances the capture of relationships of contextually localized pathology features. To strengthen the semantic synergy of global and local pathological features, Bayesian collaborative learning is proposed to construct co-optimization of instance and bag embedding. Additionally, pathological prior transfer implement the initial optimization of the global attention pooling thereby fundamentally improving entity embedding.</div></div><div><h3>Results:</h3><div>This study constructed a glomerular image dataset containing up to 23 types of lesion patterns. Intensive experiments demonstrate that the proposed framework achieves the best on 19 out of 23 types, with AUC exceeding 90<span><math><mtext>%</mtext></math></span> and 95<span><math><mtext>%</mtext></math></span> on 20 and 11 types, respectively. Moreover, the proposed model achieves up to 18.9<span><math><mtext>%</mtext></math></span> and 14.7<span><math><mtext>%</mtext></math></span> improvements compared to the thumbnail-level and slide-level methods. Ablation study and visualization further reveals this method synergistically strengthens the feature representations under the condition of fewer instances.</div></div><div><h3>Conclusion:</h3><div>The proposed entity-level multiple instance learning enables accurate recognition of 23 types of lesion patterns, providing an effective tool for mesoscopic histopathology images classification. This proves it is capable of capturing salient pathologic features and contextual relationships from the fewer instances, which can be extended to classify other pathologic entities.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"121 ","pages":"Article 102495"},"PeriodicalIF":5.4,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143234795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
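The global attention pooling referred to in the abstract above is, in its generic form, a learned weighted average of instance embeddings. A minimal PyTorch sketch of that pooling step alone, without the Bayesian collaborative learning or prior-transfer components; dimensions are placeholder choices.

import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Attention-based MIL pooling: a learned weighted average over instance embeddings."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, instances):                   # instances: (num_instances, dim)
        attention = torch.softmax(self.score(instances), dim=0)   # weights sum to 1
        bag_embedding = (attention * instances).sum(dim=0)        # (dim,)
        return bag_embedding, attention.squeeze(1)

# A bag of 12 instance embeddings pooled into one entity-level embedding.
pooling = AttentionMILPooling(dim=256)
bag, weights = pooling(torch.randn(12, 256))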
Feature-targeted deep learning framework for pulmonary tumorous Cone-beam CT (CBCT) enhancement with multi-task customized perceptual loss and feature-guided CycleGAN
IF 5.4, Q2 (Medicine)
Computerized Medical Imaging and Graphics, Pub Date: 2025-01-26, DOI: 10.1016/j.compmedimag.2024.102487
Jiarui Zhu, Hongfei Sun, Weixing Chen, Shaohua Zhi, Chenyang Liu, Mayang Zhao, Yuanpeng Zhang, Ta Zhou, Yu Lap Lam, Tao Peng, Jing Qin, Lina Zhao, Jing Cai, Ge Ren
{"title":"Feature-targeted deep learning framework for pulmonary tumorous Cone-beam CT (CBCT) enhancement with multi-task customized perceptual loss and feature-guided CycleGAN","authors":"Jiarui Zhu ,&nbsp;Hongfei Sun ,&nbsp;Weixing Chen ,&nbsp;Shaohua Zhi ,&nbsp;Chenyang Liu ,&nbsp;Mayang Zhao ,&nbsp;Yuanpeng Zhang ,&nbsp;Ta Zhou ,&nbsp;Yu Lap Lam ,&nbsp;Tao Peng ,&nbsp;Jing Qin ,&nbsp;Lina Zhao ,&nbsp;Jing Cai ,&nbsp;Ge Ren","doi":"10.1016/j.compmedimag.2024.102487","DOIUrl":"10.1016/j.compmedimag.2024.102487","url":null,"abstract":"<div><div>Thoracic Cone-beam computed tomography (CBCT) is routinely collected during image-guided radiation therapy (IGRT) to provide updated patient anatomy information for lung cancer treatments. However, CBCT images often suffer from streaking artifacts and noise caused by under-rate sampling projections and low-dose exposure, resulting in loss of lung anatomy which contains crucial pulmonary tumorous and functional information. While recent deep learning-based CBCT enhancement methods have shown promising results in suppressing artifacts, they have limited performance on preserving anatomical details containing crucial tumorous information due to lack of targeted guidance. To address this issue, we propose a novel feature-targeted deep learning framework which generates ultra-quality pulmonary imaging from CBCT of lung cancer patients via a multi-task customized feature-to-feature perceptual loss function and a feature-guided CycleGAN. The framework comprises two main components: a multi-task learning feature-selection network (MTFS-Net) for building up a customized feature-to-feature perceptual loss function (CFP-loss); and a feature-guided CycleGan network. Our experiments showed that the proposed framework can generate synthesized CT (sCT) images for the lung that achieved a high similarity to CT images, with an average SSIM index of 0.9747 and an average PSNR index of 38.5995 globally, and an average Pearman’s coefficient of 0.8929 within the tumor region on multi-institutional datasets. The sCT images also achieved visually pleasing performance with effective artifacts suppression, noise reduction, and distinctive anatomical details preservation. Functional imaging tests further demonstrated the pulmonary texture correction performance of the sCT images, and the similarity of the functional imaging generated from sCT and CT images has reached an average DSC value of 0.9147, SCC value of 0.9615 and R value of 0.9661. Comparison experiments with pixel-to-pixel loss also showed that the proposed perceptual loss significantly enhances the performance of involved generative models. Our experiment results indicate that the proposed framework outperforms the state-of-the-art models for pulmonary CBCT enhancement. 
This framework holds great promise for generating high-quality pulmonary imaging from CBCT that is suitable for supporting further analysis of lung cancer treatment.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"121 ","pages":"Article 102487"},"PeriodicalIF":5.4,"publicationDate":"2025-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143076326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
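A feature-to-feature perceptual loss, as named in the abstract above, compares generated and reference images in the feature space of a frozen network rather than in pixel space. The PyTorch sketch below shows that general pattern with a single-output extractor; the paper's CFP-loss is built on its multi-task feature-selection network (MTFS-Net), so the extractor, weighting, and function name here are placeholders.

import torch
import torch.nn.functional as F

def feature_perceptual_loss(feature_extractor, generated, reference, pixel_weight=1.0):
    # Distance in the feature space of a frozen extractor, plus an optional pixel term.
    with torch.no_grad():
        reference_features = feature_extractor(reference)
    generated_features = feature_extractor(generated)
    perceptual = F.l1_loss(generated_features, reference_features)
    pixel = F.l1_loss(generated, reference)
    return perceptual + pixel_weight * pixel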
Contrastive learning in brain imaging
IF 5.4, Q2 (Medicine)
Computerized Medical Imaging and Graphics, Pub Date: 2025-01-26, DOI: 10.1016/j.compmedimag.2025.102500
Xiaoyin Xu, Stephen T.C. Wong
{"title":"Contrastive learning in brain imaging","authors":"Xiaoyin Xu ,&nbsp;Stephen T.C. Wong","doi":"10.1016/j.compmedimag.2025.102500","DOIUrl":"10.1016/j.compmedimag.2025.102500","url":null,"abstract":"<div><div>Contrastive learning is a type of deep learning technique trying to classify data or examples without requiring data labeling. Instead, it learns about the most representative features that contrast positive and negative pairs of examples. In literature of contrastive learning, terms of positive examples and negative examples do not mean whether the examples themselves are positive or negative of certain characteristics as one might encounter in medicine. Rather, positive examples just mean that the examples are of the same class, while negative examples mean that the examples are of different classes. Contrastive learning maps data to a latent space and works under the assumption that examples of the same class should be located close to each other in the latent space; and examples from different classes would locate far from each other. In other words, contrastive learning can be considered as a discriminator that tries to group examples of the same class together while separating examples of different classes from each other, preferably as far as possible. Since its inception, contrastive learning has been constantly evolving and can be realized as self-supervised, semi-supervised, or unsupervised learning. Contrastive learning has found wide applications in medical imaging and it is expected it will play an increasingly important role in medical image processing and analysis.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"121 ","pages":"Article 102500"},"PeriodicalIF":5.4,"publicationDate":"2025-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143076316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
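One common instantiation of the contrastive objective described above is the InfoNCE loss: each embedding is pulled toward its positive partner and pushed away from all other embeddings in the batch. A minimal PyTorch sketch, where the temperature and the batch-wise pairing are illustrative choices rather than anything prescribed by the article:

import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """z1[i] and z2[i] are embeddings of a positive pair (e.g. two views of the same image);
    every other pairing in the batch serves as a negative."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                # cosine similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)           # diagonal entries are the positives

# Example: 16 positive pairs of 128-dimensional embeddings.
loss = info_nce_loss(torch.randn(16, 128), torch.randn(16, 128))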
Opportunistic AI for enhanced cardiovascular disease risk stratification using abdominal CT scans
IF 5.4, Q2 (Medicine)
Computerized Medical Imaging and Graphics, Pub Date: 2025-01-20, DOI: 10.1016/j.compmedimag.2025.102493
Azka Rehman, Jaewon Kim, Lee Hyeokjong, Jooyoung Chang, Sang Min Park
{"title":"Opportunistic AI for enhanced cardiovascular disease risk stratification using abdominal CT scans","authors":"Azka Rehman ,&nbsp;Jaewon Kim ,&nbsp;Lee Hyeokjong ,&nbsp;Jooyoung Chang ,&nbsp;Sang Min Park","doi":"10.1016/j.compmedimag.2025.102493","DOIUrl":"10.1016/j.compmedimag.2025.102493","url":null,"abstract":"<div><div>This study introduces the Deep Learning-based Cardiovascular Disease Incident (DL-CVDi) score, a novel biomarker derived from routine abdominal CT scans, optimized to predict cardiovascular disease (CVD) risk using deep survival learning. CT imaging, frequently used for diagnosing various conditions, contains opportunistic biomarkers that can be leveraged beyond their initial diagnostic purpose. Using a Cox proportional hazards-based survival loss, the DL-CVDi score captures complex, non-linear relationships between anatomical features and CVD risk. Clinical validation demonstrated that participants with high DL-CVDi scores had a significantly elevated risk of CVD incidents (hazard ratio [HR]: 2.75, 95% CI: 1.27–5.95, p-trend <span><math><mo>&lt;</mo></math></span>0.005) after adjusting for traditional risk factors. Additionally, the DL-CVDi score improved the concordance of baseline models, such as age and sex (from 0.662 to 0.700) and the Framingham Risk Score (from 0.697 to 0.742). Given its reliance on widely available abdominal CT data, the DL-CVDi score has substantial potential as an opportunistic screening tool for CVD risk in diverse clinical settings. Future research should validate these findings across multi-ethnic cohorts and explore its utility in patients with comorbid conditions, establishing the DL-CVDi score as a valuable addition to current CVD risk assessment strategies.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"120 ","pages":"Article 102493"},"PeriodicalIF":5.4,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143043246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
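A Cox proportional hazards-based survival loss, as mentioned in the abstract above, is typically the negative partial log-likelihood of the predicted risk scores. A minimal PyTorch sketch (Breslow-style, no tie handling), with function and variable names chosen for illustration rather than taken from the paper:

import torch

def cox_partial_likelihood_loss(risk_scores, times, events):
    """risk_scores: (N,) predicted log-risk per subject; times: (N,) follow-up times;
    events: (N,) 1 for an observed CVD event, 0 for censoring."""
    order = torch.argsort(times, descending=True)      # latest follow-up first
    risk = risk_scores[order]
    observed = events[order].float()
    # Cumulative log-sum-exp gives the log of each subject's risk set (time >= t_i).
    log_risk_set = torch.logcumsumexp(risk, dim=0)
    per_event = (risk - log_risk_set) * observed
    return -per_event.sum() / observed.sum().clamp(min=1.0)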
A graph neural network-based model with out-of-distribution robustness for enhancing antiretroviral therapy outcome prediction for HIV-1
IF 5.4, Q2 (Medicine)
Computerized Medical Imaging and Graphics, Pub Date: 2025-01-10, DOI: 10.1016/j.compmedimag.2024.102484
Giulia Di Teodoro, Federico Siciliano, Valerio Guarrasi, Anne-Mieke Vandamme, Valeria Ghisetti, Anders Sönnerborg, Maurizio Zazzi, Fabrizio Silvestri, Laura Palagi
{"title":"A graph neural network-based model with out-of-distribution robustness for enhancing antiretroviral therapy outcome prediction for HIV-1","authors":"Giulia Di Teodoro ,&nbsp;Federico Siciliano ,&nbsp;Valerio Guarrasi ,&nbsp;Anne-Mieke Vandamme ,&nbsp;Valeria Ghisetti ,&nbsp;Anders Sönnerborg ,&nbsp;Maurizio Zazzi ,&nbsp;Fabrizio Silvestri ,&nbsp;Laura Palagi","doi":"10.1016/j.compmedimag.2024.102484","DOIUrl":"10.1016/j.compmedimag.2024.102484","url":null,"abstract":"<div><div>Predicting the outcome of antiretroviral therapies (ART) for HIV-1 is a pressing clinical challenge, especially when the ART includes drugs with limited effectiveness data. This scarcity of data can arise either due to the introduction of a new drug to the market or due to limited use in clinical settings, resulting in clinical dataset with highly unbalanced therapy representation. To tackle this issue, we introduce a novel joint fusion model, which combines features from a Fully Connected (FC) Neural Network and a Graph Neural Network (GNN) in a multi-modality fashion. Our model uses both tabular data about genetic sequences and a knowledge base derived from Stanford drug-resistance mutation tables, which serve as benchmark references for deducing in-vivo treatment efficacy based on the viral genetic sequence. By leveraging this knowledge base structured as a graph, the GNN component enables our model to adapt to imbalanced data distributions and account for Out-of-Distribution (OoD) drugs. We evaluated these models’ robustness against OoD drugs in the test set. Our comprehensive analysis demonstrates that the proposed model consistently outperforms the FC model. These results underscore the advantage of integrating Stanford scores in the model, thereby enhancing its generalizability and robustness, but also extending its utility in contributing in more informed clinical decisions with limited data availability. The source code is available at <span><span>https://github.com/federicosiciliano/graph-ood-hiv</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"120 ","pages":"Article 102484"},"PeriodicalIF":5.4,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142985489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
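The joint-fusion pattern described in the abstract above, concatenating a tabular embedding with a graph-derived embedding before a shared prediction head, can be sketched as below. This is a toy PyTorch illustration with a single mean-aggregation message-passing step over a normalized adjacency matrix; the paper's GNN over the Stanford mutation graph and its training details are not reproduced here, and all dimensions are placeholders.

import torch
import torch.nn as nn

class JointFusionModel(nn.Module):
    """Toy joint fusion of a fully connected branch (tabular features) and a
    one-step graph branch (knowledge-base nodes)."""
    def __init__(self, tabular_dim, node_dim, hidden=64, n_classes=2):
        super().__init__()
        self.tabular_branch = nn.Sequential(nn.Linear(tabular_dim, hidden), nn.ReLU())
        self.node_projection = nn.Linear(node_dim, hidden)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, x_tabular, node_features, adjacency):
        # adjacency: (N, N) normalized adjacency of the knowledge graph;
        # node_features: (N, node_dim) per-node features (e.g. mutation attributes).
        h_tab = self.tabular_branch(x_tabular)                        # (B, hidden)
        h_nodes = torch.relu(self.node_projection(adjacency @ node_features))
        h_graph = h_nodes.mean(dim=0, keepdim=True).expand(h_tab.size(0), -1)
        return self.classifier(torch.cat([h_tab, h_graph], dim=1))   # (B, n_classes)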