Computerized Medical Imaging and Graphics: Latest Articles

SFPGCL: Specificity-preserving federated population graph contrastive learning for multi-site ASD identification using rs-fMRI data
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics, Volume 124, Article 102558 | Pub Date: 2025-05-16 | DOI: 10.1016/j.compmedimag.2025.102558
Yudan Ren, Zihan Ma, Zhenqing Ding, Ruonan Yang, Xiao Li, Xiaowei He, Tianming Liu

Abstract: Autism spectrum disorder (ASD) is a severe neurodevelopmental disorder that affects social communication and daily routines. Most existing imaging studies of ASD use single-site resting-state functional magnetic resonance imaging (rs-fMRI) data, which may suffer from limited sample sizes and geographic bias. Improving the generalization ability of diagnostic models often requires a large-scale dataset drawn from multiple imaging sites, yet centralizing multi-site data raises inherent challenges of privacy, security, and storage burden. Federated learning (FL) can address these issues by enabling collaborative model training without centralizing data. Nevertheless, multi-site rs-fMRI data introduces site variation, causing unfavorable data heterogeneity that degrades biomarker identification and diagnostic decisions. Moreover, previous FL approaches to fMRI analysis often ignore site-specific demographic information, such as age, gender, and full intelligence quotient (FIQ), which provides useful non-imaging features. Meanwhile, graph neural networks (GNNs) are gaining popularity in fMRI representation learning due to their powerful graph representation capabilities, but existing methods typically extract subject-specific connectivity patterns and overlook inter-subject relationships in brain functional topology. In this study, we propose a specificity-preserving federated population graph contrastive learning (SFPGCL) framework for rs-fMRI analysis and multi-site ASD identification, comprising a server and multiple clients/sites for federated model aggregation. At each client, the model consists of a shared branch and a personalized branch: parameters of the shared branch are sent to the server, while those of the personalized branch remain local. This setup facilitates invariant knowledge sharing among sites while preserving site specificity. In the shared branch, a spatio-temporal attention graph neural network learns temporal dynamics in fMRI data that are invariant across sites, and a model-contrastive learning method mitigates client data heterogeneity. In the personalized branch, a population graph structure integrates demographic information with functional network connectivity to preserve site-specific characteristics. A site-invariant population graph is then built to derive site-invariant representations from the dynamic representations acquired by the shared branch. Finally, the representations produced by the two branches are fused for classification. Experimental results on the Autism Brain Imaging Data Exchange (ABIDE) show that SFPGCL achieves 80.0% accuracy and 79.7% AUC for ASD identification, outperforming several state-of-the-art approaches.

Citations: 0
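The shared/personalized parameter split is the core federation rule of SFPGCL. The sketch below is a toy reading of the abstract rather than the authors' code: a FedAvg-style server round averages only the shared-branch weights, while each site's personalized branch never leaves the client. The module names (`shared`, `personalized`) and layer sizes are illustrative assumptions.

```python
# Minimal sketch of "shared branch is federated, personalized branch stays local".
import torch
import torch.nn as nn

class ClientModel(nn.Module):
    def __init__(self, in_dim=116, hid=64, n_classes=2):
        super().__init__()
        # stand-ins for the spatio-temporal attention GNN (shared) and the
        # population-graph branch (personalized) described in the abstract
        self.shared = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.personalized = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.classifier = nn.Linear(2 * hid, n_classes)

    def forward(self, x):
        z = torch.cat([self.shared(x), self.personalized(x)], dim=-1)
        return self.classifier(z)

def aggregate_shared(clients):
    """FedAvg restricted to 'shared.' parameters; personalized ones never leave the site."""
    shared_keys = [k for k in clients[0].state_dict() if k.startswith("shared.")]
    avg = {k: torch.stack([c.state_dict()[k].float() for c in clients]).mean(0)
           for k in shared_keys}
    for c in clients:
        c.load_state_dict(avg, strict=False)  # strict=False: only shared weights overwritten

sites = [ClientModel() for _ in range(3)]
aggregate_shared(sites)  # one communication round after local training
```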
Fast cortical thickness estimation using deep learning-based anatomy segmentation and diffeomorphic registration
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics, Volume 123, Article 102569 | Pub Date: 2025-05-13 | DOI: 10.1016/j.compmedimag.2025.102569
Jiong Wu, Shuang Zhou

Abstract: Accurately and efficiently estimating cortical thickness from magnetic resonance images (MRIs) is crucial for neuroscientific studies and clinical applications involving large-scale datasets. Diffeomorphic registration-based cortical thickness estimation (DiReCT) is a prominent traditional method that computes such measures directly from original MRIs by applying diffeomorphic registration to segmented tissues. However, it suffers from prolonged computation time and limited reproducibility, impeding its application in large-scale studies or real-time environments. This paper proposes a framework for cortical thickness estimation using deep learning-based anatomy segmentation and diffeomorphic registration. The framework first applies a convolutional neural network (CNN) segmentation model to the original image, generating a segmentation map that accurately delineates the cortical boundaries. A pair of distance maps generated from the segmentation map is then fed into an unsupervised learning-based registration network for fast, diffeomorphic registration. A novel algorithm based on diffeomorphisms at different time points computes the final thickness map. We systematically evaluated and compared our method with surface-based measures from FreeSurfer on two distinct datasets. The experimental results demonstrate superior performance of the proposed method, surpassing DiReCT and DL+DiReCT in time efficiency and consistency with FreeSurfer. Code and pre-trained models are publicly available at https://github.com/wujiong-hub/DL-CTE.git.

Citations: 0
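The "pair of distance maps generated from the segmentation map" suggests a simple preprocessing step. Below is a minimal sketch, assuming (this is our reading, not the released DL-CTE code) that the pair consists of signed Euclidean distance transforms of the inner (GM/WM) and outer (pial) cortical boundaries; the label coding is hypothetical.

```python
# Toy distance-map preprocessing for a registration network's input.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed EDT: negative inside the structure, positive outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return outside - inside

seg = np.zeros((96, 96, 96), dtype=np.uint8)   # hypothetical label map
seg[25:65, 25:65, 25:65] = 1                   # 1 = gray matter (assumed coding)
seg[30:60, 30:60, 30:60] = 2                   # 2 = white matter (assumed coding)

wm_dist = signed_distance(seg == 2)            # inner cortical (GM/WM) surface
pial_dist = signed_distance(seg >= 1)          # outer (pial) surface
reg_input = np.stack([wm_dist, pial_dist])     # the pair fed to the registration net
```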
Trustworthy AI for stage IV non-small cell lung cancer: Automatic segmentation and uncertainty quantification
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics, Volume 123, Article 102567 | Pub Date: 2025-05-13 | DOI: 10.1016/j.compmedimag.2025.102567
Sacha Dedeken, Pierre-Henri Conze, Vera Damerjian Pieters, Olivier Gallinato, Jérôme Faure, Thierry Colin, Dimitris Visvikis

Abstract: Accurate segmentation of lung tumors is essential for advancing personalized medicine in non-small cell lung cancer (NSCLC). However, stage IV NSCLC presents significant challenges due to heterogeneous tumor morphology and the presence of associated conditions including infection, atelectasis, and pleural effusion. The complexity of multicentric datasets further complicates robust segmentation across diverse clinical settings. In this study, we evaluate deep-learning-based approaches for automated segmentation of advanced-stage lung tumors using 3D architectures on 387 CT scans from the Deep-Lung-IV study. Through comprehensive experiments, we assess the impact of model design, HU windowing, and dataset size on delineation performance, providing practical guidelines for robust implementation. Additionally, we propose a confidence score based on deep ensembles to quantify prediction uncertainty and automate the identification of complex cases that require further review. Our results demonstrate the potential of attention-based architectures and specific preprocessing strategies to improve segmentation quality in this challenging clinical scenario, while emphasizing the importance of uncertainty estimation for building trustworthy AI systems in medical imaging. Code is available at https://github.com/Sacha-Dedeken/SegStageIVNSCLC.

Citations: 0
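Two ingredients named in the abstract, HU windowing and a deep-ensemble confidence score, can be illustrated compactly. The sketch below uses an assumed lung-style window and defines confidence as one minus the mean ensemble disagreement inside the predicted tumor; the paper's exact window and score definition may differ.

```python
# Hedged sketch: HU windowing preprocessing + ensemble-based confidence score.
import numpy as np

def hu_window(ct, center=-300.0, width=1400.0):
    """Clip a CT volume to an assumed lung-style HU window and rescale to [0, 1]."""
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(ct, lo, hi) - lo) / (hi - lo)

def ensemble_confidence(prob_maps):
    """prob_maps: (M, D, H, W) foreground probabilities from M ensemble members.
    Returns the mean prediction and confidence = 1 - mean voxelwise std in the tumor."""
    probs = np.asarray(prob_maps)
    mean_pred = probs.mean(axis=0)
    disagreement = probs.std(axis=0)[mean_pred > 0.5]
    conf = 1.0 - (disagreement.mean() if disagreement.size else 0.0)
    return mean_pred, float(conf)

ct = np.random.uniform(-1000, 400, size=(32, 64, 64))
x = hu_window(ct)
members = [np.random.rand(32, 64, 64) for _ in range(5)]  # stand-in model outputs
_, score = ensemble_confidence(members)  # low score -> flag the case for expert review
```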
Bi-VesTreeFormer: A bidirectional topology-aware transformer framework for coronary vFFR estimation
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics, Volume 123, Article 102564 | Pub Date: 2025-05-11 | DOI: 10.1016/j.compmedimag.2025.102564
Congyu Tian, Zehua Liu, Linyuan Wang, Liang Shao, Yongzhi Deng, Xiangyun Liao, Weixin Si

Abstract: Fractional flow reserve (FFR) serves as the gold standard for evaluating the functional significance of coronary artery stenosis. However, traditional FFR requires the injection of vasodilator drugs and the use of an additional guidewire, which adds patient risk and cost. Computational fluid dynamics-based approaches enable non-invasive virtual FFR (vFFR) estimation, but they are computationally intensive and time-consuming. Although deep learning can markedly improve computational efficiency, existing vFFR methods rely heavily on manually crafted features and struggle to capture long-distance dependencies within the vessel structure. In this study, we propose a novel framework for estimating coronary vFFR that avoids the laborious preprocessing of previous methods. Specifically, a novel bidirectional topology-aware transformer network (Bi-VesTreeFormer) performs fully automated extraction of topological stenotic features from the vessel tree and captures global dependencies among branches. Additionally, a contextual vFFR decoder establishes the correlation of FFR values between adjacent branches and achieves a stable mapping of FFR values to the latent vector space. To train and validate our method, we gathered FFR data from 43 patients with coronary artery stenosis and simulated 15,000 coronary artery centerlines with a reduced-order hemodynamic model. The proposed method attains root mean square errors of 0.038 and 0.048 on simulated and real data, respectively, surpassing state-of-the-art methods.

Citations: 0
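A schematic of the data flow we infer from the abstract (not the released model): each vessel-tree branch becomes a token, a bidirectional transformer encoder exchanges information among branches, and a head regresses one vFFR value per branch. Feature and model dimensions here are arbitrary.

```python
# Toy per-branch vFFR regressor with global branch-to-branch attention.
import torch
import torch.nn as nn

class BranchFFRRegressor(nn.Module):
    def __init__(self, feat_dim=32, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, branch_feats, pad_mask):
        # branch_feats: (B, n_branches, feat_dim); pad_mask: True where padded
        h = self.encoder(self.embed(branch_feats), src_key_padding_mask=pad_mask)
        return torch.sigmoid(self.head(h)).squeeze(-1)  # FFR estimates in (0, 1)

model = BranchFFRRegressor()
feats = torch.randn(2, 12, 32)               # 12 branches of geometric features
mask = torch.zeros(2, 12, dtype=torch.bool)  # no padding in this toy batch
ffr = model(feats, mask)                     # (2, 12) per-branch estimates
```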
Mamba-based deformable medical image registration with an annotated brain MR-CT dataset
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics, Volume 123, Article 102566 | Pub Date: 2025-05-10 | DOI: 10.1016/j.compmedimag.2025.102566
Yinuo Wang, Tao Guo, Weimin Yuan, Shihao Shu, Cai Meng, Xiangzhi Bai

Abstract: Deformable registration is essential in medical image analysis, especially for handling multi- and mono-modal registration tasks in neuroimaging. Existing studies have left brain MR-CT registration largely unexplored, and learning-based methods still face challenges in improving both accuracy and efficiency. To broaden the practice of multi-modal registration in the brain, we present SR-Reg, a new benchmark dataset comprising 180 paired volumetric MR-CT images with annotated anatomical regions. Building on this foundation, we introduce MambaMorph, a novel deformable registration network that uses the efficient state space model Mamba for global feature learning together with a fine-grained feature extractor for low-level embedding. Experimental results demonstrate that MambaMorph surpasses advanced ConvNet-based and Transformer-based networks across several multi- and mono-modal tasks, with impressive gains in both efficacy and efficiency. Code and dataset are available at https://github.com/mileswyn/MambaMorph.

Citations: 0
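Independent of the Mamba backbone, a deformable registration network of this kind warps the moving image with a predicted dense displacement field. The sketch below shows that generic resampling step with `grid_sample`; it is standard machinery, not code from the MambaMorph repository.

```python
# Generic 3D warping step used by deformable registration networks.
import torch
import torch.nn.functional as F

def warp(moving, disp):
    """moving: (B, C, D, H, W); disp: (B, 3, D, H, W), channels = (dx, dy, dz) in voxels."""
    B, _, D, H, W = moving.shape
    zz, yy, xx = torch.meshgrid(torch.arange(D), torch.arange(H),
                                torch.arange(W), indexing="ij")
    base = torch.stack([xx, yy, zz]).float().unsqueeze(0)  # (1, 3, D, H, W)
    grid = base + disp
    for i, size in enumerate((W, H, D)):                   # normalize to [-1, 1]
        grid[:, i] = 2 * grid[:, i] / (size - 1) - 1
    # grid_sample expects (B, D, H, W, 3) with last dim ordered (x, y, z)
    return F.grid_sample(moving, grid.permute(0, 2, 3, 4, 1), align_corners=True)

mr = torch.randn(1, 1, 32, 32, 32)       # toy moving MR volume
disp = torch.zeros(1, 3, 32, 32, 32)     # zero field -> identity warp
assert torch.allclose(warp(mr, disp), mr, atol=1e-5)
```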
A deep learning framework for reconstructing Breast Amide Proton Transfer weighted imaging sequences from sparse frequency offsets to dense frequency offsets
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics, Volume 123, Article 102563 | Pub Date: 2025-05-09 | DOI: 10.1016/j.compmedimag.2025.102563
Qiuhui Yang, Shu Su, Tianyu Zhang, Meng Wang, Weiqiang Dou, Kefeng Li, Ya Ren, Yijia Zheng, Mingwei Wang, Yi Xu, Yue Sun, Zhou Liu, Tao Tan

Abstract: Amide proton transfer (APT) imaging is a novel functional MRI technique that enables quantification of protein metabolism, but its long acquisition time largely limits its wide clinical application. One way to reduce scanning time is to acquire fewer frequency-offset images. However, sparse frequency-offset images are inadequate for fitting the z-spectrum, the curve essential to quantifying the APT effect, which might compromise quantification. In this study, we develop a deep learning-based model that reconstructs dense frequency offsets from sparse ones, potentially reducing scanning time. We propose to leverage time-series convolution to extract both short- and long-range spatial and frequency features of the APT imaging sequence. Our proposed model outperforms other seq2seq models, achieving superior reconstruction with a peak signal-to-noise ratio of 45.8 (95% confidence interval (CI): [44.9, 46.7]) and a structural similarity index of 0.989 (95% CI: [0.987, 0.993]) for the tumor region. We integrated a weighted layer into the model to evaluate the impact of each frequency offset on the reconstruction; the weights learned for the offsets at ±6.5 ppm, 0 ppm, and 3.5 ppm proved the most significant. Experimental results demonstrate that the model effectively reconstructs dense frequency offsets (n = 29, from 7 to −7 ppm at 0.5 ppm intervals) from data with 21 frequency offsets, reducing scanning time by 25%. This work presents a method for shortening APT acquisition time, offering guidance for parameter settings in APT imaging and serving as a valuable reference for clinicians.

Citations: 0
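The interface of the reconstruction task is easy to sketch: 21 acquired frequency-offset images in, 29 dense ones out, with a learnable per-offset weight mirroring the abstract's weighted layer. The tiny 1-D convolutional model below is an assumed stand-in, not the published architecture.

```python
# Toy sparse-to-dense z-spectrum reconstruction with a learnable offset weighting.
import torch
import torch.nn as nn

class SparseToDenseZSpectrum(nn.Module):
    def __init__(self, n_sparse=21, n_dense=29):
        super().__init__()
        self.offset_weight = nn.Parameter(torch.ones(n_sparse))  # importance per offset
        self.net = nn.Sequential(
            nn.Conv1d(n_sparse, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(64, n_dense, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # x: (B, 21, n_voxels) -- each channel is one acquired frequency offset
        x = x * self.offset_weight.view(1, -1, 1)
        return self.net(x)  # (B, 29, n_voxels) reconstructed dense z-spectra

model = SparseToDenseZSpectrum()
sparse = torch.rand(4, 21, 256)  # 256 flattened voxels per toy sample
dense = model(sparse)            # inspect model.offset_weight after training
```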
CALIMAR-GAN: An unpaired mask-guided attention network for metal artifact reduction in CT scans
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics, Volume 123, Article 102565 | Pub Date: 2025-05-09 | DOI: 10.1016/j.compmedimag.2025.102565
Roberto Maria Scardigno, Antonio Brunetti, Pietro Maria Marvulli, Raffaele Carli, Mariagrazia Dotoli, Vitoantonio Bevilacqua, Domenico Buongiorno

Abstract: High-quality computed tomography (CT) scans are essential for accurate diagnostic and therapeutic decisions, but metal objects within the body produce distortions that lower image quality. Deep learning (DL) approaches that use image-to-image translation for metal artifact reduction (MAR) show promise over traditional methods but often introduce secondary artifacts. Additionally, most rely on paired simulated data, given the limited availability of real paired clinical data, restricting evaluation on clinical scans to qualitative analysis. This work presents CALIMAR-GAN, a generative adversarial network (GAN) that employs a guided attention mechanism and the linear interpolation algorithm to reduce artifacts using unpaired simulated and clinical data for targeted artifact reduction. Quantitative evaluations on simulated images demonstrated superior performance, achieving a PSNR of 31.7, an SSIM of 0.877, and a Fréchet inception distance (FID) of 22.1, outperforming state-of-the-art methods. On real clinical images, CALIMAR-GAN achieved the lowest FID (32.7), validated as a valuable complement to qualitative assessment through correlation with pixel-based metrics (r = −0.797 with PSNR, p < 0.01; r = −0.767 with MS-SSIM, p < 0.01). This work advances DL-based artifact reduction toward clinical practice with high-fidelity reconstructions that enhance diagnostic accuracy and therapeutic outcomes. Code is available at https://github.com/roberto722/calimar-gan.

Citations: 0
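CALIMAR-GAN builds on the classical linear-interpolation (LI) algorithm, which on its own is simple to state: sinogram bins flagged by the metal trace are replaced by linear interpolation along each detector row. The sketch below shows LI only, not the GAN or its mask-guided attention.

```python
# Classical linear-interpolation MAR over the metal trace of a sinogram.
import numpy as np

def li_mar(sinogram, metal_trace):
    """sinogram: (n_angles, n_bins); metal_trace: boolean mask of corrupted bins."""
    out = sinogram.copy()
    bins = np.arange(sinogram.shape[1])
    for a in range(sinogram.shape[0]):
        bad = metal_trace[a]
        if bad.any() and (~bad).any():
            # replace corrupted bins by interpolating the clean neighbors in this row
            out[a, bad] = np.interp(bins[bad], bins[~bad], sinogram[a, ~bad])
    return out

sino = np.random.rand(180, 256)
trace = np.zeros_like(sino, dtype=bool)
trace[:, 120:136] = True            # hypothetical metal shadow
corrected = li_mar(sino, trace)     # artifact bins replaced by interpolation
```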
Establishment of an intelligent analysis system for clinical image features of melanonychia based on deep learning image segmentation
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics, Volume 123, Article 102543 | Pub Date: 2025-05-06 | DOI: 10.1016/j.compmedimag.2025.102543
WengIoi Mio, Ruiyue Chen, Jiayan Lv, Sien Mai, Yanqing Chen, Mengwen He, Xin Zhang, Han Ma

Abstract: Melanonychia, a condition that can be indicative of malignant melanoma, presents a significant challenge for early diagnosis because traditional diagnostic methods such as nail biopsy and dermatoscope imaging are invasive or equipment-dependent. This study introduces a non-invasive intelligent analysis and follow-up system for melanonychia based on smartphone imagery, harnessing deep learning to facilitate early detection and monitoring. Through a cross-sectional study, we developed a comprehensive nail image dataset and a two-stage model comprising a YOLOv8-based nail detection system and a UNet-based image segmentation system. The integrated YOLOv8 and UNet model achieved high accuracy and reliability in detecting and segmenting melanonychia lesions, with performance on metrics such as F1, Dice, specificity, and sensitivity significantly outperforming traditional methods and closely aligning with dermatoscopic assessments. This artificial intelligence (AI)-based system offers a user-friendly, accessible tool for both clinicians and patients, enhances the ability to diagnose and monitor melanonychia, and holds the potential to improve early detection and treatment outcomes.

Citations: 0
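A pipeline sketch of the two-stage design: YOLOv8 proposes nail boxes, and each crop is segmented. The `ultralytics` calls follow its public API, but the weights and the `TinyUNet` stand-in (the paper uses a full UNet) are hypothetical placeholders.

```python
# Two-stage detect-then-segment pipeline sketch.
import torch
import torch.nn as nn
from ultralytics import YOLO

detector = YOLO("yolov8n.pt")  # stand-in weights; a nail-tuned model in practice

class TinyUNet(nn.Module):  # stand-in for the paper's UNet segmenter
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
        self.dec = nn.Conv2d(8, 1, 1)
    def forward(self, x):
        return self.dec(self.enc(x))

segmenter = TinyUNet().eval()

def analyze(image):  # image: (H, W, 3) uint8 RGB numpy array
    lesions = []
    for box in detector(image)[0].boxes.xyxy.cpu().numpy().astype(int):
        x1, y1, x2, y2 = box
        crop = torch.from_numpy(image[y1:y2, x1:x2]).permute(2, 0, 1)
        x = crop.float().unsqueeze(0) / 255
        with torch.no_grad():
            mask = (segmenter(x).sigmoid() > 0.5).squeeze(0).squeeze(0).numpy()
        lesions.append((box, mask))  # per-nail melanonychia mask
    return lesions
```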
A diffusion-stimulated CT-US registration model with self-supervised learning and synthetic-to-real domain adaptation
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics, Volume 123, Article 102562 | Pub Date: 2025-05-06 | DOI: 10.1016/j.compmedimag.2025.102562
Shangxuan Li, Biao Jia, Weiming Huang, Xiaobo Zhang, Wu Zhou, Cheng Wang, Gaojun Teng

Abstract: In abdominal interventional procedures, achieving precise registration of 2D ultrasound (US) frames with 3D computed tomography (CT) scans presents a significant challenge. Traditional tracking methods often rely on high-precision sensors, which can be prohibitively expensive. Furthermore, the clinical need for real-time registration with a broad capture range frequently exceeds the performance of standard image-based optimization techniques. Current automatic registration methods that utilize deep learning either rely heavily on manual annotations for training or struggle to bridge the gap between imaging domains. To address these challenges, we propose a novel diffusion-stimulated CT-US registration model. The model harnesses the physical diffusion properties of US to generate synthetic US images from preoperative CT data. Additionally, we introduce a synthetic-to-real domain adaptation strategy based on a diffusion model to mitigate the discrepancies between real and synthetic US images. A dual-stream self-supervised regression neural network, trained on these synthetic images, then estimates the pose within CT space. The effectiveness of the proposed approach is verified on US and CT scans of a dual-modality human abdominal phantom. Our experiments confirm that the method accurately initializes the US image pose within an acceptable error range and subsequently refines it to achieve precise alignment, enabling real-time, tracker-independent, and robust rigid registration of CT and US images.

Citations: 0
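A compact sketch of the pose-regression interface we infer from the abstract: a dual-stream network ingests the intraoperative US frame and a CT-derived synthetic US frame and regresses a 6-DoF rigid pose (three rotations, three translations). Layer sizes are arbitrary; the self-supervision comes from the known poses of the synthetic pairs.

```python
# Toy dual-stream 6-DoF pose regressor.
import torch
import torch.nn as nn

class DualStreamPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        def stream():  # small shared-architecture CNN encoder per modality
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.real_us, self.synth_us = stream(), stream()
        self.head = nn.Linear(64, 6)  # (rx, ry, rz, tx, ty, tz)

    def forward(self, us, synth):
        z = torch.cat([self.real_us(us), self.synth_us(synth)], dim=1)
        return self.head(z)

net = DualStreamPoseNet()
pose = net(torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128))
# training signal: synthetic frames have known poses, so an L2 loss on `pose`
# yields self-supervision without manual annotation
```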
Enhanced glaucoma classification through advanced segmentation by integrating cup-to-disc ratio and neuro-retinal rim features
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics, Volume 123, Article 102559 | Pub Date: 2025-04-28 | DOI: 10.1016/j.compmedimag.2025.102559
Rabia Pannu, Muhammad Zubair, Muhammad Owais, Shoaib Hassan, Muhammad Umair, Syed Muhammad Usman, Mousa Ahmed Albashrawi, Irfan Hussain

Abstract: Glaucoma is a progressive eye condition caused by high intraocular fluid pressure that damages the optic nerve, leading to gradual, irreversible vision loss, often without noticeable symptoms. Subtle signs like mild eye redness, slightly blurred vision, and eye pain may go unnoticed, earning it the nickname "silent thief of sight." Its prevalence is rising with an aging population, driven by increased life expectancy. Most computer-aided diagnosis (CAD) systems rely on the cup-to-disc ratio (CDR) for glaucoma diagnosis. This study introduces a novel approach that integrates CDR with the neuro-retinal rim ratio (NRR), which quantifies rim thickness within the optic disc (OD). NRR enhances diagnostic accuracy by capturing additional optic nerve head changes, such as rim thinning and tissue loss, that are overlooked when using CDR alone. A modified ResUNet architecture, combining residual learning with U-Net to capture spatial context for semantic segmentation, is used for OD and optic cup (OC) segmentation. For OC segmentation, the model achieved Dice coefficient (DC) scores of 0.942 and 0.872 and intersection-over-union (IoU) values of 0.891 and 0.773 on DRISHTI-GS and RIM-ONE, respectively. For OD segmentation, it achieved DC scores of 0.972 and 0.950 and IoU values of 0.945 and 0.940 on DRISHTI-GS and RIM-ONE, respectively. External evaluation on ORIGA and REFUGE confirmed the model's robustness and generalizability. CDR and NRR were calculated from the segmentation masks and used to train an SVM with a radial basis function kernel to classify eyes as healthy or glaucomatous. The model achieved accuracies of 0.969 on DRISHTI-GS and 0.977 on RIM-ONE.

Citations: 0
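The feature-plus-classifier stage is concrete enough for a worked sketch: a vertical cup-to-disc ratio and a neuro-retinal rim ratio computed from binary masks feed an RBF-kernel SVM. The NRR definition used here (rim area over disc area) is our assumption; the paper's exact formula may differ.

```python
# Worked sketch: CDR/NRR features from masks, then an RBF-kernel SVM.
import numpy as np
from sklearn.svm import SVC

def vertical_extent(mask):
    rows = np.where(mask.any(axis=1))[0]
    return (rows.max() - rows.min() + 1) if rows.size else 0

def glaucoma_features(disc_mask, cup_mask):
    cdr = vertical_extent(cup_mask) / max(vertical_extent(disc_mask), 1)
    nrr = (disc_mask & ~cup_mask).sum() / max(disc_mask.sum(), 1)  # rim / disc area
    return [cdr, nrr]

# toy masks: a circular disc with a concentric cup
yy, xx = np.ogrid[:128, :128]
disc = (yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2
cup = (yy - 64) ** 2 + (xx - 64) ** 2 < 22 ** 2

rng = np.random.default_rng(0)
base = glaucoma_features(disc, cup)
X = base + rng.normal(0, 0.05, size=(20, 2))  # jittered stand-in dataset
y = (X[:, 0] > base[0]).astype(int)           # toy labels via a CDR threshold
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([glaucoma_features(disc, cup)]))
```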