Computerized Medical Imaging and Graphics: Latest Articles

Deep learning model for malignancy prediction of TI-RADS 4 thyroid nodules with high-risk characteristics using multimodal ultrasound: A multicentre study
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics | Pub Date: 2025-05-26 | DOI: 10.1016/j.compmedimag.2025.102576
Xuan Chu, Tengfei Wang, Meiwen Chen, Jingyu Li, Luyao Wang, Chengjie Wang, Hongzhi Wang, Stephen TC Wong, Yongchao Chen, Hai Li
{"title":"Deep learning model for malignancy prediction of TI-RADS 4 thyroid nodules with high-risk characteristics using multimodal ultrasound: A multicentre study","authors":"Xuan Chu ,&nbsp;Tengfei Wang ,&nbsp;Meiwen Chen ,&nbsp;Jingyu Li ,&nbsp;Luyao Wang ,&nbsp;Chengjie Wang ,&nbsp;Hongzhi Wang ,&nbsp;Stephen TC Wong ,&nbsp;Yongchao Chen ,&nbsp;Hai Li","doi":"10.1016/j.compmedimag.2025.102576","DOIUrl":"10.1016/j.compmedimag.2025.102576","url":null,"abstract":"<div><div>The automatic screening of thyroid nodules using computer-aided diagnosis holds great promise in reducing missed and misdiagnosed cases in clinical practice. However, most current research focuses on single-modal images and does not fully leverage the comprehensive information from multimodal medical images, limiting model performance. To enhance screening accuracy, this study uses a deep learning framework that integrates high-dimensional convolutions of B-mode ultrasound (BMUS) and strain elastography (SE) images to predict the malignancy of TI-RADS 4 thyroid nodules with high-risk features. First, we extract nodule regions from the images and expand the boundary areas. Then, adaptive particle swarm optimization (APSO) and contrast limited adaptive histogram equalization (CLAHE) algorithms are applied to enhance ultrasound image contrast. Finally, deep learning techniques are used to extract and fuse high-dimensional features from both ultrasound modalities to classify benign and malignant thyroid nodules. The proposed model achieved an AUC of 0.937 (95 % CI 0.917–0.949) and 0.927 (95 % CI 0.907–0.948) in the test and external validation sets, respectively, demonstrating strong generalization ability. When compared with the diagnostic performance of three groups of radiologists, the model outperformed them significantly. Meanwhile, with the model's assistance, all three radiologist groups showed improved diagnostic performance. Furthermore, heatmaps generated by the model show a high alignment with radiologists' expertise, further confirming its credibility. The results indicate that our model can assist in clinical thyroid nodule diagnosis, reducing the risk of missed and misdiagnosed diagnoses, particularly for high-risk populations, and holds significant clinical value.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102576"},"PeriodicalIF":5.4,"publicationDate":"2025-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144169270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PDS-UKAN: Subdivision hopping connected to the U-KAN network for medical image segmentation
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics | Pub Date: 2025-05-23 | DOI: 10.1016/j.compmedimag.2025.102568
Liwei Deng, Wenbo Wang, Songyu Chen, Xin Yang, Sijuan Huang, Jing Wang
{"title":"PDS-UKAN: Subdivision hopping connected to the U-KAN network for medical image segmentation","authors":"Liwei Deng ,&nbsp;Wenbo Wang ,&nbsp;Songyu Chen ,&nbsp;Xin Yang ,&nbsp;Sijuan Huang ,&nbsp;Jing Wang","doi":"10.1016/j.compmedimag.2025.102568","DOIUrl":"10.1016/j.compmedimag.2025.102568","url":null,"abstract":"<div><div>Accurate and efficient segmentation of medical images plays a vital role in clinical tasks, such as diagnostic procedures and planning treatments. Traditional U-shaped encoder-decoder architectures, built on convolutional and transformer-based networks, have shown strong performance in medical image processing. However, the simple skip connections commonly used in these networks face limitations, such as insufficient nonlinear modeling capacity, weak global multiscale context modeling, and limited interpretability. To address these challenges, this study proposes the PDS-UKAN network, an innovative subdivision-based U-KAN architecture, designed to improve segmentation accuracy. The PDS-UKAN incorporates a PKAN module—comprising partial convolutions and Kolmogorov - Arnold network layers—into the encoder bottleneck, enhancing the network's nonlinear modeling and interpretability. Additionally, the proposed Dual-Branch Convolutional Boundary Enhancement Module (DBE) focuses on pixel-level boundary refinement, improving edge detail preservation in shallow skip connections. Meanwhile, the Skip Connection Channel Spatial Attention Module (SCCSA) mechanism is applied in the deeper skip connections to strengthen cross-dimensional interactions between channels and spatial features, mitigating the loss of spatial information due to downsampling. Extensive experiments across multiple medical imaging datasets demonstrate that PDS-UKAN consistently achieves superior performance compared to state-of-the-art (SOTA) methods.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102568"},"PeriodicalIF":5.4,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144138122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MFFUNet: A hybrid model with cross-attention-guided multi-feature fusion for automated segmentation of organs at risk in cervical cancer brachytherapy
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics | Pub Date: 2025-05-22 | DOI: 10.1016/j.compmedimag.2025.102571
Yin Gu, Huimin Guo, Jiahao Zhang, Yuhua Gao, Yuexian Li, Ming Cui, Wei Qian, He Ma
{"title":"MFFUNet: A hybrid model with cross-attention-guided multi-feature fusion for automated segmentation of organs at risk in cervical cancer brachytherapy","authors":"Yin Gu ,&nbsp;Huimin Guo ,&nbsp;Jiahao Zhang ,&nbsp;Yuhua Gao ,&nbsp;Yuexian Li ,&nbsp;Ming Cui ,&nbsp;Wei Qian ,&nbsp;He Ma","doi":"10.1016/j.compmedimag.2025.102571","DOIUrl":"10.1016/j.compmedimag.2025.102571","url":null,"abstract":"<div><div>Brachytherapy is a common treatment option for cervical cancer. An important step involved in brachytherapy is the delineation of organs at risk (OARs) based on computed tomography (CT) images. Automating OARs segmentation in brachytherapy has the benefit of both reducing the time and improving the quality of radiation therapy planning. This paper introduces a novel segmentation model named MFFUNet for the automatic contour delineation of OARs in cervical cancer brachytherapy. The proposed model employs a staged encoder–decoder structure, integrating the self-attention mechanism of Transformer with the CNN framework. A novel multi-features fusion (MFF) block with a cross-attention-guided feature fusion mechanism is also proposed, which efficiently extracts and cross-fuses features from multiple receptive fields, enriching the semantic information of the features and thus improving the performance of complex segmentation tasks. A private CT image dataset of 95 patients with cervical cancer undergoing brachytherapy is used to evaluate the segmentation performance of the proposed method. The OARs in the data consist of the bladder, rectum, and colon surrounding the cervix. The proposed model surpasses current mainstream OARs segmentation models in terms of segmentation accuracy. The mean Dice similarity coefficient (DSC) score of all three OARs has achieved 73.69%. Among them, the DSC score for the bladder is 92.65%, for the rectum is 66.55%, and for the colon is 61.86%. Moreover, we also conducted experiments on two common public thoracoabdominal multi-organ CT datasets. The excellent segmentation performance further demonstrates the generalization ability of our model. In conclusion, MFFUNet has demonstrated outstanding effectiveness in segmenting OARs for cervical cancer brachytherapy. By accurately delineating OARs, it enhances radiotherapy planning precision and helps reduce radiation toxicity, improving patient outcomes.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102571"},"PeriodicalIF":5.4,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144147666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SFPGCL: Specificity-preserving federated population graph contrastive learning for multi-site ASD identification using rs-fMRI data
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics | Pub Date: 2025-05-16 | DOI: 10.1016/j.compmedimag.2025.102558
Yudan Ren, Zihan Ma, Zhenqing Ding, Ruonan Yang, Xiao Li, Xiaowei He, Tianming Liu
{"title":"SFPGCL: Specificity-preserving federated population graph contrastive learning for multi-site ASD identification using rs-fMRI data","authors":"Yudan Ren ,&nbsp;Zihan Ma ,&nbsp;Zhenqing Ding ,&nbsp;Ruonan Yang ,&nbsp;Xiao Li ,&nbsp;Xiaowei He ,&nbsp;Tianming Liu","doi":"10.1016/j.compmedimag.2025.102558","DOIUrl":"10.1016/j.compmedimag.2025.102558","url":null,"abstract":"<div><div>Autism spectrum disorder (ASD) is a severe neurodevelopmental disorder that affects people’s social communication and daily routine. Most existing imaging studies on ASD use single site resting-state functional magnetic resonance imaging (rs-fMRI) data, which may suffer from limited samples and geographic bias. Improving the generalization ability of the diagnostic models often necessitates a large-scale dataset from multiple imaging sites. However, centralizing multi-site data generally faces inherent challenges related to privacy, security, and storage burden. Federated learning (FL) can address these issues by enabling collaborative model training without centralizing data. Nevertheless, multi-site rs-fMRI data introduces site variations, causing unfavorable data heterogeneity. This negatively impacts biomarker identification and diagnostic decision. Moreover, previous FL approaches for fMRI analysis often ignore site-specific demographic information, such as age, gender, and full intelligence quotient (FIQ), providing useful information as non-imaging features. On the other hand, Graph Neural Networks (GNNs) are gaining popularity in fMRI representation learning due to their powerful graph representation capabilities. However, existing methods often focus on extracting subject-specific connectivity patterns and overlook inter-subject relationships in brain functional topology. In this study, we propose a specificity-preserving federated population graph contrastive learning (SFPGCL) framework for rs-fMRI analysis and multi-site ASD identification, including a server and multiple clients/sites for federated model aggregation. At each client, our model consists of a shared branch and a personalized branch, where parameters of the shared branch are sent to the sever, while those of the personalized branch remain local. This setup facilitates invariant knowledge sharing among sites and also helps preserve site specificity. In the shared branch, we employ a spatio-temporal attention graph neural network to learn temporal dynamics in fMRI data invariant to each site, and introduce a model-contrastive learning method to mitigate client data heterogeneity. In the personalized branch, we utilize population graph structure to fully integrate demographic information and functional network connectivity to preserve site-specific characteristics. Then, a site-invariant population graph is built to derive site-invariant representations based on the dynamic representations acquired from the shared branch. Finally, representations generated by the two branches are fused for classification. 
Experimental results on Autism Brain Imaging Data Exchange (ABIDE) show that SFPGCL achieves 80.0 % accuracy and 79.7 % AUC for ASD identification, which outperforms several other state-of-the art approaches.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102558"},"PeriodicalIF":5.4,"publicationDate":"2025-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144134879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
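The shared/personalized split can be made concrete with a toy FedAvg-style aggregation: only parameters under a shared name prefix are averaged across sites, while personalized-branch parameters stay local. The `shared.` prefix is a hypothetical convention for this sketch, not the authors' code.

```python
# Toy FedAvg-style aggregation in which only "shared branch" parameters
# are averaged across sites, mirroring the paper's split at a high
# level. The name-prefix convention is our assumption.
import copy
import torch

def aggregate_shared(client_states, shared_prefix: str = "shared."):
    """Average parameters whose names start with shared_prefix."""
    global_state = copy.deepcopy(client_states[0])
    for name in global_state:
        if name.startswith(shared_prefix):
            global_state[name] = torch.stack(
                [cs[name].float() for cs in client_states]
            ).mean(dim=0)
    return {k: v for k, v in global_state.items() if k.startswith(shared_prefix)}

c1 = {"shared.w": torch.ones(2), "personal.w": torch.zeros(2)}
c2 = {"shared.w": 3 * torch.ones(2), "personal.w": torch.ones(2)}
print(aggregate_shared([c1, c2]))  # {'shared.w': tensor([2., 2.])}
# Each client then reloads the averaged shared weights with
# model.load_state_dict(shared_state, strict=False), leaving its
# personalized branch untouched.
```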
Citations: 0
Fast cortical thickness estimation using deep learning-based anatomy segmentation and diffeomorphic registration
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics | Pub Date: 2025-05-13 | DOI: 10.1016/j.compmedimag.2025.102569
Jiong Wu, Shuang Zhou
{"title":"Fast cortical thickness estimation using deep learning-based anatomy segmentation and diffeomorphic registration","authors":"Jiong Wu ,&nbsp;Shuang Zhou","doi":"10.1016/j.compmedimag.2025.102569","DOIUrl":"10.1016/j.compmedimag.2025.102569","url":null,"abstract":"<div><div>Accurately and efficiently estimating the cortical thickness from magnetic resonance images (MRIs) is crucial for neuroscientific studies and clinical applications with various large-scale datasets. Diffeomorphic registration-based cortical thickness estimation (DiReCT) is a prominent traditional method of calculating such measures directly from original MRIs by applying diffeomorphic registration on segmented tissues. However, it suffers from prolonged computational time and limited reproducibility, impediments to its application in large-scale studies or real-time environments. This paper proposes a framework for cortical thickness estimation using deep learning-based anatomy segmentation and diffeomorphic registration. The framework begins by applying a convolutional neural network (CNN) segmentation model to the original image, generating a segmentation map that accurately delineates the cortical boundaries. Subsequently, a pair of distance maps generated from the segmentation map is injected into an unsupervised learning-based registration network for fast and diffeomorphic registration. A novel algorithm based on diffeomorphisms of different time points is proposed to calculate the final thickness map. We systematically evaluated and compared our method with surface-based measures from FreeSurfer on two distinct datasets. The experimental results demonstrated a superior performance of the proposed method, surpassing the performance of DiReCT and DL+DiReCT in terms of time efficiency and consistency with FreeSurfer. Our code and pre-trained models are publicly available at: <span><span>https://github.com/wujiong-hub/DL-CTE.git</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102569"},"PeriodicalIF":5.4,"publicationDate":"2025-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143947346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Trustworthy AI for stage IV non-small cell lung cancer: Automatic segmentation and uncertainty quantification
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics | Pub Date: 2025-05-13 | DOI: 10.1016/j.compmedimag.2025.102567
Sacha Dedeken, Pierre-Henri Conze, Vera Damerjian Pieters, Olivier Gallinato, Jérôme Faure, Thierry Colin, Dimitris Visvikis
{"title":"Trustworthy AI for stage IV non-small cell lung cancer: Automatic segmentation and uncertainty quantification","authors":"Sacha Dedeken ,&nbsp;Pierre-Henri Conze ,&nbsp;Vera Damerjian Pieters ,&nbsp;Olivier Gallinato ,&nbsp;Jérôme Faure ,&nbsp;Thierry Colin ,&nbsp;Dimitris Visvikis","doi":"10.1016/j.compmedimag.2025.102567","DOIUrl":"10.1016/j.compmedimag.2025.102567","url":null,"abstract":"<div><div>Accurate segmentation of lung tumors is essential for advancing personalized medicine in non-small cell lung cancer (NSCLC). However, stage IV NSCLC presents significant challenges due to heterogeneous tumor morphology and the presence of associated conditions including infection, atelectasis and pleural effusion. The complexity of multicentric datasets further complicates robust segmentation across diverse clinical settings. In this study, we evaluate deep-learning-based approaches for automated segmentation of advanced-stage lung tumors using 3D architectures on 387 CT scans from the Deep-Lung-IV study. Through comprehensive experiments, we assess the impact of model design, HU windowing, and dataset size on delineation performance, providing practical guidelines for robust implementation. Additionally, we propose a confidence score using deep ensembles to quantify prediction uncertainty and automate the identification of complex cases that require further review. Our results demonstrate the potential of attention-based architectures and specific preprocessing strategies to improve segmentation quality in such a challenging clinical scenario, while emphasizing the importance of uncertainty estimation to build trustworthy AI systems in medical imaging. Code is available at: <span><span>https://github.com/Sacha-Dedeken/SegStageIVNSCLC</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102567"},"PeriodicalIF":5.4,"publicationDate":"2025-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144070616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Bi-VesTreeFormer: A bidirectional topology-aware transformer framework for coronary vFFR estimation
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics | Pub Date: 2025-05-11 | DOI: 10.1016/j.compmedimag.2025.102564
Congyu Tian, Zehua Liu, Linyuan Wang, Liang Shao, Yongzhi Deng, Xiangyun Liao, Weixin Si
{"title":"Bi-VesTreeFormer: A bidirectional topology-aware transformer framework for coronary vFFR estimation","authors":"Congyu Tian ,&nbsp;Zehua Liu ,&nbsp;Linyuan Wang ,&nbsp;Liang Shao ,&nbsp;Yongzhi Deng ,&nbsp;Xiangyun Liao ,&nbsp;Weixin Si","doi":"10.1016/j.compmedimag.2025.102564","DOIUrl":"10.1016/j.compmedimag.2025.102564","url":null,"abstract":"<div><div>Fractional Flow Reserve (FFR) serves as the gold standard for evaluating the functional significance of coronary artery stenosis. However, traditional FFR involves the injection of vasodilator drugs and the utilization of additional guidewires, which consequently can lead to patient risks and increased costs. Computational fluid dynamics-based approaches can enable non-invasive virtual FFR (vFFR) estimation, but they are computationally intensive and time-consuming. Although deep learning can remarkably enhance computational efficiency, the existing vFFR methods rely heavily on manually crafted features and face difficulties in capturing long-distance dependencies within the vessel structure. In this study, we propose a novel framework for estimating coronary vFFR, which circumvents the laborious preprocessing procedures of previous methods. Specifically, a novel bidirectional topology-aware transformer network (Bi-VesTreeFormer) is proposed to conduct fully automated topological stenotic feature extraction of the vessel tree and capture the global dependencies among branches. Additionally, a contextual vFFR decoder is introduced to establish the correlation of FFR values between adjacent branches and achieve a stable mapping of FFR values to the latent vector space. To validate and train our method, we gathered FFR data from 43 patients with coronary artery stenosis and simulated 15,000 coronary artery centerline data with a reduced-order hemodynamic model. The results show that the proposed method attains root mean square errors of 0.038 and 0.048 for simulated and real data respectively, surpassing the state-of-the-art methods.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102564"},"PeriodicalIF":5.4,"publicationDate":"2025-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143943170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Mamba-based deformable medical image registration with an annotated brain MR-CT dataset
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics | Pub Date: 2025-05-10 | DOI: 10.1016/j.compmedimag.2025.102566
Yinuo Wang, Tao Guo, Weimin Yuan, Shihao Shu, Cai Meng, Xiangzhi Bai
{"title":"Mamba-based deformable medical image registration with an annotated brain MR-CT dataset","authors":"Yinuo Wang ,&nbsp;Tao Guo ,&nbsp;Weimin Yuan ,&nbsp;Shihao Shu ,&nbsp;Cai Meng ,&nbsp;Xiangzhi Bai","doi":"10.1016/j.compmedimag.2025.102566","DOIUrl":"10.1016/j.compmedimag.2025.102566","url":null,"abstract":"<div><div>Deformable registration is essential in medical image analysis, especially for handling various multi- and mono-modal registration tasks in neuroimaging. Existing studies lack exploration of brain MR-CT registration, and face challenges in both accuracy and efficiency improvements of learning-based methods. To enlarge the practice of multi-modal registration in brain, we present SR-Reg, a new benchmark dataset comprising 180 volumetric paired MR-CT images and annotated anatomical regions. Building on this foundation, we introduce MambaMorph, a novel deformable registration network based on an efficient state space model Mamba for global feature learning, with a fine-grained feature extractor for low-level embedding. Experimental results demonstrate that MambaMorph surpasses advanced ConvNet-based and Transformer-based networks across several multi- and mono-modal tasks, showcasing impressive enhancements of efficacy and efficiency. Code and dataset are available at <span><span>https://github.com/mileswyn/MambaMorph</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102566"},"PeriodicalIF":5.4,"publicationDate":"2025-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144070615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A deep learning framework for reconstructing Breast Amide Proton Transfer weighted imaging sequences from sparse frequency offsets to dense frequency offsets
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics | Pub Date: 2025-05-09 | DOI: 10.1016/j.compmedimag.2025.102563
Qiuhui Yang, Shu Su, Tianyu Zhang, Meng Wang, Weiqiang Dou, Kefeng Li, Ya Ren, Yijia Zheng, Mingwei Wang, Yi Xu, Yue Sun, Zhou Liu, Tao Tan
{"title":"A deep learning framework for reconstructing Breast Amide Proton Transfer weighted imaging sequences from sparse frequency offsets to dense frequency offsets","authors":"Qiuhui Yang ,&nbsp;Shu Su ,&nbsp;Tianyu Zhang ,&nbsp;Meng Wang ,&nbsp;Weiqiang Dou ,&nbsp;Kefeng Li ,&nbsp;Ya Ren ,&nbsp;Yijia Zheng ,&nbsp;Mingwei Wang ,&nbsp;Yi Xu ,&nbsp;Yue Sun ,&nbsp;Zhou Liu ,&nbsp;Tao Tan","doi":"10.1016/j.compmedimag.2025.102563","DOIUrl":"10.1016/j.compmedimag.2025.102563","url":null,"abstract":"<div><div>Amide Proton Transfer (APT) technique is a novel functional MRI technique that enables quantification of protein metabolism, but its wide application is largely limited in clinical settings by its long acquisition time. One way to reduce the scanning time is to obtain fewer frequency offset images during image acquisition. However, sparse frequency offset images are not inadequate to fit the z-spectral, a curve essential to quantifying the APT effect, which might compromise its quantification. In our study, we develop a deep learning-based model that allows for reconstructing dense frequency offsets from sparse ones, potentially reducing scanning time. We propose to leverage time-series convolution to extract both short and long-range spatial and frequency features of the APT imaging sequence. Our proposed model outperforms other seq2seq models, achieving superior reconstruction with a peak signal-to-noise ratio of 45.8 (95% confidence interval (CI): [44.9 46.7]), and a structural similarity index of 0.989 (95% CI:[0.987 0.993]) for the tumor region. We have integrated a weighted layer into our model to evaluate the impact of individual frequency offset on the reconstruction process. The weights assigned to the frequency offset at ±6.5 ppm, 0 ppm, and 3.5 ppm demonstrate higher significance as learned by the model. Experimental results demonstrate that our proposed model effectively reconstructs dense frequency offsets (n = 29, from 7 to -7 with 0.5 ppm as an interval) from data with 21 frequency offsets, reducing scanning time by 25%. This work presents a method for shortening the APT imaging acquisition time, offering potential guidance for parameter settings in APT imaging and serving as a valuable reference for clinicians.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102563"},"PeriodicalIF":5.4,"publicationDate":"2025-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143943171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CALIMAR-GAN: An unpaired mask-guided attention network for metal artifact reduction in CT scans
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics | Pub Date: 2025-05-09 | DOI: 10.1016/j.compmedimag.2025.102565
Roberto Maria Scardigno, Antonio Brunetti, Pietro Maria Marvulli, Raffaele Carli, Mariagrazia Dotoli, Vitoantonio Bevilacqua, Domenico Buongiorno
{"title":"CALIMAR-GAN: An unpaired mask-guided attention network for metal artifact reduction in CT scans","authors":"Roberto Maria Scardigno,&nbsp;Antonio Brunetti,&nbsp;Pietro Maria Marvulli,&nbsp;Raffaele Carli,&nbsp;Mariagrazia Dotoli,&nbsp;Vitoantonio Bevilacqua,&nbsp;Domenico Buongiorno","doi":"10.1016/j.compmedimag.2025.102565","DOIUrl":"10.1016/j.compmedimag.2025.102565","url":null,"abstract":"<div><div>High-quality computed tomography (CT) scans are essential for accurate diagnostic and therapeutic decisions, but the presence of metal objects within the body can produce distortions that lower image quality. Deep learning (DL) approaches using image-to-image translation for metal artifact reduction (MAR) show promise over traditional methods but often introduce secondary artifacts. Additionally, most rely on paired simulated data due to limited availability of real paired clinical data, restricting evaluation on clinical scans to qualitative analysis. This work presents CALIMAR-GAN, a generative adversarial network (GAN) model that employs a guided attention mechanism and the linear interpolation algorithm to reduce artifacts using unpaired simulated and clinical data for targeted artifact reduction. Quantitative evaluations on simulated images demonstrated superior performance, achieving a PSNR of 31.7, SSIM of 0.877, and Fréchet inception distance (FID) of 22.1, outperforming state-of-the-art methods. On real clinical images, CALIMAR-GAN achieved the lowest FID (32.7), validated as a valuable complement to qualitative assessments through correlation with pixel-based metrics (<span><math><mrow><mi>r</mi><mo>=</mo><mo>−</mo><mn>0</mn><mo>.</mo><mn>797</mn></mrow></math></span> with PSNR, <span><math><mrow><mi>p</mi><mo>&lt;</mo><mn>0</mn><mo>.</mo><mn>01</mn></mrow></math></span>; <span><math><mrow><mi>r</mi><mo>=</mo><mo>−</mo><mn>0</mn><mo>.</mo><mn>767</mn></mrow></math></span> with MS-SSIM, <span><math><mrow><mi>p</mi><mo>&lt;</mo><mn>0</mn><mo>.</mo><mn>01</mn></mrow></math></span>). This work advances DL-based artifact reduction into clinical practice with high-fidelity reconstructions that enhance diagnostic accuracy and therapeutic outcomes. Code is available at <span><span>https://github.com/roberto722/calimar-gan</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102565"},"PeriodicalIF":5.4,"publicationDate":"2025-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143947345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0