IEEE Transactions on Medical Imaging: Latest Articles

DINO-Reg: Efficient Multimodal Image Registration With Distilled Features
IEEE Transactions on Medical Imaging. Pub Date: 2025-03-06. DOI: 10.1109/TMI.2025.3567247
Xinrui Song;Xuanang Xu;Jiajin Zhang;Diego Machado Reyes;Pingkun Yan
Abstract: Medical image registration is a crucial process for aligning anatomical structures, enabling applications such as atlas mapping, longitudinal analysis, and multimodal data fusion. This paper introduces DINO-Reg, an adaptation-free registration method leveraging the vision foundation model, DINOv2, to extract features for deformable 3D medical image alignment. Although DINOv2 was originally trained on natural images, our study links the vision foundation model with medical image registration and demonstrates that the generic image encoder can readily generalize to medical images with state-of-the-art performance. We further propose DINO-Reg-Eco, a knowledge-distilled version using a UNet-structured 3D convolutional neural network (CNN) for feature extraction. The Eco model reduces encoding time by 99% while maintaining state-of-the-art performance, which is essential for resource-limited settings and significantly lowers the carbon footprint associated with intensive computational demands. Benchmarking across diverse datasets shows that both methods outperform existing supervised and unsupervised approaches without fine-tuning, demonstrating the transformative potential of foundation models in medical image registration. Our code is open-sourced at https://github.com/RPIDIAL/DINO-Reg
(vol. 44, no. 9, pp. 3809-3819)
Citations: 0
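The abstract above matches dense encoder features between a fixed and a moving image. As an illustration only, not the authors' pipeline, a minimal discrete displacement search over feature maps can be sketched as follows; the function name `best_displacement`, the 2D (C, H, W) layout, and the search radius are assumptions of this sketch:

```python
import numpy as np

def best_displacement(fixed_feat, moving_feat, search=2):
    """For each location, pick the integer displacement that maximizes
    cosine similarity between fixed features and shifted moving features.
    Inputs are (C, H, W) arrays; returns a (2, H, W) displacement field."""
    C, H, W = fixed_feat.shape
    f = fixed_feat / (np.linalg.norm(fixed_feat, axis=0, keepdims=True) + 1e-8)
    best_sim = np.full((H, W), -np.inf)
    disp = np.zeros((2, H, W), dtype=int)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(moving_feat, (dy, dx), axis=(1, 2))
            s = shifted / (np.linalg.norm(shifted, axis=0, keepdims=True) + 1e-8)
            sim = (f * s).sum(axis=0)          # per-location cosine similarity
            better = sim > best_sim
            best_sim = np.where(better, sim, best_sim)
            disp[0] = np.where(better, dy, disp[0])
            disp[1] = np.where(better, dx, disp[1])
    return disp
```

Real deformable registration adds regularization and multi-resolution optimization on top of such a similarity search; this sketch only shows why discriminative features make the matching step trivial.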
Boundary as the Bridge: Toward Heterogeneous Partially-Labeled Medical Image Segmentation and Landmark Detection
IEEE Transactions on Medical Imaging. Pub Date: 2025-03-06. DOI: 10.1109/TMI.2025.3548919
Haifan Gong;Boyao Wan;Luoyao Kang;Xiang Wan;Lingyan Zhang;Haofeng Li
Abstract: Medical landmark detection and segmentation are crucial elements for computer-aided diagnosis and treatment. However, a common challenge arises because many datasets are exclusively annotated with either landmarks or segmentation masks: a situation we term the 'heterogeneous partially-labeled' problem. To address this, we propose a novel yet effective 'Boundary-as-Bridge' Loss (BaBLoss) that models the interplay between landmark detection and segmentation tasks. Specifically, our loss function is designed to maximize the correlation between the boundary distance map of the segmentation area and the heatmap deployed for landmark detection. Moreover, we introduce a prompt pipeline to use a segment anything model and landmarks to generate pseudo-segmentation labels for data with landmark annotation. To evaluate the effectiveness of our method, we collect and build two heterogeneous partially-labeled datasets on the brain and knee. Extensive experiments on these datasets using various backbone structures have shown the effectiveness of our method. Code is available at https://github.com/lhaof/HPL
(vol. 44, no. 7, pp. 2747-2756)
Citations: 0
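The abstract describes maximizing the correlation between a boundary distance map and a landmark heatmap. A minimal sketch of such an objective, assuming a Pearson-correlation form (the paper's exact BaBLoss formulation is not given here), would be:

```python
import numpy as np

def bab_loss(boundary_dist, heatmap, eps=1e-8):
    """Negative Pearson correlation between a boundary distance map and a
    landmark heatmap; minimizing this loss maximizes their correlation."""
    a = boundary_dist.ravel() - boundary_dist.mean()
    b = heatmap.ravel() - heatmap.mean()
    corr = (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps)
    return -corr
```

Perfectly correlated maps give a loss near -1, anti-correlated maps a loss near +1, so gradient descent pulls the two task outputs toward agreement.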
Representation Learning in PET Scans Enhanced by Semantic and 3D Position Specific Characteristics
IEEE Transactions on Medical Imaging. Pub Date: 2025-03-05. DOI: 10.1109/TMI.2025.3566996
Theodoros P. Vagenas;Maria Vakalopoulou;Christos Sachpekidis;Antonia Dimitrakopoulou-Strauss;George K. Matsopoulos
Abstract: Representation learning methods that discover task and/or data-specific characteristics are very popular for a variety of applications. However, their application to 3D medical images is restricted by the computational cost and their inherent subtle differences in intensities and appearance. In this paper, a novel representation learning scheme for extracting representations capable of distinguishing high-uptake regions from 3D 18F-Fluorodeoxyglucose positron emission tomography (FDG-PET) images is proposed. In particular, we propose a novel position-enhanced learning scheme effectively incorporating semantic and position-based features through our proposed Position Encoding Block (PEB) to produce highly informative representations. Such representations incorporate both semantic and position-aware features from high-dimensional medical data, leading to general representations with better performance on clinical tasks. To evaluate our method, we conducted experiments on the challenging task of classifying high-uptake regions as either non-tumor or tumor lesions in Metastatic Melanoma (MM). MM is a type of cancer characterized by its rapid spread to various body sites, which leads to low survival rates. Extensive experiments on an in-house and a public dataset of whole-body FDG-PET images indicated an increase of 10.50% in sensitivity and 4.89% in F1-score against the baseline representation learning scheme while also outperforming state-of-the-art methods for classifying MM regions of interest. The source code will be available at https://github.com/theoVag/Representation-Learning-Sem-Pos
(vol. 44, no. 9, pp. 3795-3808)
Citations: 0
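The PEB described above fuses semantic features with 3D-position features. As a generic stand-in, not the paper's actual block, a sinusoidal encoding of normalized (z, y, x) coordinates concatenated onto a semantic vector might look like this; the encoding choice, dimensions, and function names are all assumptions:

```python
import numpy as np

def sinusoidal_pos_encoding(coords, dim=8):
    """Map normalized (z, y, x) coordinates in [0, 1] to sinusoidal
    position features: dim values per axis, concatenated (3 * dim total)."""
    freqs = 2.0 ** np.arange(dim // 2)              # geometric frequency ladder
    parts = []
    for c in coords:
        angles = np.pi * c * freqs
        parts.append(np.concatenate([np.sin(angles), np.cos(angles)]))
    return np.concatenate(parts)

def position_enhanced(semantic_feat, coords, dim=8):
    """Concatenate semantic features with position features, a simple
    stand-in for a position-enhanced representation."""
    return np.concatenate([semantic_feat, sinusoidal_pos_encoding(coords, dim)])
```

The point of such an encoding is that anatomically distant regions with similar appearance (e.g. physiological vs. tumor uptake) become separable once position enters the representation.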
An End-to-End Deep Learning Generative Framework for Refinable Shape Matching and Generation
IEEE Transactions on Medical Imaging. Pub Date: 2025-03-05. DOI: 10.1109/TMI.2025.3562756
Soodeh Kalaie;Andy Bulpitt;Alejandro F. Frangi;Ali Gooya
Abstract: Generative modelling for shapes is a prerequisite for In-Silico Clinical Trials (ISCTs), which aim to cost-effectively validate medical device interventions using synthetic anatomical shapes, often represented as 3D surface meshes. However, constructing AI models to generate shapes closely resembling the real mesh samples is challenging due to variable vertex counts, connectivities, and the lack of dense vertex-wise correspondences across the training data. Employing graph representations for meshes, we develop a novel unsupervised geometric deep-learning model to establish refinable shape correspondences in a latent space, construct a population-derived atlas and generate realistic synthetic shapes. We additionally extend our proposed base model to a joint shape generative-clustering multi-atlas framework to incorporate further variability and preserve more details in the generated shapes. Experimental results using liver and left-ventricular models demonstrate the approach's applicability to computational medicine, highlighting its suitability for ISCTs through a comparative analysis.
(vol. 44, no. 8, pp. 3323-3344)
Citations: 0
A Dual-Task Synergy-Driven Generalization Framework for Pancreatic Cancer Segmentation in CT Scans
IEEE Transactions on Medical Imaging. Pub Date: 2025-03-02. DOI: 10.1109/TMI.2025.3566376
Jun Li;Yijue Zhang;Haibo Shi;Minhong Li;Qiwei Li;Xiaohua Qian
Abstract: Pancreatic cancer, characterized by its notable prevalence and mortality rates, demands accurate lesion delineation for effective diagnosis and therapeutic interventions. The generalizability of extant methods is frequently compromised due to the pronounced variability in imaging and the heterogeneous characteristics of pancreatic lesions, which may mimic normal tissues and exhibit significant inter-patient variability. Thus, we propose a generalization framework that synergizes pixel-level classification and regression tasks, to accurately delineate lesions and improve model stability. This framework not only seeks to align segmentation contours with actual lesions but also uses regression to elucidate spatial relationships between diseased and normal tissues, thereby improving tumor localization and morphological characterization. Enhanced by the reciprocal transformation of task outputs, our approach integrates additional regression supervision within the segmentation context, bolstering the model's generalization ability from a dual-task perspective. In addition, dual self-supervised learning in feature spaces and output spaces augments the model's representational capability and stability across different imaging views. Experiments on 594 samples composed of three datasets with significant imaging differences demonstrate that our generalized pancreas segmentation achieves results comparable to mainstream in-domain validation performance (Dice: 84.07%). More importantly, it improves the results of the highly challenging cross-lesion generalized pancreatic cancer segmentation task by 9.51%. Thus, our model constitutes a resilient and efficient foundational technological support for pancreatic disease management and wider medical applications. The codes will be released at https://github.com/SJTUBME-QianLab/Dual-Task-Seg
(vol. 44, no. 9, pp. 3783-3794)
Citations: 0
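The regression branch above targets spatial relationships between diseased and normal tissue. One common regression target for pairing with a segmentation mask is a signed distance map; this is an illustrative choice, not necessarily the authors' exact target, and the brute-force implementation below is only suitable for small arrays:

```python
import numpy as np

def signed_distance_map(mask):
    """Signed distance to the lesion boundary: negative inside the mask,
    positive outside, zero on the boundary. Brute force over boundary
    pixels; real pipelines would use a fast distance transform."""
    H, W = mask.shape
    ys, xs = np.indices((H, W))
    # boundary = foreground pixels with at least one background 4-neighbour
    pad = np.pad(mask, 1)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
    boundary = mask.astype(bool) & ~interior.astype(bool)
    by, bx = np.nonzero(boundary)
    d = np.sqrt((ys[..., None] - by) ** 2 + (xs[..., None] - bx) ** 2).min(axis=-1)
    return np.where(mask, -d, d)
```

Regressing such a map forces the network to reason about how far each voxel is from the lesion surface, which is exactly the kind of contour-aligned supervision the abstract describes.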
Transfontanelle Thermoacoustic Imaging of Intraventricular Brain Hemorrhages in Live Sheep
IEEE Transactions on Medical Imaging. Pub Date: 2025-03-02. DOI: 10.1109/TMI.2025.3566372
Md Tarikul Islam;Juliana Benavides;Ravi Prakash;Mohsin Zafar;Laura S. McGuire;Fady Charbel;Amanda P. Siegel;Danilo Erricolo;James Lin;Juri G. Gelovani;Kamran Avanaki
Abstract: Preterm neonates are vulnerable to periventricular-intraventricular hemorrhage since the periventricular germinal matrix blood vessels are still immature and fragile until around 36 weeks. While cranial ultrasound imaging (cUS) is the standard technique for diagnosing brain injury in neonates, it has limited accuracy in detecting early-stage hemorrhages due to poor sensitivity and specificity. Transfontanelle thermoacoustic imaging (TTAI) combines the advantages of high contrast from electromagnetic absorption with high resolution from ultrasound imaging and represents a potential alternative to overcome the limitations of conventional cUS systems. We developed a TTAI system and evaluated its performance in a large animal model of intraventricular hemorrhage (IVH) in vivo. Our system demonstrated the ability to detect hemorrhages as small as 0.1 mL, which is better than the previously reported limit of detection for either ultrasound or photoacoustic imaging in the same animal model. These results indicate that TTAI is a novel imaging modality with high potential for accurate detection of IVH in neonates. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10982353
(vol. 44, no. 8, pp. 3175-3184)
Citations: 0
Digital Staining With Knowledge Distillation: A Unified Framework for Unpaired and Paired-but-Misaligned Data
IEEE Transactions on Medical Imaging. Pub Date: 2025-03-01. DOI: 10.1109/TMI.2025.3565329
Ziwang Xu;Lanqing Guo;Satoshi Tsutsui;Shuyan Zhang;Alex C. Kot;Bihan Wen
Abstract: Staining is essential in cell imaging and medical diagnostics but poses significant challenges, including high cost, time consumption, labor intensity, and irreversible tissue alterations. Recent advances in deep learning have enabled digital staining through supervised model training. However, collecting large-scale, perfectly aligned pairs of stained and unstained images remains difficult. In this work, we propose a novel unsupervised deep learning framework for digital cell staining that reduces the need for extensive paired data using knowledge distillation. We explore two training schemes: (1) unpaired and (2) paired-but-misaligned settings. For the unpaired case, we introduce a two-stage pipeline, comprising light enhancement followed by colorization, as a teacher model. Subsequently, we obtain a student staining generator through knowledge distillation with hybrid non-reference losses. To leverage the pixel-wise information between adjacent sections, we further extend to the paired-but-misaligned setting, adding the Learning to Align module to utilize pixel-level information. Experiment results on our dataset demonstrate that our proposed unsupervised deep staining method can generate stained images with more accurate positions and shapes of the cell targets in both settings. Compared with competing methods, our method achieves improved results both qualitatively and quantitatively (e.g., NIQE and PSNR). We applied our digital staining method to the White Blood Cell (WBC) dataset, investigating its potential for medical applications.
(vol. 44, no. 9, pp. 3770-3782)
Citations: 0
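The student generator above is trained by knowledge distillation from a two-stage teacher. The paper uses hybrid non-reference losses; as a more generic reference point, the standard temperature-scaled KL distillation objective (Hinton-style, not the paper's loss) can be sketched as:

```python
import numpy as np

def softmax(x, t=1.0):
    """Numerically stable softmax with temperature t, over the last axis."""
    z = np.exp((x - x.max(axis=-1, keepdims=True)) / t)
    return z / z.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, t=2.0):
    """Temperature-scaled KL divergence from teacher to student,
    averaged over the batch; the t*t factor keeps gradient magnitudes
    comparable across temperatures."""
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return kl.mean() * t * t
```

A higher temperature softens the teacher's distribution so the student also learns the relative ordering of non-maximal outputs, not just the argmax.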
Dual Interspersion and Flexible Deployment for Few-Shot Medical Image Segmentation
IEEE Transactions on Medical Imaging. Pub Date: 2025-02-28. DOI: 10.1109/TMI.2025.3546653
Ziming Cheng;Shidong Wang;Yang Long;Tao Zhou;Haofeng Zhang;Ling Shao
Abstract: Acquiring a large volume of annotated medical data is impractical due to time, financial, and legal constraints. Consequently, few-shot medical image segmentation is increasingly emerging as a prominent research direction. Medical scenarios pose two major challenges: 1) intra-class variation caused by diversity among support and query sets; 2) inter-class extreme imbalance resulting from background heterogeneity. However, existing prototypical networks struggle to tackle these obstacles effectively. To this end, we propose a Dual Interspersion and Flexible Deployment (DIFD) model. Drawing inspiration from military interspersion tactics, we design the dual interspersion module to generate representative basis prototypes from support features. These basis prototypes then interact deeply with query features. Furthermore, we introduce a fusion factor to fuse and refine the basis prototypes. Ultimately, we seamlessly integrate and flexibly deploy the basis prototypes to facilitate correct matching between the query features and basis prototypes, improving the segmentation accuracy of the model. Extensive experiments on three publicly available medical image datasets demonstrate that our model significantly outperforms other state-of-the-art methods (2.78% higher Dice score on average across all datasets), achieving a new level of performance. The code is available at: https://github.com/zmcheng9/DIFD
(vol. 44, no. 6, pp. 2732-2744)
Citations: 0
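DIFD builds on prototypical networks, which segment a query image by matching its features to class prototypes pooled from support features. A minimal baseline version of that matching (masked average pooling plus cosine similarity, not DIFD's dual interspersion itself) looks like:

```python
import numpy as np

def masked_avg_prototype(feat, mask):
    """Average (C, H, W) support features over a (H, W) foreground mask,
    yielding a (C,) class prototype."""
    return (feat * mask).sum(axis=(1, 2)) / (mask.sum() + 1e-8)

def match_query(feat, prototypes):
    """Label each query location with the index of its most
    cosine-similar prototype."""
    f = feat / (np.linalg.norm(feat, axis=0, keepdims=True) + 1e-8)
    sims = []
    for p in prototypes:
        p = p / (np.linalg.norm(p) + 1e-8)
        sims.append((f * p[:, None, None]).sum(axis=0))
    return np.argmax(np.stack(sims), axis=0)
```

The weaknesses the abstract targets are visible here: a single averaged prototype cannot capture intra-class variation, which is what generating multiple "basis prototypes" is meant to address.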
TAD-Graph: Enhancing Whole Slide Image Analysis via Task-Aware Subgraph Disentanglement
IEEE Transactions on Medical Imaging. Pub Date: 2025-02-25. DOI: 10.1109/TMI.2025.3545680
Fuying Wang;Jiayi Xin;Weiqin Zhao;Yuming Jiang;Maximus Yeung;Liansheng Wang;Lequan Yu
Abstract: Learning contextual features such as interactions among various biological entities is vital for whole slide images (WSI)-based cancer diagnosis and prognosis. Graph-based methods have surpassed traditional multi-instance learning in WSI analysis by robustly integrating local pathological and contextual interaction features. However, the high resolution of WSIs often leads to large, noisy graphs. This can result in shortcut learning and overfitting due to the disproportionate graph size relative to WSI datasets. To overcome these issues, we propose a novel Task-Aware Disentanglement Graph approach (TAD-Graph) for more efficient WSI analysis. TAD-Graph operates on WSI graph representations, effectively identifying and disentangling informative subgraphs to enhance contextual feature extraction. Specifically, we inject stochasticity into the edge connections of the WSI graph and separate the WSI graph into task-relevant and task-irrelevant subgraphs. The disentanglement procedure is optimized using a graph information bottleneck-based objective, with added constraints on the task-irrelevant subgraph to reduce spurious correlations from task-relevant subgraphs to labels. TAD-Graph outperforms existing methods in three WSI analysis tasks across six benchmark datasets. Furthermore, our analysis using pathological concept-based metrics demonstrates TAD-Graph's ability to not only improve predictive accuracy but also provide interpretive insights and aid in potential biomarker identification. Our code is publicly available at https://github.com/fuying-wang/TAD-Graph
(vol. 44, no. 6, pp. 2683-2695)
Citations: 0
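Stripped of the learned objective, the edge-stochasticity step described above amounts to sampling a keep/drop decision per edge so that kept edges form the task-relevant subgraph and dropped edges the task-irrelevant one. A sketch under that reading, with the function name and Bernoulli parameterization being assumptions of this illustration:

```python
import numpy as np

def split_edges(adj, edge_logits, rng):
    """Bernoulli-sample each edge with probability sigmoid(logit):
    kept edges form the 'task-relevant' subgraph, dropped edges the
    'task-irrelevant' one; together they partition the original edges."""
    p = 1.0 / (1.0 + np.exp(-edge_logits))
    keep = rng.random(adj.shape) < p
    relevant = adj * keep
    irrelevant = adj * ~keep
    return relevant, irrelevant
```

In the full method the per-edge logits are learned under an information-bottleneck objective, so the sampler concentrates probability mass on the edges that actually carry label-relevant context.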
GDP-Net: Global Dependency-Enhanced Dual-Domain Parallel Network for Ring Artifact Removal
IEEE Transactions on Medical Imaging. Pub Date: 2025-02-25. DOI: 10.1109/TMI.2025.3545121
Yikun Zhang;Guannan Liu;Yang Liu;Shipeng Xie;Jiabing Gu;Zujian Huang;Xu Ji;Tianling Lyu;Yan Xi;Shouping Zhu;Jian Yang;Yang Chen
Abstract: In Computed Tomography (CT) imaging, the ring artifacts caused by the inconsistent detector response can significantly degrade the reconstructed images, having negative impacts on the subsequent applications. The new generation of CT systems based on photon-counting detectors are affected by ring artifacts more severely. The flexibility and variety of detector responses make it difficult to build a well-defined model to characterize the ring artifacts. In this context, this study proposes the global dependency-enhanced dual-domain parallel neural network for Ring Artifact Removal (RAR). First, based on the fact that the features of ring artifacts are different in Cartesian and Polar coordinates, the parallel architecture is adopted to construct the deep neural network so that it can extract and exploit the latent features from different domains to improve the performance of ring artifact removal. Besides, the ring artifacts are globally relevant whether in Cartesian or Polar coordinate systems, but convolutional neural networks show inherent shortcomings in modeling long-range dependency. To tackle this problem, this study introduces the novel Mamba mechanism to achieve a global receptive field without incurring high computational complexity. It enables effective capture of the long-range dependency, thereby enhancing the model performance in image restoration and artifact reduction. The experiments on the simulated data validate the effectiveness of the dual-domain parallel neural network and the Mamba mechanism, and the results on two unseen real datasets demonstrate the promising performance of the proposed RAR algorithm in eliminating ring artifacts and recovering image details.
(vol. 44, no. 6, pp. 2718-2731)
Citations: 0
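The dual-domain design above rests on a simple geometric fact: a ring centered on the rotation axis becomes a constant-radius row after polar resampling, turning a hard 2D pattern into an easy 1D one. A nearest-neighbour sketch of that Cartesian-to-polar transform (grid sizes here are arbitrary choices, not the paper's):

```python
import numpy as np

def to_polar(img, n_r=64, n_t=128):
    """Nearest-neighbour resampling of a square image onto an (r, theta)
    grid centred on the image; concentric rings map to single rows."""
    H, W = img.shape
    cy, cx = (H - 1) / 2, (W - 1) / 2
    r = np.linspace(0, min(cy, cx), n_r)
    t = np.linspace(0, 2 * np.pi, n_t, endpoint=False)
    ys = np.clip(np.round(cy + r[:, None] * np.sin(t)).astype(int), 0, H - 1)
    xs = np.clip(np.round(cx + r[:, None] * np.cos(t)).astype(int), 0, W - 1)
    return img[ys, xs]
```

A network branch operating on the polar image can then remove the stripe-like artifact with ordinary convolutions along the angular axis, while the Cartesian branch preserves anatomy; the two streams are fused in the full method.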