IEEE Transactions on Medical Imaging: Latest Articles

DC²T: Disentanglement-Guided Consolidation and Consistency Training for Semi-Supervised Cross-Site Continual Segmentation
IEEE Transactions on Medical Imaging, Pub Date: 2024-09-27, DOI: 10.1109/TMI.2024.3469528
Authors: Jingyang Zhang; Jialun Pei; Dunyuan Xu; Yueming Jin; Pheng-Ann Heng
Abstract: Continual Learning (CL) is recognized as a storage-efficient and privacy-protecting approach for learning from sequentially arriving medical sites. However, most existing CL methods assume that each site is fully labeled, which is impractical due to budget and expertise constraints. This paper studies Semi-Supervised Continual Learning (SSCL), which adopts partially labeled sites arriving over time, with each site delivering only limited labeled data while the majority remains unlabeled. In this regard, it is challenging to effectively utilize unlabeled data under dynamic cross-site domain gaps, leading to intractable model forgetting on such unlabeled data. To address this problem, we introduce a novel Disentanglement-guided Consolidation and Consistency Training (DC²T) framework, which is rooted in an Online Semi-Supervised representation Disentanglement (OSSD) perspective to excavate content representations of partially labeled data from sites arriving over time. Moreover, these content representations are required to be consolidated for site-invariance and calibrated for style-robustness, in order to alleviate forgetting even in the absence of ground truth. Specifically, for invariance on previous sites, we retain historical content representations when learning on a new site via a Content-inspired Parameter Consolidation (CPC) method that prevents altering the model parameters crucial for content preservation. For robustness against style variation, we develop a Style-induced Consistency Training (SCT) scheme that enforces segmentation consistency over style-related perturbations to recalibrate content encoding. We extensively evaluate our method on fundus and cardiac image segmentation, indicating its advantage over existing SSCL methods in alleviating forgetting on unlabeled data.
Volume 44, Issue 2, pp. 903-914.
Citations: 0
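The Style-induced Consistency Training (SCT) scheme described above asks the segmentation model to give the same prediction under appearance-only perturbations. Below is a minimal, hypothetical NumPy sketch of that idea; the random gamma shift, the MSE consistency term, and the `model` placeholder are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def style_perturb(image, gamma_range=(0.7, 1.4), rng=np.random):
    """Appearance-only perturbation (a random gamma shift here) standing in
    for the style-related perturbations used by SCT."""
    gamma = rng.uniform(*gamma_range)
    img = (image - image.min()) / (image.max() - image.min() + 1e-8)
    return img ** gamma

def consistency_loss(prob_orig, prob_perturbed):
    """Mean squared difference between the two soft segmentation maps: the
    model should predict the same content regardless of style."""
    return float(np.mean((prob_orig - prob_perturbed) ** 2))

# usage with a placeholder `model` returning per-pixel class probabilities:
# loss_unlabeled = consistency_loss(model(x), model(style_perturb(x)))
```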
Leveraging Input-Level Feature Deformation With Guided-Attention for Sulcal Labeling
IEEE Transactions on Medical Imaging, Pub Date: 2024-09-26, DOI: 10.1109/TMI.2024.3468727
Authors: Seungeun Lee; Seunghwan Lee; Ethan H. Willbrand; Benjamin J. Parker; Silvia A. Bunge; Kevin S. Weiner; Ilwoo Lyu
Abstract: The identification of cortical sulci is key for understanding functional and structural development of the cortex. While large, consistent sulci (or primary/secondary sulci) receive significant attention in most studies, the exploration of smaller and more variable sulci (or putative tertiary sulci) remains relatively under-investigated. Despite its importance, automatic labeling of cortical sulci is challenging due to (1) the presence of substantial anatomical variability, (2) the relatively small size of the regions of interest (ROIs) compared to unlabeled regions, and (3) the scarcity of annotated labels. In this paper, we propose a novel end-to-end learning framework using a spherical convolutional neural network (CNN). Specifically, the proposed method learns to effectively warp geometric features in a direction that facilitates the labeling of sulci while mitigating the impact of anatomical variability. Moreover, we introduce a guided-attention mechanism that takes into account the extent of deformation induced by the learned warping. This extracts discriminative features that emphasize sulcal ROIs while suppressing irrelevant information from unlabeled regions. In the experiments, we evaluate the proposed method on 8 sulci of the posterior medial cortex. Our method outperforms existing methods, particularly on the putative tertiary sulci. The code is publicly available at https://github.com/Shape-Lab/DSPHARM-Net.
Volume 44, Issue 2, pp. 915-926.
Citations: 0
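The guided-attention mechanism weights features according to how strongly the learned warp deforms each location. The toy NumPy sketch below gates per-vertex features by deformation magnitude; the sigmoid gating and normalization are hypothetical stand-ins, since the actual module is learned end-to-end on a spherical mesh.

```python
import numpy as np

def deformation_guided_attention(features, displacement):
    """Re-weight per-vertex features by the magnitude of the learned warp, so
    strongly deformed (sulcus-relevant) regions are emphasised and flat,
    unlabeled regions are suppressed.
    features:     (V, C) geometric features per mesh vertex
    displacement: (V, 3) warp vectors produced by the deformation module
    """
    magnitude = np.linalg.norm(displacement, axis=1, keepdims=True)   # (V, 1)
    z = (magnitude - magnitude.mean()) / (magnitude.std() + 1e-8)
    gate = 1.0 / (1.0 + np.exp(-z))                                   # attention in (0, 1)
    return features * gate

rng = np.random.default_rng(0)
out = deformation_guided_attention(rng.normal(size=(100, 16)), rng.normal(size=(100, 3)))
print(out.shape)   # (100, 16)
```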
Translation Consistent Semi-Supervised Segmentation for 3D Medical Images
IEEE Transactions on Medical Imaging, Pub Date: 2024-09-26, DOI: 10.1109/TMI.2024.3468896
Authors: Yuyuan Liu; Yu Tian; Chong Wang; Yuanhong Chen; Fengbei Liu; Vasileios Belagiannis; Gustavo Carneiro
Abstract: 3D medical image segmentation methods have been successful, but their dependence on large amounts of voxel-level annotated data is a disadvantage that needs to be addressed, given the high cost of obtaining such annotation. Semi-supervised learning (SSL) addresses this issue by training models with a large unlabelled and a small labelled dataset. The most successful SSL approaches are based on consistency learning, which minimises the distance between model responses obtained from perturbed views of the unlabelled data. These perturbations usually keep the spatial input context between views fairly consistent, which may cause the model to learn segmentation patterns from the spatial input contexts instead of the foreground objects. In this paper, we introduce Translation Consistent Co-training (TraCoCo), a consistency-learning SSL method that perturbs the input data views by varying their spatial input context, allowing the model to learn segmentation patterns from foreground objects. Furthermore, we propose a new Confident Regional Cross entropy (CRC) loss, which improves training convergence and keeps the robustness to co-training pseudo-labelling mistakes. Our method yields state-of-the-art (SOTA) results on several 3D data benchmarks, such as the Left Atrium (LA), Pancreas-CT (Pancreas), and Brain Tumor Segmentation (BraTS19). Our method also attains the best results on a 2D-slice benchmark, namely the Automated Cardiac Diagnosis Challenge (ACDC), further demonstrating its effectiveness. Our code, training logs, and checkpoints are available at https://github.com/yyliu01/TraCoCo.
Volume 44, Issue 2, pp. 952-968.
Citations: 0
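TraCoCo's central perturbation varies the spatial input context between views. The NumPy sketch below illustrates that translation-consistency idea by comparing predictions only on the voxels shared by two shifted crops; the crop size, shift, and MSE comparison are illustrative assumptions, and the CRC loss and co-training machinery are omitted.

```python
import numpy as np

def overlapping_crops(volume, crop=64, shift=(8, 8, 8), rng=np.random):
    """Two crops of the same 3D volume whose spatial contexts differ by a
    random translation; their overlap is where consistency can be enforced.
    Assumes the volume is larger than crop + shift along every axis."""
    D, H, W = volume.shape
    z = rng.randint(0, D - crop - shift[0])
    y = rng.randint(0, H - crop - shift[1])
    x = rng.randint(0, W - crop - shift[2])
    crop_a = volume[z:z + crop, y:y + crop, x:x + crop]
    crop_b = volume[z + shift[0]:z + shift[0] + crop,
                    y + shift[1]:y + shift[1] + crop,
                    x + shift[2]:x + shift[2] + crop]
    return crop_a, crop_b

def overlap_consistency(pred_a, pred_b, shift=(8, 8, 8)):
    """Compare the two predictions only on the voxels the crops share."""
    sz, sy, sx = shift
    a = pred_a[sz:, sy:, sx:]
    b = pred_b[:a.shape[0], :a.shape[1], :a.shape[2]]
    return float(np.mean((a - b) ** 2))

vol = np.random.randn(96, 96, 96)
a, b = overlapping_crops(vol)
print(overlap_consistency(a, b))   # 0.0: the shared voxels are identical by construction
```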
Source Imaging Method Based on Spatial Smoothing and Edge Sparsity (SISSES) and Its Application to OPM-MEG
IEEE Transactions on Medical Imaging, Pub Date: 2024-09-25, DOI: 10.1109/TMI.2024.3467377
Authors: Wen Li; Nan An; Fuzhi Cao; Wenli Wang; Chunhui Wang; Weinan Xu; Yang Gao; Xiaolin Ning
Abstract: Source estimation in magnetoencephalography (MEG) involves solving a highly ill-posed problem without a unique solution. Accurate estimation of the time course and spatial extent of the source is important for studying the mechanisms of brain activity and for preoperative functional localization. Traditional methods tend to yield small-amplitude diffuse or large-amplitude focused source estimates. Recently, the structured sparsity-based source imaging algorithm has emerged as one of the most promising algorithms for improving source extent estimation. However, it suffers from a notable amplitude bias. To improve the spatiotemporal resolution of reconstructed sources, we propose a novel method called the source imaging method based on spatial smoothing and edge sparsity (SISSES). In this method, the temporal dynamics of sources are modeled using a set of temporal basis functions, and the spatial characteristics of the source are represented by a first-order Markov random field (MRF) model. In particular, sparse constraints are imposed on the MRF model residuals in the original and variation domains. Numerical simulations were conducted to validate SISSES. The results demonstrate that SISSES outperforms benchmark methods in estimating the time course, location, and extent of patch sources. Additionally, auditory and median nerve stimulation experiments were performed using a 31-channel optically pumped magnetometer MEG system, and SISSES was applied to source imaging of these data. The results demonstrate that SISSES correctly identified the source regions in which brain responses occurred at different times, demonstrating its feasibility for various practical applications.
Volume 44, Issue 2, pp. 969-981.
Citations: 0
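The first-order MRF model above ties each source to its spatial neighbours, so edge sparsity corresponds to sparsity of the amplitude differences across neighbouring vertices (the "variation domain"). The NumPy sketch below builds that graph-difference operator for a toy source chain; it only illustrates the variation-domain idea, and the temporal basis functions and the actual sparse penalties of SISSES are omitted.

```python
import numpy as np

def graph_difference_operator(edges, n_vertices):
    """First-order difference (graph gradient) matrix D, so that D @ s gives
    the amplitude differences across neighbouring source vertices; penalising
    sparsity of D @ s favours piecewise-constant (edge-sparse) estimates."""
    D = np.zeros((len(edges), n_vertices))
    for row, (i, j) in enumerate(edges):
        D[row, i], D[row, j] = 1.0, -1.0
    return D

# Toy chain of 5 sources: the flat patch [0, 1, 1, 1, 0] has only two
# non-zero differences, i.e. it is sparse in the variation domain.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
s = np.array([0.0, 1.0, 1.0, 1.0, 0.0])
print(graph_difference_operator(edges, 5) @ s)   # -> [-1.  0.  0.  1.]
```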
An Explainable Unified Framework of Spatio-Temporal Coupling Learning With Application to Dynamic Brain Functional Connectivity Analysis
IEEE Transactions on Medical Imaging, Pub Date: 2024-09-25, DOI: 10.1109/TMI.2024.3467384
Authors: Bin Gao; Aiju Yu; Chen Qiao; Vince D. Calhoun; Julia M. Stephen; Tony W. Wilson; Yu-Ping Wang
Abstract: Time-series data such as fMRI and MEG carry a wealth of inherent spatio-temporal coupling relationships, and their modeling via deep learning is essential for uncovering biological mechanisms. However, current machine learning models for mining spatio-temporal information usually overlook this intrinsic coupling association and offer poor explainability. In this paper, we present an explainable learning framework for spatio-temporal coupling. Specifically, this framework constructs a deep learning network based on spatio-temporal correlation, which can well integrate the time-varying coupled relationships between node representation and inter-node connectivity. Furthermore, it explores the spatio-temporal evolution at each time step, providing better explainability of the analysis results. Finally, we apply the proposed framework to brain dynamic functional connectivity (dFC) analysis. Experimental results demonstrate that it can effectively capture the variations in dFC during brain development and the evolution of spatio-temporal information at the resting state. Two distinct developmental functional connectivity (FC) patterns are identified. Specifically, the connectivity among regions related to emotional regulation decreases, while the connectivity associated with cognitive activities increases. In addition, children and young adults display notable cyclic fluctuations in resting-state brain dFC.
Volume 44, Issue 2, pp. 941-951.
Citations: 0
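For readers unfamiliar with dynamic functional connectivity (dFC), the conventional quantity being modeled is a time series of ROI-by-ROI correlation matrices. A standard sliding-window estimate is sketched below in NumPy; the window and step sizes are arbitrary choices, and the paper's framework learns this time-varying coupling jointly with node representations rather than computing it this way.

```python
import numpy as np

def sliding_window_dfc(timeseries, win=30, step=5):
    """Classical sliding-window estimate of dynamic functional connectivity:
    one ROI-by-ROI correlation matrix per window.
    timeseries: (T, n_roi) BOLD or MEG time courses per region of interest."""
    T, n_roi = timeseries.shape
    mats = []
    for start in range(0, T - win + 1, step):
        window = timeseries[start:start + win]          # (win, n_roi)
        mats.append(np.corrcoef(window, rowvar=False))  # (n_roi, n_roi)
    return np.stack(mats)                               # (n_windows, n_roi, n_roi)

# e.g. 200 time points and 10 ROIs of synthetic fMRI-like signals
dfc = sliding_window_dfc(np.random.randn(200, 10))
print(dfc.shape)   # (35, 10, 10)
```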
UTSRMorph: A Unified Transformer and Superresolution Network for Unsupervised Medical Image Registration
IEEE Transactions on Medical Imaging, Pub Date: 2024-09-25, DOI: 10.1109/TMI.2024.3467919
Authors: Runshi Zhang; Hao Mo; Junchen Wang; Bimeng Jie; Yang He; Nenghao Jin; Liang Zhu
Abstract: Complicated image registration is a key issue in medical image analysis, and deep learning-based methods have achieved better results than traditional methods. These methods include ConvNet-based and Transformer-based approaches. Although ConvNets can effectively utilize local information to reduce redundancy via small-neighborhood convolution, the limited receptive field results in an inability to capture global dependencies. Transformers can establish long-distance dependencies via a self-attention mechanism; however, the intense computation of the relationships among all tokens leads to high redundancy. We propose a novel unsupervised image registration method named the unified Transformer and superresolution (UTSRMorph) network, which can enhance feature representation learning in the encoder and generate detailed displacement fields in the decoder to overcome these problems. We first propose a fusion attention block to integrate the advantages of ConvNets and Transformers, which inserts a ConvNet-based channel attention module into a multihead self-attention module. The overlapping attention block, a novel cross-attention method, uses overlapping windows to obtain abundant correlations with match information from a pair of images. The blocks are then flexibly stacked into a new, powerful encoder. The decoder's generation of a high-resolution deformation displacement field from low-resolution features is treated as a superresolution process. Specifically, a superresolution module is employed to replace interpolation upsampling, which can overcome feature degradation. UTSRMorph was compared with state-of-the-art registration methods on 3D brain MR (OASIS, IXI) and MR-CT (abdomen, craniomaxillofacial) datasets. The qualitative and quantitative results indicate that UTSRMorph achieves relatively better performance. The code and datasets are publicly available at https://github.com/Runshi-Zhang/UTSRMorph.
Volume 44, Issue 2, pp. 891-902.
Citations: 0
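The fusion attention block inserts a ConvNet-based channel attention module into multi-head self-attention. As a point of reference, a squeeze-and-excitation style channel attention is sketched below in NumPy with random weights; the exact placement inside the transformer block and the learned parameters of UTSRMorph are not reproduced here.

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation style channel attention: global pooling, a small
    bottleneck, and a sigmoid gate that re-scales each channel.
    feature_map: (C, H, W);  w1: (C//r, C);  w2: (C, C//r)"""
    squeeze = feature_map.mean(axis=(1, 2))          # (C,) global average pooling
    hidden = np.maximum(w1 @ squeeze, 0.0)           # ReLU bottleneck, (C//r,)
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid gate, (C,)
    return feature_map * scale[:, None, None]

C, H, W = 8, 16, 16
rng = np.random.default_rng(0)
out = channel_attention(rng.normal(size=(C, H, W)),
                        w1=rng.normal(size=(C // 4, C)),
                        w2=rng.normal(size=(C, C // 4)))
print(out.shape)   # (8, 16, 16)
```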
Enhanced Multimodal Low-Rank Embedding-Based Feature Selection Model for Multimodal Alzheimer's Disease Diagnosis
IEEE Transactions on Medical Imaging, Pub Date: 2024-09-20, DOI: 10.1109/TMI.2024.3464861
Authors: Zhi Chen; Yongguo Liu; Yun Zhang; Jiajing Zhu; Qiaoqin Li; Xindong Wu
Abstract: Identification of Alzheimer's disease (AD) with multimodal neuroimaging data has been receiving increasing attention. However, the presence of numerous redundant features and corrupted neuroimages within multimodal datasets poses significant challenges for existing methods. In this paper, we propose a feature selection method named Enhanced Multimodal Low-rank Embedding (EMLE) for multimodal AD diagnosis. Unlike previous methods utilizing convex relaxations of the $\ell_{2,0}$-norm, EMLE exploits an $\ell_{2,\gamma}$-norm regularized projection matrix to obtain an embedding representation and select informative features jointly for each modality. The $\ell_{2,\gamma}$-norm, employing an upper-bounded nonconvex Minimax Concave Penalty (MCP) function to characterize sparsity, offers a superior approximation of the $\ell_{2,0}$-norm compared to other convex relaxations. Next, a similarity graph is learned based on the self-expressiveness property to increase robustness to corrupted data. As the approximation coefficient vectors of samples from the same class should be highly correlated, a norm introduced by the MCP function, i.e., the matrix $\gamma$-norm, is applied to constrain the rank of the graph. Furthermore, recognizing that diverse modalities should share an underlying structure related to AD, we establish a consensus graph for all modalities to unveil intrinsic structures across multiple modalities. Finally, we fuse the embedding representations of all modalities into the label space to incorporate supervisory information. The results of extensive experiments on the Alzheimer's Disease Neuroimaging Initiative datasets verify the discriminability of the features selected by EMLE.
Volume 44, Issue 2, pp. 815-827.
Citations: 0
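The MCP function referenced in the abstract is concave and upper-bounded, which is why it tracks $\ell_0$-type sparsity better than convex penalties such as the $\ell_1$ norm. A minimal NumPy sketch of one common MCP parameterisation follows; the specific $\lambda$ and $\gamma$ values are arbitrary and not taken from the paper.

```python
import numpy as np

def mcp(t, lam=1.0, gamma=3.0):
    """Minimax Concave Penalty in one common parameterisation:
    quadratic-minus-linear growth for |t| <= gamma*lam, then a constant cap."""
    t = np.abs(np.asarray(t, dtype=float))
    quad = lam * t - t ** 2 / (2.0 * gamma)   # growing, concave part
    flat = 0.5 * gamma * lam ** 2             # constant once |t| > gamma*lam
    return np.where(t <= gamma * lam, quad, flat)

x = np.linspace(-6, 6, 7)
print(np.round(mcp(x), 3))   # saturates at 1.5 for |t| >= 3 when lam=1, gamma=3
```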
IPNet: An Interpretable Network With Progressive Loss for Whole-Stage Colorectal Disease Diagnosis
IEEE Transactions on Medical Imaging, Pub Date: 2024-09-19, DOI: 10.1109/TMI.2024.3459910
Authors: Junhu Fu; Ke Chen; Qi Dou; Yun Gao; Yiping He; Pinghong Zhou; Shengli Lin; Yuanyuan Wang; Yi Guo
Abstract: Colorectal cancer plays a dominant role in cancer-related deaths, primarily due to the absence of obvious early-stage symptoms. Whole-stage colorectal disease diagnosis is crucial for assessing lesion evolution and determining treatment plans. However, locality differences and disease progression lead to intra-class disparities and inter-class similarities in colorectal lesion representation. In addition, interpretable algorithms explaining lesion progression are still lacking, making the prediction process a "black box". In this paper, we propose IPNet, a dual-branch interpretable network with a progressive loss for whole-stage colorectal disease diagnosis. The dual-branch architecture captures unbiased features representing diverse localities to suppress intra-class variation. The progressive loss function considers inter-class relationships, using prior knowledge of disease evolution to guide classification. Furthermore, a novel Grain-CAM is designed to interpret IPNet by visualizing pixel-wise attention maps from shallow to deep layers, providing regions semantically related to IPNet's progressive classification. We conducted whole-stage diagnosis on two image modalities, i.e., colorectal lesion classification on 129,893 endoscopic optical images and rectal tumor T-staging on 11,072 endoscopic ultrasound images. IPNet is shown to surpass other state-of-the-art algorithms, achieving accuracies of 93.15% and 89.62%, respectively. In particular, it establishes effective decision boundaries for challenges such as polyp vs. adenoma and T2 vs. T3. The results demonstrate an explainable attempt at whole-stage colorectal lesion classification, and rectal tumor T-staging by endoscopic ultrasound is also explored for the first time. IPNet is expected to be further applied, assisting physicians in whole-stage disease diagnosis and enhancing diagnostic interpretability.
Volume 44, Issue 2, pp. 789-800.
Citations: 0
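The progressive loss exploits the ordering of disease evolution so that confusing distant stages costs more than confusing adjacent ones. The abstract does not give the formula, so the NumPy sketch below shows one ordinal-aware way to encode that idea (cross entropy plus an expected stage-distance term); it should not be read as IPNet's actual loss.

```python
import numpy as np

def progressive_loss(probs, target, lam=0.5):
    """Cross entropy plus the expected distance (in stage order) between the
    prediction and the true stage, so mass placed on far-away stages is
    penalised more heavily than mass placed on adjacent stages."""
    n_classes = probs.shape[0]
    ce = -np.log(probs[target] + 1e-12)
    expected_distance = np.sum(probs * np.abs(np.arange(n_classes) - target))
    return float(ce + lam * expected_distance)

# Four ordered stages; both predictions give the true class probability 0.5,
# but the one that spreads mass onto a distant stage is penalised more.
print(progressive_loss(np.array([0.5, 0.4, 0.1, 0.0]), target=0))   # ~0.99
print(progressive_loss(np.array([0.5, 0.0, 0.1, 0.4]), target=0))   # ~1.39
```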
Physics-Informed DeepMRI: k-Space Interpolation Meets Heat Diffusion
IEEE Transactions on Medical Imaging, Pub Date: 2024-09-18, DOI: 10.1109/TMI.2024.3462988
Authors: Zhuo-Xu Cui; Congcong Liu; Xiaohong Fan; Chentao Cao; Jing Cheng; Qingyong Zhu; Yuanyuan Liu; Sen Jia; Haifeng Wang; Yanjie Zhu; Yihang Zhou; Jianping Zhang; Qiegen Liu; Dong Liang
Abstract: Recently, diffusion models have shown considerable promise for MRI reconstruction. However, extensive experimentation has revealed that these models are prone to generating artifacts due to the inherent randomness involved in generating images from pure noise. To achieve more controlled image reconstruction, we reexamine the concept of interpolatable physical priors in k-space data, focusing specifically on the interpolation of high-frequency (HF) k-space data from low-frequency (LF) k-space data. Broadly, this insight drives a shift in the generation paradigm from random noise to a more deterministic approach grounded in the existing LF k-space data. Building on this, we first establish a relationship between the interpolation of HF k-space data from LF k-space data and the reverse heat diffusion process, providing a fundamental framework for designing diffusion models that generate the missing HF data. To further improve reconstruction accuracy, we integrate a traditional physics-informed k-space interpolation model into our diffusion framework as a data fidelity term. Experimental validation using publicly available datasets demonstrates that our approach significantly surpasses traditional k-space interpolation methods, deep learning-based k-space interpolation techniques, and conventional diffusion models, particularly in HF regions. Finally, we assess the generalization performance of our model across various out-of-distribution datasets. Our code is available at https://github.com/ZhuoxuCui/Heat-Diffusion.
Volume 43, Issue 10, pp. 3503-3520.
Citations: 0
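The link between k-space interpolation and heat diffusion rests on the fact that forward heat diffusion multiplies each Fourier coefficient by exp(-t|k|²), so high-frequency content decays far faster than low-frequency content; the paper's generative model is built around reversing that decay. The NumPy sketch below only demonstrates the forward decay on a random image; the diffusion time and the LF/HF band definitions are arbitrary choices for illustration.

```python
import numpy as np

n = 64
image = np.random.randn(n, n)
kspace = np.fft.fft2(image)

# Forward heat diffusion expressed in k-space: each coefficient decays as
# exp(-t*|k|^2), so high-frequency (HF) content vanishes much faster than
# low-frequency (LF) content.
t = 200.0
kx, ky = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
diffused = kspace * np.exp(-t * (kx ** 2 + ky ** 2))

radius = np.sqrt(kx ** 2 + ky ** 2)
bands = {"LF": radius < 0.1, "HF": radius >= 0.25}
for name, band in bands.items():
    kept = np.abs(diffused[band]).sum() / np.abs(kspace[band]).sum()
    print(f"{name} energy retained: {kept:.2%}")   # HF is essentially wiped out
```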
Source Extent Estimation in OPM-MEG: A Two-Stage Champagne Approach
IEEE Transactions on Medical Imaging, Pub Date: 2024-09-17, DOI: 10.1109/TMI.2024.3462415
Authors: Wen Li; Fuzhi Cao; Nan An; Wenli Wang; Chunhui Wang; Weinan Xu; Yang Gao; Xiaolin Ning
Abstract: The accurate estimation of source extent using magnetoencephalography (MEG) is important for the study of preoperative functional localization in epilepsy. Conventional source imaging techniques tend to produce diffuse or focused source estimates that fail to capture the source extent accurately. To address this issue, we propose a novel method called the two-stage Champagne approach (TS-Champagne). TS-Champagne divides source extent estimation into two stages. In the first stage, the Champagne algorithm with noise learning (Champagne-NL) is employed to obtain an initial source estimate. In the second stage, spatial basis functions are constructed from the initial source estimate. These spatial basis functions consist of potential activation source centers and their neighbors, and serve as spatial priors, which are incorporated into Champagne-NL to obtain a final source estimate. We evaluated the performance of TS-Champagne through numerical simulations. TS-Champagne achieved more robust performance under various conditions (i.e., varying source extent, number of sources, signal-to-noise level, and correlation coefficients between sources) than Champagne-NL and several benchmark methods. Furthermore, auditory and median nerve stimulation experiments were conducted using a 31-channel optically pumped magnetometer (OPM)-MEG system. The validation results indicated that the reconstructed source activity was spatially and temporally consistent with the neurophysiological results of previous OPM-MEG studies, further demonstrating the feasibility of TS-Champagne for practical applications.
Volume 44, Issue 2, pp. 828-840.
Citations: 0
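In the second stage, spatial basis functions are built from the initial estimate as potential activation centres plus their neighbours. The NumPy sketch below constructs such basis functions on a toy mesh; the amplitude threshold and the first-order neighbourhood are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np

def spatial_basis_functions(initial_estimate, neighbors, rel_threshold=0.5):
    """Treat vertices whose first-stage amplitude exceeds a fraction of the
    maximum as potential activation centres; each basis function covers a
    centre plus its mesh neighbours.
    initial_estimate: (V,) amplitudes from stage one (e.g. Champagne-NL)
    neighbors:        dict vertex -> list of adjacent vertices on the mesh"""
    amp = np.abs(initial_estimate)
    centres = np.flatnonzero(amp >= rel_threshold * amp.max())
    bases = []
    for c in centres:
        support = [c] + list(neighbors.get(c, []))
        basis = np.zeros_like(amp)
        basis[support] = 1.0
        bases.append(basis)
    return np.stack(bases)   # (n_centres, V), one spatial prior per row

# Toy 6-vertex "mesh" arranged as a chain.
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
est = np.array([0.0, 0.1, 0.9, 1.0, 0.2, 0.0])
print(spatial_basis_functions(est, nbrs))   # two basis functions around vertices 2 and 3
```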