Latest Articles in IEEE Transactions on Medical Imaging

Amyloid-β Deposition Prediction With Large Language Model Driven and Task-Oriented Learning of Brain Functional Networks
IEEE transactions on medical imaging Pub Date : 2025-01-03 DOI: 10.1109/TMI.2024.3525022
Yuxiao Liu;Mianxin Liu;Yuanwang Zhang;Yihui Guan;Qihao Guo;Fang Xie;Dinggang Shen
{"title":"Amyloid-β Deposition Prediction With Large Language Model Driven and Task-Oriented Learning of Brain Functional Networks","authors":"Yuxiao Liu;Mianxin Liu;Yuanwang Zhang;Yihui Guan;Qihao Guo;Fang Xie;Dinggang Shen","doi":"10.1109/TMI.2024.3525022","DOIUrl":"10.1109/TMI.2024.3525022","url":null,"abstract":"Amyloid-<inline-formula> <tex-math>$beta $ </tex-math></inline-formula> positron emission tomography can reflect the Amyloid-<inline-formula> <tex-math>$beta $ </tex-math></inline-formula> protein deposition in the brain and thus serves as one of the golden standards for Alzheimer’s disease (AD) diagnosis. However, its practical cost and high radioactivity hinder its application in large-scale early AD screening. Recent neuroscience studies suggest a strong association between changes in functional connectivity network (FCN) derived from functional MRI (fMRI), and deposition patterns of Amyloid-<inline-formula> <tex-math>$beta $ </tex-math></inline-formula> protein in the brain. This enables an FCN-based approach to assess the Amyloid-<inline-formula> <tex-math>$beta $ </tex-math></inline-formula> protein deposition with less expense and radioactivity. However, an effective FCN-based Amyloid-<inline-formula> <tex-math>$beta $ </tex-math></inline-formula> assessment remains lacking for practice. In this paper, we introduce a novel deep learning framework tailored for this task. Our framework comprises three innovative components: 1) a pre-trained Large Language Model Nodal Embedding Encoder, designed to extract task-related features from fMRI signals; 2) a task-oriented Hierarchical-order FCN Learning module, used to enhance the representation of complex correlations among different brain regions for improved prediction of Amyloid-<inline-formula> <tex-math>$beta $ </tex-math></inline-formula> deposition; and 3) task-feature consistency losses for promoting similarity between predicted and real Amyloid-<inline-formula> <tex-math>$beta $ </tex-math></inline-formula> values and ensuring effectiveness of predicted Amyloid-<inline-formula> <tex-math>$beta $ </tex-math></inline-formula> in downstream classification task. Experimental results show superiority of our method over several state-of-the-art FCN-based methods. Additionally, we identify crucial functional sub-networks for predicting Amyloid-<inline-formula> <tex-math>$beta $ </tex-math></inline-formula> depositions. The proposed method is anticipated to contribute valuable insights into the understanding of mechanisms of AD and its prevention.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 4","pages":"1809-1820"},"PeriodicalIF":0.0,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142924506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Supervised Information Mining From Weakly Paired Images for Breast IHC Virtual Staining
IEEE transactions on medical imaging Pub Date : 2025-01-03 DOI: 10.1109/TMI.2024.3525299
Xianchao Guan;Zheng Zhang;Yifeng Wang;Yueheng Li;Yongbing Zhang
{"title":"Supervised Information Mining From Weakly Paired Images for Breast IHC Virtual Staining","authors":"Xianchao Guan;Zheng Zhang;Yifeng Wang;Yueheng Li;Yongbing Zhang","doi":"10.1109/TMI.2024.3525299","DOIUrl":"10.1109/TMI.2024.3525299","url":null,"abstract":"Immunohistochemistry (IHC) examination is essential to determine the tumour subtypes, provide key prognostic factors, and develop personalized treatment plans for breast cancer. However, compared to Hematoxylin and Eosin (H&E) staining, the preparation process of IHC staining is more complex and expensive, which limits its application in clinical practice. Therefore, H&E to IHC stain transfer may be an ideal solution to obtain IHC staining. To ensure high transferring quality, it would be much more desirable to exploit the supervised information between adjacent layer images of the same tissue, which are stained by H&E and IHC stainings, respectively. Nevertheless, adjacent layer tissue images are not accurately paired at the pixel level, which poses significant challenges to network training. To address this problem, we propose a generative adversarial network for breast IHC virtual staining, which contains an optimal transport-based supervised information mining (OT-SIM) mechanism and a pathological correlation-based supervised information mining (PC-SIM) mechanism. The OT-SIM guides the network in mining matching consistency between H&E images and the adjacent layer’s real IHC images, providing as much instance-level supervision as possible. The PC-SIM further explores the consistency between the correlation among virtual IHC images and the correlation among real IHC images, providing batch-level supervision. Extensive experiments show the superiority of our method on two breast tissue benchmark datasets compared to the state-of-the-art methods both quantitatively and qualitatively. The code is available at <uri>https://github.com/xianchaoguan/SIM-GAN</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 5","pages":"2120-2130"},"PeriodicalIF":0.0,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142924507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Exploring Contrastive Pre-Training for Domain Connections in Medical Image Segmentation
IEEE transactions on medical imaging Pub Date : 2025-01-03 DOI: 10.1109/TMI.2024.3525095
Zequn Zhang;Yun Jiang;Yunnan Wang;Baao Xie;Wenyao Zhang;Yuhang Li;Zhen Chen;Xin Jin;Wenjun Zeng
{"title":"Exploring Contrastive Pre-Training for Domain Connections in Medical Image Segmentation","authors":"Zequn Zhang;Yun Jiang;Yunnan Wang;Baao Xie;Wenyao Zhang;Yuhang Li;Zhen Chen;Xin Jin;Wenjun Zeng","doi":"10.1109/TMI.2024.3525095","DOIUrl":"10.1109/TMI.2024.3525095","url":null,"abstract":"Unsupervised domain adaptation (UDA) in medical image segmentation aims to improve the generalization of deep models by alleviating domain gaps caused by inconsistency across equipment, imaging protocols, and patient conditions. However, existing UDA works remain insufficiently explored and present great limitations: 1) Exhibit cumbersome designs that prioritize aligning statistical metrics and distributions, which limits the model’s flexibility and generalization while also overlooking the potential knowledge embedded in unlabeled data; 2) More applicable in a certain domain, lack the generalization capability to handle diverse shifts encountered in clinical scenarios. To overcome these limitations, we introduce MedCon, a unified framework that leverages general unsupervised contrastive pre-training to establish domain connections, effectively handling diverse domain shifts without tailored adjustments. Specifically, it initially explores a general contrastive pre-training to establish domain connections by leveraging the rich prior knowledge from unlabeled images. Thereafter, the pre-trained backbone is fine-tuned using source-based images to ultimately identify per-pixel semantic categories. To capture both intra- and inter-domain connections of anatomical structures, we construct positive-negative pairs from a hybrid aspect of both local and global scales. In this regard, a shared-weight encoder-decoder is employed to generate pixel-level representations, which are then mapped into hyper-spherical space using a non-learnable projection head to facilitate positive pair matching. Comprehensive experiments on diverse medical image datasets confirm that MedCon outperforms previous methods by effectively managing a wide range of domain shifts and showcasing superior generalization capabilities.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 4","pages":"1686-1698"},"PeriodicalIF":0.0,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142924430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DARCS: Memory-Efficient Deep Compressed Sensing Reconstruction for Acceleration of 3D Whole-Heart Coronary MR Angiography
IEEE transactions on medical imaging Pub Date : 2025-01-01 DOI: 10.1109/TMI.2024.3524717
Zhihao Xue;Fan Yang;Juan Gao;Zhuo Chen;Hao Peng;Chao Zou;Hang Jin;Chenxi Hu
{"title":"DARCS: Memory-Efficient Deep Compressed Sensing Reconstruction for Acceleration of 3D Whole-Heart Coronary MR Angiography","authors":"Zhihao Xue;Fan Yang;Juan Gao;Zhuo Chen;Hao Peng;Chao Zou;Hang Jin;Chenxi Hu","doi":"10.1109/TMI.2024.3524717","DOIUrl":"10.1109/TMI.2024.3524717","url":null,"abstract":"Three-dimensional coronary magnetic res- onance angiography (CMRA) requires reconstruction algorithms that can significantly suppress the artifacts encountered in heavily undersampled acquisitions. While unrolling-based deep reconstruction methods have ach- ieved state-of-the-art performance on 2D image recons- truction, their application in 3D reconstruction is hindered by the large amount of memory required to train an unrolled network. In this study, we propose a memory-efficient deep compressed sensing method that employs a sparsifying transform based on a pre-trained artifact estimation network. The artifact image estimated by a well-trained network is expected to be sparse when the input image is artifact-free and less sparse when the input image has artifacts. Thus, the artifact estimation network can be used as an inherent sparsifying transform. The proposed method, De-Aliasing Regularization-based Compressed Sensing (DARCS), was compared with a patch-based low-rank method, de-aliasing generative adversarial network (DAGAN), 3D model-based deep learning (MoDL), plug-and-play, and AI-assisted compressed sensing (AI-CS) in terms of 3D CMRA acceleration. The results demonstrate that DARCS surpasses the reconstruction quality of the comparison methods, by approximately 2 dB in peak signal-to-noise ratio (PSNR). Furthermore, the proposed method generalizes well to different undersampling rates, patterns, and noise levels, with a memory usage of only 63% of that needed by 3D MoDL. In conclusion, DARCS improves reconstruction quality for 3D CMRA with reduced memory burden.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 5","pages":"2105-2119"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142911720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MA-SAM: A Multi-Atlas Guided SAM Using Pseudo Mask Prompts Without Manual Annotation for Spine Image Segmentation
IEEE transactions on medical imaging Pub Date : 2025-01-01 DOI: 10.1109/TMI.2024.3524570
Dingwei Fan;Junyong Zhao;Chunlin Li;Xinlong Wang;Ronghan Zhang;Qi Zhu;Mingliang Wang;Haipeng Si;Daoqiang Zhang;Liang Sun
{"title":"MA-SAM: A Multi-Atlas Guided SAM Using Pseudo Mask Prompts Without Manual Annotation for Spine Image Segmentation","authors":"Dingwei Fan;Junyong Zhao;Chunlin Li;Xinlong Wang;Ronghan Zhang;Qi Zhu;Mingliang Wang;Haipeng Si;Daoqiang Zhang;Liang Sun","doi":"10.1109/TMI.2024.3524570","DOIUrl":"10.1109/TMI.2024.3524570","url":null,"abstract":"Accurate spine segmentation is crucial in clinical diagnosis and treatment of spine diseases. However, due to the complexity of spine anatomical structure, it has remained a challenging task to accurately segment spine images. Recently, the segment anything model (SAM) has achieved superior performance for image segmentation. However, generating high-quality points and boxes is still laborious for high-dimensional medical images. Meanwhile, an accurate mask is difficult to obtain. To address these issues, in this paper, we propose a multi-atlas guided SAM using multiple pseudo mask prompts for spine image segmentation, called MA-SAM. Specifically, we first design a multi-atlas prompt generation sub-network to obtain the anatomical structure prompts. More specifically, we use a network to obtain coarse mask of the input image. Then atlas label maps are registered to the coarse mask. Subsequently, a SAM-based segmentation sub-network is used to segment images. Specifically, we first utilize adapters to fine-tune the image encoder. Meanwhile, we use a prompt encoder to learn the anatomical structure prior knowledge from the multi-atlas prompts. Finally, a mask decoder is used to fuse the image and prompt features to obtain the segmentation results. Moreover, to boost the segmentation performance, different scale features from the prompt encoder are concatenated to the Upsample Block in the mask decoder. We validate our MA-SAM on the two spine segmentation tasks, including spine anatomical structure segmentation with CT images and lumbosacral plexus segmentation with MR images. Experiment results suggest that our method achieves better segmentation performance than SAM with points, boxes, and mask prompts.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 5","pages":"2157-2169"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142911721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Imbalanced Medical Image Segmentation With Pixel-Dependent Noisy Labels
IEEE transactions on medical imaging Pub Date : 2024-12-31 DOI: 10.1109/TMI.2024.3524253
Erjian Guo;Zicheng Wang;Zhen Zhao;Luping Zhou
{"title":"Imbalanced Medical Image Segmentation With Pixel-Dependent Noisy Labels","authors":"Erjian Guo;Zicheng Wang;Zhen Zhao;Luping Zhou","doi":"10.1109/TMI.2024.3524253","DOIUrl":"10.1109/TMI.2024.3524253","url":null,"abstract":"Accurate medical image segmentation is often hindered by noisy labels in training data, due to the challenges of annotating medical images. Prior research works addressing noisy labels tend to make class-dependent assumptions, overlooking the pixel-dependent nature of most noisy labels. Furthermore, existing methods typically apply fixed thresholds to filter out noisy labels, risking the removal of minority classes and consequently degrading segmentation performance. To bridge these gaps, our proposed framework, Collaborative Learning with Curriculum Selection (CLCS), addresses pixel-dependent noisy labels with class imbalance. CLCS advances the existing works by i) treating noisy labels as pixel-dependent and addressing them through a collaborative learning framework, and ii) employing a curriculum dynamic thresholding approach adapting to model learning progress to select clean data samples to mitigate the class imbalance issue, and iii) applying a noise balance loss to noisy data samples to improve data utilization instead of discarding them outright. Specifically, our CLCS contains two modules: Curriculum Noisy Label Sample Selection (CNS) and Noise Balance Loss (NBL). In the CNS module, we designed a two-branch network with discrepancy loss for collaborative learning so that different feature representations of the same instance could be extracted from distinct views and used to vote the class probabilities of pixels. Besides, a curriculum dynamic threshold is adopted to select clean-label samples through probability voting. In the NBL module, instead of directly dropping the suspiciously noisy labels, we further adopt a robust loss to leverage such instances to boost the performance. We verify our CLCS on two benchmarks with different types of segmentation noise. Our method can obtain new state-of-the-art performance in different settings, yielding more than 3% Dice and mIoU improvements. Our code is available at <uri>https://github.com/Erjian96/CLCS.git</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 5","pages":"2016-2027"},"PeriodicalIF":0.0,"publicationDate":"2024-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142908558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Recruiting Teacher IF Modality for Nephropathy Diagnosis: A Customized Distillation Method With Attention-Based Diffusion Network
IEEE transactions on medical imaging Pub Date : 2024-12-31 DOI: 10.1109/TMI.2024.3524544
Mai Xu;Ning Dai;Lai Jiang;Yibing Fu;Xin Deng;Shengxi Li
{"title":"Recruiting Teacher IF Modality for Nephropathy Diagnosis: A Customized Distillation Method With Attention-Based Diffusion Network","authors":"Mai Xu;Ning Dai;Lai Jiang;Yibing Fu;Xin Deng;Shengxi Li","doi":"10.1109/TMI.2024.3524544","DOIUrl":"10.1109/TMI.2024.3524544","url":null,"abstract":"The joint use of multiple modalities for medical image processing has been widely studied in recent years. The fusion of information from different modalities has demonstrated the performance improvement for a lot of medical tasks. For nephropathy diagnosis, immunofluorescence (IF) is one of the most widely-used multi-modality medical images due to its ease of acquisition and the effectiveness for certain nephropathy. However, the existing methods mainly assume different modalities have the equal effect on the diagnosis task, failing to exploit multi-modality knowledge in details. To avoid this disadvantage, this paper proposes a novel customized multi-teacher knowledge distillation framework to transfer knowledge from the trained single-modality teacher networks to a multi-modality student network. Specifically, a new attention-based diffusion network is developed for IF based diagnosis, considering global, local, and modality attention. Besides, a teacher recruitment module and diffusion-aware distillation loss are developed to learn to select the effective teacher networks based on the medical priors of the input IF sequence. The experimental results in the test and external datasets show that the proposed method has a better nephropathy diagnosis performance and generalizability, in comparison with the state-of-the-art methods.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 5","pages":"2028-2040"},"PeriodicalIF":0.0,"publicationDate":"2024-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142908557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhanced DTCMR With Cascaded Alignment and Adaptive Diffusion
IEEE transactions on medical imaging Pub Date : 2024-12-30 DOI: 10.1109/TMI.2024.3523431
Fanwen Wang;Yihao Luo;Camila Munoz;Ke Wen;Yaqing Luo;Jiahao Huang;Yinzhe Wu;Zohya Khalique;Maria Molto;Ramyah Rajakulasingam;Ranil de Silva;Dudley J. Pennell;Pedro F. Ferreira;Andrew D. Scott;Sonia Nielles-Vallespin;Guang Yang
{"title":"Enhanced DTCMR With Cascaded Alignment and Adaptive Diffusion","authors":"Fanwen Wang;Yihao Luo;Camila Munoz;Ke Wen;Yaqing Luo;Jiahao Huang;Yinzhe Wu;Zohya Khalique;Maria Molto;Ramyah Rajakulasingam;Ranil de Silva;Dudley J. Pennell;Pedro F. Ferreira;Andrew D. Scott;Sonia Nielles-Vallespin;Guang Yang","doi":"10.1109/TMI.2024.3523431","DOIUrl":"10.1109/TMI.2024.3523431","url":null,"abstract":"Diffusion tensor cardiovascular magnetic resonance (DTCMR) is the only non-invasive method for visualizing myocardial microstructure, but it is challenged by inconsistent breath-holds and imperfect cardiac triggering, causing in-plane shifts and through-plane warping with an inadequate tensor fitting. While rigid registration corrects in-plane shifts, deformable registration risks distorting the diffusion distribution, and selecting a reference frame among low SNR frames is challenging. Existing pairwise deep learning and iterative methods are unsuitable for DTCMR due to their inability to handle the drastic in-plane motion and disentangle the diffusion contrast distortion with through-plane motions on low SNR frames, which compromises the accuracy of clinical biomarker tensor estimation. Our study introduces a novel deep learning framework incorporating tensor information for groupwise deformable registration, effectively correcting intra-subject inter-frame motion. This framework features a cascaded registration branch for addressing in-plane and through-plane motions and a parallel branch for generating pseudo-frames with diffusion contrasts and template updates to guide registration with a refined loss function and denoising. We evaluated our method on four DTCMR-specific metrics using data from over 900 cases from 2012 to 2023. Our method outperformed three traditional and two deep learning-based methods, achieving reduced fitting errors, the lowest percentage of negative eigenvalues at 0.446%, the highest R2 of HA line profiles at 0.911, no negative Jacobian Determinant, and the shortest reference time of 0.06 seconds per case. In conclusion, our deep learning framework significantly improves DTCMR imaging by effectively correcting inter-frame motion and surpassing existing methods across multiple metrics, demonstrating substantial clinical potential.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 4","pages":"1866-1877"},"PeriodicalIF":0.0,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142905100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Single-Sided Magnetic Particle Imaging Device With Offset Field Based Spatial Encoding
IEEE transactions on medical imaging Pub Date : 2024-12-30 DOI: 10.1109/TMI.2024.3522979
Qibin Wang;Zhonghao Zhang;Lei Li;Franziska Schrank;Yu Zeng;Pengyue Guo;Harald Radermacher;Volkmar Schulz;Shouping Zhu
{"title":"Single-Sided Magnetic Particle Imaging Device With Offset Field Based Spatial Encoding","authors":"Qibin Wang;Zhonghao Zhang;Lei Li;Franziska Schrank;Yu Zeng;Pengyue Guo;Harald Radermacher;Volkmar Schulz;Shouping Zhu","doi":"10.1109/TMI.2024.3522979","DOIUrl":"10.1109/TMI.2024.3522979","url":null,"abstract":"Single-sided Magnetic Particle Imaging (MPI) devices enable easy imaging of areas outside the MPI device, allowing objects of any size to be imaged and improving clinical applicability. However, current single-sided MPI devices face challenges in generating high-gradient selection fields and experience a decrease in gradient strength with increasing detection depth, which limits the detection depth and resolution. We introduce a novel spatial encoding method. This method combines high-frequency alternating excitation fields with variable offset fields, leveraging the inherent characteristic of single-sided MPI devices where the magnetic field strength attenuates with distance. Consequently, the harmonic signals of particle responses at different spatial positions vary. By manipulating multiple offset fields, we correlate the nonlinear harmonic responses of magnetic particles with spatial position data. In this work, we employed an image reconstruction using a system matrix approach, which takes into account the spatial distribution of the magnetic field during the movement of the device within the field of view. Our proposed encoding approach eliminates the need for the classical selection field and directly links the spatial resolution to the strength and spatial distribution of the magnetic field, thus reducing the dependency of resolution on selection field gradients strength. We have demonstrated the feasibility of the proposed method through simulations and phantom measurements.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 4","pages":"1878-1889"},"PeriodicalIF":0.0,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142905101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MRI Motion Correction Through Disentangled CycleGAN Based on Multi-Mask K-Space Subsampling
IEEE transactions on medical imaging Pub Date : 2024-12-30 DOI: 10.1109/TMI.2024.3523949
Gang Chen;Han Xie;Xinglong Rao;Xinjie Liu;Martins Otikovs;Lucio Frydman;Peng Sun;Zhi Zhang;Feng Pan;Lian Yang;Xin Zhou;Maili Liu;Qingjia Bao;Chaoyang Liu
{"title":"MRI Motion Correction Through Disentangled CycleGAN Based on Multi-Mask K-Space Subsampling","authors":"Gang Chen;Han Xie;Xinglong Rao;Xinjie Liu;Martins Otikovs;Lucio Frydman;Peng Sun;Zhi Zhang;Feng Pan;Lian Yang;Xin Zhou;Maili Liu;Qingjia Bao;Chaoyang Liu","doi":"10.1109/TMI.2024.3523949","DOIUrl":"10.1109/TMI.2024.3523949","url":null,"abstract":"This work proposes a new retrospective motion correction method, termed DCGAN-MS, which employs disentangled CycleGAN based on multi-mask k-space subsampling (DCGAN-MS) to address the image domain translation challenge. The multi-mask k-space subsampling operator is utilized to decrease the complexity of motion artifacts by randomly discarding motion-affected k-space lines. The network then disentangles the subsampled, motion-corrupted images into content and artifact features using specialized encoders, and generates motion-corrected images by decoding the content features. By utilizing multi-mask k-space subsampling, motion artifact features become more sparse compared to the original image domain, enhancing the efficiency of the DCGAN-MS network. This method effectively corrects motion artifacts in clinical gadoxetic acid-enhanced human liver MRI, human brain MRI from fastMRI, and preclinical rodent brain MRI. Quantitative improvements are demonstrated with SSIM values increasing from 0.75 to 0.86 for human liver MRI with simulated motion artifacts, and from 0.72 to 0.82 for rodent brain MRI with simulated motion artifacts. Correspondingly, PSNR values increased from 26.09 to 31.09 and from 25.10 to 31.77. The method’s performance was further validated on clinical and preclinical motion-corrupted MRI using the Kernel Inception Distance (KID) and Fréchet Inception Distance (FID) metrics. Additionally, ablation experiments were conducted to confirm the effectiveness of the multi-mask k-space subsampling approach.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 4","pages":"1907-1921"},"PeriodicalIF":0.0,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142905099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0