Latest Articles in IEEE Transactions on Medical Imaging

FracFormer: Fracture Reduction Planning With Transformer-Based Shape Restoration and Fracture Data Simulation
IEEE transactions on medical imaging Pub Date : 2025-04-15 DOI: 10.1109/TMI.2025.3561030
Sutuke Yibulayimu;Yanzhen Liu;Yudi Sang;Jingjiang Qin;Chao Shi;Chendi Liang;Gang Zhu;Yu Wang;Chunpeng Zhao;Xinbao Wu
Abstract: Accurate orthopedic fracture reduction planning is essential for ensuring successful postoperative recovery and improving patient outcomes. However, current automatic methods are challenged by the complex and irregular fracture geometries and the scarcity of annotated training data. To address these challenges, we propose a novel approach that integrates learning-based shape restoration and fracture simulation. A transformer-based model is developed, which utilizes patch-to-patch shape translation and recursive fragment registration to iteratively refine fracture reduction poses. A deformable fracture generation model (DFGM) combines statistical shape modeling with clinically representative fracture patterns to generate diverse and realistic datasets, reducing the dependence on annotated samples. Tested on extensive clinical data with hipbone, sacrum, and femoral shaft fractures, the proposed method achieved mean errors of 1.85 mm and 3.40°, outperforming both template-based and existing learning-based methods. In addition, models trained solely on DFGM-synthesized data presented strong generalizability to real clinical data. The ablation experiments demonstrate the effectiveness of the fragment-aware network pipeline and the synthesis steps. Finally, a cadaver study with ground truth derived from the pre-injury scan further validated the performance of the method.
IEEE Transactions on Medical Imaging, vol. 44, no. 8, pp. 3270-3283. Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10965880
Citations: 0
Investigation of Class Separability Within Object Detection Models in Histopathology
IEEE transactions on medical imaging Pub Date : 2025-04-15 DOI: 10.1109/TMI.2025.3560134
Jonas Ammeling;Jonathan Ganz;Frauke Wilm;Katharina Breininger;Marc Aubreville
Abstract: Object detection is one of the most common tasks in histopathological image analysis, and generalization is a key requirement for the clinical applicability of deep object detection models. However, traditional evaluation metrics often fail to provide insights into why models fail on certain test cases, especially in the presence of domain shifts. In this work, we propose a novel quantitative method for assessing the discriminative power of a model's latent space. Our approach, applicable to all object detection models with known local correspondences such as the popular RetinaNet, FCOS, or YOLO approaches, allows tracing discrimination across layers and coordinates. We motivate, adapt, and evaluate two suitable metrics, the generalized discrimination value and the Hellinger distance, and incorporate them into our approach. Through empirical validation on real-world histopathology datasets, we demonstrate the effectiveness of our method in capturing model discrimination properties and providing insights for architectural optimization. This work contributes to bridging the gap between model performance evaluation and understanding the underlying mechanisms influencing model behavior.
IEEE Transactions on Medical Imaging, vol. 44, no. 8, pp. 3162-3174.
Citations: 0
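The Hellinger distance named in the abstract has a simple closed form for discrete distributions, which makes it convenient for comparing class-conditional histograms of a latent feature. The sketch below is a generic illustration of the metric only, not the authors' pipeline; the two histograms are invented for the example:

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions.
    Ranges from 0 (identical) to 1 (disjoint support)."""
    assert abs(sum(p) - 1.0) < 1e-9 and abs(sum(q) - 1.0) < 1e-9
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))

# Two well-separated class-conditional histograms of a latent activation:
# high distance suggests the feature discriminates between the classes.
p = [0.8, 0.15, 0.05, 0.0]
q = [0.0, 0.05, 0.15, 0.8]
d_far = hellinger(p, q)    # close to 1: strong separability
d_same = hellinger(p, p)   # 0: no separability
```

Because the distance is bounded in [0, 1], it can be compared across layers and feature coordinates, which is what makes it attractive for tracing discrimination through a network.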
Contour-Aware Multi-Expert Model for Ambiguous Medical Image Segmentation
IEEE transactions on medical imaging Pub Date : 2025-04-15 DOI: 10.1109/TMI.2025.3561117
Jiangnan Wang;Caixia Zhou;Yaping Huang
Abstract: Medical image segmentation is highly challenging due to the uncertainties caused by the inherent ambiguous regions and expert knowledge variations. Some recent works explore the uncertainties and produce multiple outputs to obtain more robust results. However, the quality of the boundary areas remains unsatisfactory. Unfortunately, the key differences among experts usually lie in these boundary areas, which are more critical in practical diagnosis. To tackle the above issues, different from previous pixel-wise segmentation approaches, we present a new perspective and formulate the task as a contour-based regression problem, and further propose a novel Contour-aware Multi-expert Segmentor, named ContourMS, which can provide diverse segmentation results with rich boundary details in a coarse-to-fine manner. Specifically, in the coarse stage, we use a SegmentNet to predict a region mask by leveraging the knowledge of multiple experts, and then the mask is converted to an initial contour shared by all experts. In the fine stage, we design a LatentNet to learn the expert-level latent space and a ContourNet to refine each expert contour, where the deformation guided by the expert style can gradually adjust the contour to match different annotations. Extensive experiments demonstrate that the proposed method can generate diverse segment variants and achieve competitive performance on multiple public multi-expert medical segmentation datasets.
IEEE Transactions on Medical Imaging, vol. 44, no. 8, pp. 3284-3298.
Citations: 0
Freehand Ultrafast Doppler Ultrasound Imaging With Optical Tracking Allows for Detailed 3D Reconstruction of Blood Flow in the Human Brain
IEEE transactions on medical imaging Pub Date : 2025-04-15 DOI: 10.1109/TMI.2025.3559576
Luuk Verhoef;Sadaf Soloukey;Frits Mastik;Bastian S. Generowicz;Eelke M. Bos;Joost W. Schouten;Sebastiaan K. E. Koekkoek;Arnaud J. P. E. Vincent;Stefan Klein;Pieter Kruizinga
Abstract: Ultrafast Doppler ultrasound imaging allows for detailed images of blood flow inside the brain during neurosurgical interventions. In this work, we extend this new imaging technique to geometrically accurate volumetric reconstructions using freehand 2D ultrafast ultrasound acquisitions in conjunction with optical position tracking. We show how the Doppler signal can be derived from a moving freehand ultrasound scan. These filtered 2D images are subsequently mapped onto a shared 3D reference space using a normalized convolution function. The proposed methodology allows for highly detailed volumetric reconstructions of cerebral and tumor blood flow. The dense vascular networks show intriguing blood vessel morphology with vessels down to several hundred micrometers in diameter. By adding patient-co-registered volumetric reconstruction to ultrafast Doppler ultrasound, we have created a 3D intra-operative imaging technique that is unmatched in terms of resolution, ease of use, and visualization capabilities.
IEEE Transactions on Medical Imaging, vol. 44, no. 8, pp. 3125-3138. Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10966005
Citations: 0
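The normalized convolution used to map irregularly placed 2D frames into a shared 3D space can be illustrated in one dimension: accumulate weighted sample values and the weights themselves at each grid point, then divide, so uneven slice spacing does not bias the reconstructed intensities. This is a generic one-dimensional sketch of the operation, not the authors' implementation; the Gaussian kernel and grid are illustrative choices:

```python
import math

def normalized_convolution(samples, grid_size, sigma=1.0):
    """Scatter irregularly placed (position, value) samples onto a regular
    grid. Each sample spreads a Gaussian weight; the grid value is the
    weight-normalized sum, so densely and sparsely sampled regions are
    treated consistently."""
    num = [0.0] * grid_size  # accumulated weight * value
    den = [0.0] * grid_size  # accumulated weight (the "certainty" map)
    for pos, val in samples:
        for i in range(grid_size):
            w = math.exp(-0.5 * ((i - pos) / sigma) ** 2)
            num[i] += w * val
            den[i] += w
    return [n / d if d > 1e-12 else 0.0 for n, d in zip(num, den)]

# Constant-valued samples at uneven positions reconstruct a constant field:
# the normalization removes the sampling-density imprint.
field = normalized_convolution([(1.2, 5.0), (1.9, 5.0), (7.4, 5.0)], 10)
```

In the 3D case the grid is a volume, the positions come from the optical tracker, and the values are the filtered Doppler intensities, but the accumulate-then-normalize structure is the same.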
Uncertainty Propagation From Projections to Region Counts in Tomographic Imaging: Application to Radiopharmaceutical Dosimetry
IEEE transactions on medical imaging Pub Date : 2025-04-14 DOI: 10.1109/TMI.2025.3560330
Lucas Polson;Sara Kurkowska;Chenguang Li;Pedro Esquinas;Peyman Sheikhzadeh;Mehrshad Abbasi;Francois Benard;Carlos Uribe;Arman Rahmim
Abstract: Radiopharmaceutical therapies (RPTs) present a major opportunity to improve cancer therapy. Although many current RPTs use the same injected activity for all patients, there is interest in using absorbed dose measurements to enable personalized prescriptions. However, image-based absorbed dose calculations incur uncertainties from calibration factors, partial volume effects and segmentation methods. While previously published dose estimation protocols incorporate these uncertainties, they do not account for uncertainty that originates from Poisson noise in the projection data that gets propagated through reconstruction algorithms. This effect should be accounted for to adequately estimate the total uncertainty in absorbed dose estimates. This paper proposes a computationally practical algorithm that propagates uncertainty from projection data through clinical reconstruction algorithms to obtain uncertainties on the total activity within volumes of interest (VOIs). The algorithm is first validated on ¹⁷⁷Lu and ²²⁵Ac phantom data by comparing estimated uncertainties from individual SPECT acquisitions to empirical estimates obtained from multiple acquisitions. It is then applied to (i) Monte Carlo and (ii) multi-time point ¹⁷⁷Lu-DOTATATE and ²²⁵Ac-PSMA-617 patient data for time integrated activity (TIA) uncertainty estimation. The outcomes of this work are two-fold: (i) the proposed uncertainty estimation algorithm is validated, and (ii) the propagation of VOI uncertainties to TIA uncertainty is validated with Monte Carlo data and applied to patient data. The proposed algorithm is made publicly available in the open-source image reconstruction library PyTomography and in the SPECT reconstruction extension of 3D Slicer.
IEEE Transactions on Medical Imaging, vol. 44, no. 8, pp. 3233-3244.
Citations: 0
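The quantity being estimated — how Poisson counting noise in the projection data propagates through reconstruction to the uncertainty of total counts in a VOI — can be checked by brute force on a toy problem. The paper derives a computationally practical propagation; the Monte-Carlo sketch below only illustrates the target quantity, using a fixed linear "reconstruction" for simplicity:

```python
import math
import random

def poisson_sample(rng, lam):
    """Knuth's multiplicative method; adequate for modest count rates."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def voi_uncertainty(projection, recon_rows, voi, n_samples=2000, seed=1):
    """Monte-Carlo propagation of projection-space Poisson noise through a
    fixed linear reconstruction (rows of weights) to the mean and standard
    deviation of the total counts inside a volume of interest (VOI)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_samples):
        noisy = [poisson_sample(rng, lam) for lam in projection]
        recon = [sum(w * c for w, c in zip(row, noisy)) for row in recon_rows]
        totals.append(sum(recon[i] for i in voi))
    mean = sum(totals) / n_samples
    std = (sum((t - mean) ** 2 for t in totals) / (n_samples - 1)) ** 0.5
    return mean, std

# Identity "reconstruction": the VOI total is a sum of independent Poisson
# variables, so the propagated std should approach sqrt(100 + 100) ~= 14.1.
mean, std = voi_uncertainty([100.0, 100.0], [[1, 0], [0, 1]], voi=[0, 1])
```

Real reconstruction algorithms are nonlinear and iterative, which is exactly why a dedicated propagation algorithm (rather than this brute-force resampling) is needed in practice.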
DistAL: A Domain-Shift Active Learning Framework With Transferable Feature Learning for Lesion Detection
IEEE transactions on medical imaging Pub Date : 2025-04-14 DOI: 10.1109/TMI.2025.3558861
Fan Bai;Ran Wei;Xiaoyu Bai;Dakai Jin;Xianghua Ye;Le Lu;Ke Yan;Max Q.-H. Meng
Abstract: Deep learning has demonstrated exceptional performance in medical image analysis, but its effectiveness degrades significantly when applied to different medical centers due to domain shifts. Lesion detection, a critical task in medical imaging, is particularly impacted by this challenge due to the diversity and complexity of lesions, which can arise from different organs, diseases, imaging devices, and other factors. While collecting data and labels from target domains is a feasible solution, annotating medical images is often tedious, expensive, and requires professionals. To address this problem, we combine active learning with domain-invariant feature learning. We propose a Domain-shift Active Learning (DistAL) framework, which includes a transferable feature learning algorithm and a hybrid sample selection strategy. Feature learning incorporates contrastive-consistency training to learn discriminative and domain-invariant features. The sample selection strategy is called RUDY, which jointly considers Representativeness, Uncertainty, and DiversitY. Its goal is to select samples from the unlabeled target domain for cost-effective annotation. It first selects representative samples to deal with domain shift, as well as uncertain ones to improve class separability, and then leverages K-means++ initialization to remove redundant candidates to achieve diversity. We evaluate our method for the task of lesion detection. By selecting only 1.7% of samples from the target domain to annotate, DistAL achieves comparable performance to the method trained with all target labels. It outperforms other AL methods in five experiments on eight datasets collected from different hospitals, using different imaging protocols, annotation conventions, and etiologies.
IEEE Transactions on Medical Imaging, vol. 44, no. 7, pp. 3038-3050.
Citations: 0
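The diversity step of RUDY builds on k-means++ initialization, which spreads picks by sampling each new candidate with probability proportional to its squared distance from the nearest already-selected one, so near-duplicates are rarely chosen twice. A generic sketch of that selection rule follows (the feature vectors and setup are illustrative, not from the paper):

```python
import random

def kmeanspp_select(features, k, seed=0):
    """k-means++-style greedy selection used as a diversity filter:
    each new pick is drawn with probability proportional to its squared
    distance from the nearest already-selected candidate. Assumes at
    least k distinct feature vectors exist."""
    rng = random.Random(seed)
    chosen = [rng.randrange(len(features))]

    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    while len(chosen) < k:
        # Distance of every candidate to its nearest chosen sample.
        dists = [min(d2(f, features[c]) for c in chosen) for f in features]
        r = rng.random() * sum(dists)
        acc = 0.0
        for i, d in enumerate(dists):
            acc += d
            if d > 0 and acc >= r:   # d > 0 skips exact duplicates
                chosen.append(i)
                break
    return chosen

# Three duplicated cluster centers: the D^2 weighting forces one pick per
# distinct location, since duplicates of chosen points have zero distance.
feats = [(0, 0), (0, 0), (10, 10), (10, 10), (20, 0), (20, 0)]
picked = {feats[i] for i in kmeanspp_select(feats, 3)}
```

In an active-learning loop the `features` would be latent embeddings of the representative-and-uncertain candidates, and the selected indices go to annotators.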
PFCM: Poisson Flow Consistency Models for Low-Dose CT Image Denoising
IEEE transactions on medical imaging Pub Date : 2025-04-11 DOI: 10.1109/TMI.2025.3558019
Dennis Hein;Grant Stevens;Adam Wang;Ge Wang
Abstract: X-ray computed tomography (CT) is widely used for medical diagnosis and treatment planning; however, concerns about ionizing radiation exposure drive efforts to optimize image quality at lower doses. This study introduces Poisson Flow Consistency Models (PFCM), a novel family of deep generative models that combines the robustness of PFGM++ with the efficient single-step sampling of consistency models. PFCM are derived by generalizing consistency distillation to PFGM++ through a change-of-variables and an updated noise distribution. As a distilled version of PFGM++, PFCM inherit the ability to trade off robustness for rigidity via the hyperparameter D ∈ (0, ∞), a fact that we exploit to adapt this novel generative model for the task of low-dose CT image denoising, via a "task-specific" sampler that "hijacks" the generative process by replacing an intermediate state with the low-dose CT image. While this "hijacking" introduces a severe mismatch—the noise characteristics of low-dose CT images are different from those of intermediate states in the Poisson flow process—we show that the inherent robustness of PFCM at small D effectively mitigates this issue. The resulting sampler achieves excellent performance in terms of LPIPS, SSIM, and PSNR on the Mayo low-dose CT dataset. By contrast, an analogous sampler based on standard consistency models is found to be significantly less robust under the same conditions, highlighting the importance of the tunable D afforded by our novel framework. To highlight generalizability, we show effective denoising of clinical images from a prototype photon-counting system reconstructed using a sharper kernel and at a range of energy levels.
IEEE Transactions on Medical Imaging, vol. 44, no. 7, pp. 2989-3001.
Citations: 0
Large Language Model With Region-Guided Referring and Grounding for CT Report Generation
IEEE transactions on medical imaging Pub Date : 2025-04-11 DOI: 10.1109/TMI.2025.3559923
Zhixuan Chen;Yequan Bie;Haibo Jin;Hao Chen
Abstract: Computed tomography (CT) report generation is crucial to assist radiologists in interpreting CT volumes, which can be time-consuming and labor-intensive. Existing methods primarily consider only the global features of the entire volume, making it a struggle to focus on specific regions and potentially missing abnormalities. To address this issue, we propose Reg2RG, the first region-guided referring and grounding framework for CT report generation, which enhances diagnostic performance by focusing on anatomical regions within the volume. Specifically, we utilize masks from a universal segmentation module to capture local features for each referring region. A local feature decoupling (LFD) strategy is proposed to preserve the local high-resolution details with little computational overhead. Then the local features are integrated with global features to capture inter-regional relationships within a cohesive context. Moreover, we propose a novel region-report alignment (RRA) training strategy. It leverages the recognition of referring regions to guide the generation of region-specific reports, enhancing the model's referring and grounding capabilities while also improving the report's interpretability. A large language model (LLM) is further employed as the language decoder to generate reports from integrated visual features, facilitating region-level comprehension. Extensive experiments on two large-scale chest CT-report datasets demonstrate the superiority of our method, which outperforms several state-of-the-art methods in terms of both natural language generation and clinical efficacy metrics while preserving promising interpretability. The code is available at https://github.com/zhi-xuan-chen/Reg2RG.
IEEE Transactions on Medical Imaging, vol. 44, no. 8, pp. 3139-3150.
Citations: 0
Information-Theoretic Analysis of Multimodal Image Translation
IEEE transactions on medical imaging Pub Date : 2025-04-10 DOI: 10.1109/TMI.2025.3559823
Ruihao Liu;Yudu Li;Yao Li;Yiping P. Du;Zhi-Pei Liang
Abstract: Multimodal image translation has found useful applications in solving several medical imaging problems. In this paper, we present a systematic analysis of multimodal images and machine learning-based image translation from an information-theoretic perspective. Specifically, we analyzed the amount of mutual information that exists in some commonly used multimodal images. This analysis revealed varying structural correlation across modalities and tissue-dependence of mutual information. We also analyzed the amount of information transferred and gained in multimodal image translation and provided an upper bound on the information gain. Information-theoretic measures were also proposed to assess the effectiveness of an image translator and the uncertainty associated with image translation. Numerical results were presented to demonstrate the information gain in practical multimodal image translation, and to validate the proposed upper bound on information gain and the translation error predictor. Finally, several potential applications of our analysis results were discussed, including image denoising and reconstruction using side information generated by image translation. The findings from this study may prove useful for guiding the further development and application of multimodal image translation.
IEEE Transactions on Medical Imaging, vol. 44, no. 8, pp. 3210-3221. Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10962263
Citations: 0
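Mutual information between co-registered modality pairs, the central quantity in such an analysis, is commonly estimated from a joint intensity histogram: bin both images, form the joint distribution, and compare it against the product of the marginals. A minimal sketch under that standard estimator (the binning choices and test data are illustrative, not from the paper):

```python
import math

def mutual_information(xs, ys, bins=8):
    """Estimate mutual information (in bits) between two co-registered
    intensity sequences via a joint histogram."""
    n = len(xs)

    def bin_of(v, lo, hi):
        if hi == lo:
            return 0
        return min(bins - 1, int((v - lo) / (hi - lo) * bins))

    lo_x, hi_x, lo_y, hi_y = min(xs), max(xs), min(ys), max(ys)
    joint = {}
    for a, b in zip(xs, ys):
        key = (bin_of(a, lo_x, hi_x), bin_of(b, lo_y, hi_y))
        joint[key] = joint.get(key, 0) + 1
    px, py = {}, {}
    for (i, j), c in joint.items():
        px[i] = px.get(i, 0) + c
        py[j] = py.get(j, 0) + c
    return sum(
        (c / n) * math.log2((c / n) / ((px[i] / n) * (py[j] / n)))
        for (i, j), c in joint.items()
    )

# One modality fully determined by the other: MI equals the entropy
# (3 bits for 8 equiprobable levels). A constant image shares nothing.
xs = [v for v in range(8) for _ in range(10)]
mi_dep = mutual_information(xs, xs)
mi_indep = mutual_information(xs, [0.0] * len(xs))
```

The same estimator applied per tissue class is one way the tissue-dependence of mutual information can be probed.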
Improved Spiral Projection MR Fingerprinting via Memory-Efficient Synergic Optimization of 3D Spiral Trajectory, Image Reconstruction and Parameter Estimation (SOTIP)
IEEE transactions on medical imaging Pub Date : 2025-04-10 DOI: 10.1109/TMI.2025.3559467
Jiaren Zou;Yun Jiang;Sydney Kaplan;Nicole Seiberlich;Yue Cao
Abstract: This work aims to improve scan efficiency and overcome computational challenges in high-resolution MR fingerprinting (MRF) with a full 3D spiral trajectory by developing a computationally efficient model-based deep learning (MBDL) image reconstruction framework and a joint optimization framework of image reconstruction, quantitative parameter estimation and k-space sampling trajectory. A parameter estimation loss was used to jointly optimize the image reconstruction and parameter quantification networks. Also, data-driven optimization of the rotation angles of full 3D spiral trajectories, through learning anatomy-specific spatiotemporal sparsity of the MRF data, was performed jointly with image reconstruction network training. The MBDL image reconstruction was evaluated using simulated and in vivo MRF data acquired in healthy subjects and patients and compared with a locally low rank (LLR) iterative reconstruction. Whole-brain, 1-mm isotropic, T1 and T2 image volumes reconstructed by the MBDL improved normalized root mean squared errors (NRMSEs) of the parameters (by up to 30%) and reduced reconstruction time (by up to 65-fold) compared with the LLR reconstruction from both simulated and in vivo MRF data of 2-min and 1-min scans. Joint optimization of image-parameter reconstruction or sampling trajectory-image reconstruction further improved NRMSEs of T1 and T2 significantly over the baseline MBDL reconstruction (p<0.05) on simulated data. This work develops a generic, end-to-end framework to improve parameter quantification accuracy and shorten reconstruction time of 3D quantitative MRI by joint optimization of image reconstruction, parameter reconstruction and sampling trajectory with minimal computation and time demand.
IEEE Transactions on Medical Imaging, vol. 44, no. 8, pp. 3185-3195.
Citations: 0