Medical image analysis: Latest Articles

A lung structure and function information-guided residual diffusion model for predicting idiopathic pulmonary fibrosis progression
IF 10.7 · Q1 (Medicine)
Medical image analysis · Pub Date: 2025-04-26 · DOI: 10.1016/j.media.2025.103604
Caiwen Jiang, Xiaodan Xing, Yang Nan, Yingying Fang, Sheng Zhang, Simon Walsh, Guang Yang, Dinggang Shen
{"title":"A lung structure and function information-guided residual diffusion model for predicting idiopathic pulmonary fibrosis progression","authors":"Caiwen Jiang ,&nbsp;Xiaodan Xing ,&nbsp;Yang Nan ,&nbsp;Yingying Fang ,&nbsp;Sheng Zhang ,&nbsp;Simon Walsh ,&nbsp;Guang Yang ,&nbsp;Dinggang Shen","doi":"10.1016/j.media.2025.103604","DOIUrl":"10.1016/j.media.2025.103604","url":null,"abstract":"<div><div>Idiopathic Pulmonary Fibrosis (IPF) is a progressive lung disease that continuously scars and thickens lung tissue, leading to respiratory difficulties. Timely assessment of IPF progression is essential for developing treatment plans and improving patient survival rates. However, current clinical standards require multiple (usually two) CT scans at certain intervals to assess disease progression. This presents a dilemma: <em>the disease progression is identified only after the disease has already progressed</em>. To address this issue, a feasible solution is to generate the follow-up CT image from the patient’s initial CT image to achieve early prediction of IPF. To this end, we propose a lung structure and function information-guided residual diffusion model. The key components of our model include (1) using a 2.5D generation strategy to reduce computational cost of generating 3D images with the diffusion model; (2) designing structural attention to mitigate negative impact of spatial misalignment between the two CT images on generation performance; (3) employing residual diffusion to accelerate model training and inference while focusing more on differences between the two CT images (i.e., the lesion areas); and (4) developing a CLIP-based text extraction module to extract lung function test information and further using such extracted information to guide the generation. Extensive experiments demonstrate that our method can effectively predict IPF progression and achieve superior generation performance compared to state-of-the-art methods.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"103 ","pages":"Article 103604"},"PeriodicalIF":10.7,"publicationDate":"2025-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143890511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
“Recon-all-clinical”: Cortical surface reconstruction and analysis of heterogeneous clinical brain MRI
IF 10.7 · Q1 (Medicine)
Medical image analysis · Pub Date: 2025-04-26 · DOI: 10.1016/j.media.2025.103608
Karthik Gopinath, Douglas N. Greve, Colin Magdamo, Steve Arnold, Sudeshna Das, Oula Puonti, Juan Eugenio Iglesias, Alzheimer’s Disease Neuroimaging Initiative
{"title":"“Recon-all-clinical”: Cortical surface reconstruction and analysis of heterogeneous clinical brain MRI","authors":"Karthik Gopinath ,&nbsp;Douglas N. Greve ,&nbsp;Colin Magdamo ,&nbsp;Steve Arnold ,&nbsp;Sudeshna Das ,&nbsp;Oula Puonti ,&nbsp;Juan Eugenio Iglesias ,&nbsp;Alzheimer’s Disease Neuroimaging Initiative","doi":"10.1016/j.media.2025.103608","DOIUrl":"10.1016/j.media.2025.103608","url":null,"abstract":"<div><div>Surface-based analysis of the cerebral cortex is ubiquitous in human neuroimaging with MRI. It is crucial for tasks like cortical registration, parcellation, and thickness estimation. Traditionally, such analyses require high-resolution, isotropic scans with good gray–white matter contrast, typically a T1-weighted scan with 1 mm resolution. This requirement precludes application of these techniques to most MRI scans acquired for clinical purposes, since they are often anisotropic and lack the required T1-weighted contrast. To overcome this limitation and enable large-scale neuroimaging studies using vast amounts of existing clinical data, we introduce <em>recon-all-clinical</em>, a novel methodology for cortical reconstruction, registration, parcellation, and thickness estimation for clinical brain MRI scans of any resolution and contrast. Our approach employs a hybrid analysis method that combines a convolutional neural network (CNN) trained with domain randomization to predict signed distance functions (SDFs), and classical geometry processing for accurate surface placement while maintaining topological and geometric constraints. The method does not require retraining for different acquisitions, thus simplifying the analysis of heterogeneous clinical datasets. We evaluated <em>recon-all-clinical</em> on multiple public datasets like ADNI, HCP, AIBL, OASIS and including a large clinical dataset of over 9,500 scans. The results indicate that our method produces geometrically precise cortical reconstructions across different MRI contrasts and resolutions, consistently achieving high accuracy in parcellation. Cortical thickness estimates are precise enough to capture aging effects, independently of MRI contrast, even though accuracy varies with slice thickness. Our method is publicly available at <span><span>https://surfer.nmr.mgh.harvard.edu/fswiki/recon-all-clinical</span><svg><path></path></svg></span>, enabling researchers to perform detailed cortical analysis on the huge amounts of already existing clinical MRI scans. This advancement may be particularly valuable for studying rare diseases and underrepresented populations where research-grade MRI data is scarce.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"103 ","pages":"Article 103608"},"PeriodicalIF":10.7,"publicationDate":"2025-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143881698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ProtoASNet: Comprehensive evaluation and enhanced performance with uncertainty estimation for aortic stenosis classification in echocardiography
IF 10.7 · Q1 (Medicine)
Medical image analysis · Pub Date: 2025-04-24 · DOI: 10.1016/j.media.2025.103600
Ang Nan Gu, Hooman Vaseli, Michael Y. Tsang, Victoria Wu, S. Neda Ahmadi Amiri, Nima Kondori, Andrea Fung, Teresa S.M. Tsang, Purang Abolmaesumi
{"title":"ProtoASNet: Comprehensive evaluation and enhanced performance with uncertainty estimation for aortic stenosis classification in echocardiography","authors":"Ang Nan Gu ,&nbsp;Hooman Vaseli ,&nbsp;Michael Y. Tsang ,&nbsp;Victoria Wu ,&nbsp;S. Neda Ahmadi Amiri ,&nbsp;Nima Kondori ,&nbsp;Andrea Fung ,&nbsp;Teresa S.M. Tsang ,&nbsp;Purang Abolmaesumi","doi":"10.1016/j.media.2025.103600","DOIUrl":"10.1016/j.media.2025.103600","url":null,"abstract":"<div><div>Aortic stenosis (AS) is a prevalent heart valve disease that requires accurate and timely diagnosis for effective treatment. Current methods for automated AS severity classification rely on black-box deep learning techniques, which suffer from a low level of trustworthiness and hinder clinical adoption. To tackle this challenge, we propose ProtoASNet, a prototype-based neural network designed to classify the severity of AS from B-mode echocardiography videos. ProtoASNet bases its predictions exclusively on the similarity scores between the input and a set of learned spatio-temporal prototypes, ensuring inherent interpretability. Users can directly visualize the similarity between the input and each prototype, as well as the weighted sum of similarities. This approach provides clinically relevant evidence for each prediction, as the prototypes typically highlight markers such as calcification and restricted movement of aortic valve leaflets. Moreover, ProtoASNet utilizes abstention loss to estimate aleatoric uncertainty by defining a set of prototypes that capture ambiguity and insufficient information in the observed data. This feature augments prototype-based models with the ability to explain when they may fail. We evaluate ProtoASNet on a private dataset and the publicly available TMED-2 dataset. It surpasses existing state-of-the-art methods, achieving a balanced accuracy of 80.0% on our private dataset and 79.7% on the TMED-2 dataset, respectively. By discarding cases flagged as uncertain, ProtoASNet achieves an improved balanced accuracy of 82.4% on our private dataset. Furthermore, by offering interpretability and an uncertainty measure for each prediction, ProtoASNet improves transparency and facilitates the interactive usage of deep networks in aiding clinical decision-making. Our source code is available at: <span><span>https://github.com/hooman007/ProtoASNet</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"103 ","pages":"Article 103600"},"PeriodicalIF":10.7,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143903454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
NuHTC: A hybrid task cascade for nuclei instance segmentation and classification
IF 10.7 · Q1 (Medicine)
Medical image analysis · Pub Date: 2025-04-23 · DOI: 10.1016/j.media.2025.103595
Bao Li, Zhenyu Liu, Song Zhang, Xiangyu Liu, Caixia Sun, Jiangang Liu, Bensheng Qiu, Jie Tian
{"title":"NuHTC: A hybrid task cascade for nuclei instance segmentation and classification","authors":"Bao Li ,&nbsp;Zhenyu Liu ,&nbsp;Song Zhang ,&nbsp;Xiangyu Liu ,&nbsp;Caixia Sun ,&nbsp;Jiangang Liu ,&nbsp;Bensheng Qiu ,&nbsp;Jie Tian","doi":"10.1016/j.media.2025.103595","DOIUrl":"10.1016/j.media.2025.103595","url":null,"abstract":"<div><div>Nuclei instance segmentation and classification of hematoxylin and eosin (H&amp;E) stained digital pathology images are essential for further downstream cancer diagnosis and prognosis tasks. Previous works mainly focused on bottom-up methods using a single-level feature map for segmenting nuclei instances, while multilevel feature maps seemed to be more suitable for nuclei instances with various sizes and types. In this paper, we develop an effective top-down nuclei instance segmentation and classification framework (NuHTC) based on a hybrid task cascade (HTC). The NuHTC has two new components: a watershed proposal network (WSPN) and a hybrid feature extractor (HFE). The WSPN can provide additional proposals for the region proposal network which leads the model to predict bounding boxes more precisely. The HFE at the region of interest (RoI) alignment stage can better utilize both the high-level global and the low-level semantic features. It can guide NuHTC to learn nuclei instance features with less intraclass variance. We conduct extensive experiments using our method in four public multiclass nuclei instance segmentation datasets. The quantitative results of NuHTC demonstrate its superiority in both instance segmentation and classification compared to other state-of-the-art methods.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"103 ","pages":"Article 103595"},"PeriodicalIF":10.7,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143876511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MVNMF: Multiview nonnegative matrix factorization for radio-multigenomic analysis in breast cancer prognosis
IF 10.7 · Q1 (Medicine)
Medical image analysis · Pub Date: 2025-04-22 · DOI: 10.1016/j.media.2025.103566
Jian Guan, Ming Fan, Lihua Li
{"title":"MVNMF: Multiview nonnegative matrix factorization for radio-multigenomic analysis in breast cancer prognosis","authors":"Jian Guan ,&nbsp;Ming Fan ,&nbsp;Lihua Li","doi":"10.1016/j.media.2025.103566","DOIUrl":"10.1016/j.media.2025.103566","url":null,"abstract":"<div><div>Radiogenomic research provides a deeper understanding of breast cancer biology by investigating the correlations between imaging phenotypes and genetic data. However, current radiogenomic research primarily focuses on the correlation between imaging phenotypes and single-genomic data (e.g., gene expression data), overlooking the potential of multi-genomics data to unveil more nuances in cancer characterization. To this end, we propose a multiview nonnegative matrix factorization (MVNMF) method for the radio-multigenomic analysis that identifies dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) features associated with multi-genomics data, including DNA copy number alterations, mutations, and mRNAs, each of which is independently predictive of cancer outcomes. MVNMF incorporates subspace learning and multiview regularization into a unified model to simultaneously select features and explore correlations. Subspace learning is utilized to identify representative radiomic features crucial for tumor analysis, while multiview regularization enables the learning of the correlation between the identified radiomic features and multi-genomics data. Experimental results showed that, for overall survival prediction in breast cancer, MVNMF classified patients into two distinct groups characterized by significant differences in survival (p = 0.0012). Furthermore, it achieved better performance with a C-index of 0.698 compared to the method without considering any genomics data (C-index = 0.528). MVNMF is an effective framework for identifying radiomic features linked to multi-genomics data, which improves its predictive power and provides a better understanding of the biological mechanisms underlying observed phenotypes. MVNMF offers a novel framework for prognostic prediction in breast cancer, with the potential to catalyze further radiogenomic/radio-multigenomic studies.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"103 ","pages":"Article 103566"},"PeriodicalIF":10.7,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143876509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SynMSE: A multimodal similarity evaluator for complex distribution discrepancy in unsupervised deformable multimodal medical image registration
IF 10.7 · Q1 (Medicine)
Medical image analysis · Pub Date: 2025-04-22 · DOI: 10.1016/j.media.2025.103620
Jingke Zhu, Boyun Zheng, Bing Xiong, Yuxin Zhang, Ming Cui, Deyu Sun, Jing Cai, Yaoqin Xie, Wenjian Qin
{"title":"SynMSE: A multimodal similarity evaluator for complex distribution discrepancy in unsupervised deformable multimodal medical image registration","authors":"Jingke Zhu ,&nbsp;Boyun Zheng ,&nbsp;Bing Xiong ,&nbsp;Yuxin Zhang ,&nbsp;Ming Cui ,&nbsp;Deyu Sun ,&nbsp;Jing Cai ,&nbsp;Yaoqin Xie ,&nbsp;Wenjian Qin","doi":"10.1016/j.media.2025.103620","DOIUrl":"10.1016/j.media.2025.103620","url":null,"abstract":"<div><div>Unsupervised deformable multimodal medical image registration often confronts complex scenarios, which include intermodality domain gaps, multi-organ anatomical heterogeneity, and physiological motion variability. These factors introduce substantial grayscale distribution discrepancies, hindering precise alignment between different imaging modalities. However, existing methods have not been sufficiently adapted to meet the specific demands of registration in such complex scenarios. To overcome the above challenges, we propose SynMSE, a novel multimodal similarity evaluator that can be seamlessly integrated as a plug-and-play module in any registration framework to serve as the similarity metric. SynMSE is trained using random transformations to simulate spatial misalignments and a structure-constrained generator to model grayscale distribution discrepancies. By emphasizing spatial alignment and mitigating the influence of complex distributional variations, SynMSE effectively addresses the aforementioned issues. Extensive experiments on the Learn2Reg 2022 CT-MR abdomen dataset, the clinical cervical CT-MR dataset, and the CuRIOUS MR-US brain dataset demonstrate that SynMSE achieves state-of-the-art performance. Our code is available on the project page <span><span>https://github.com/MIXAILAB/SynMSE</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"103 ","pages":"Article 103620"},"PeriodicalIF":10.7,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143890510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cycle-conditional diffusion model for noise correction of diffusion-weighted images using unpaired data
IF 10.7 · Q1 (Medicine)
Medical image analysis · Pub Date: 2025-04-21 · DOI: 10.1016/j.media.2025.103579
Pengli Zhu, Chaoqiang Liu, Yingji Fu, Nanguang Chen, Anqi Qiu
{"title":"Cycle-conditional diffusion model for noise correction of diffusion-weighted images using unpaired data","authors":"Pengli Zhu ,&nbsp;Chaoqiang Liu ,&nbsp;Yingji Fu ,&nbsp;Nanguang Chen ,&nbsp;Anqi Qiu","doi":"10.1016/j.media.2025.103579","DOIUrl":"10.1016/j.media.2025.103579","url":null,"abstract":"<div><div>Diffusion-weighted imaging (DWI) is a key modality for studying brain microstructure, but its signals are highly susceptible to noise due to the thermal motion of water molecules and interactions with tissue microarchitecture, leading to significant signal attenuation and a low signal-to-noise ratio (SNR). In this paper, we propose a novel approach, a Cycle-Conditional Diffusion Model (Cycle-CDM) using unpaired data learning, aimed at improving DWI quality and reliability through noise correction. Cycle-CDM leverages a cycle-consistent translation architecture to bridge the domain gap between noise-contaminated and noise-free DWIs, enabling the restoration of high-quality images without requiring paired datasets. By utilizing two conditional diffusion models, Cycle-CDM establishes data interrelationships between the two types of DWIs, while incorporating synthesized anatomical priors from the cycle translation process to guide noise removal. In addition, we introduce specific constraints to preserve anatomical fidelity, allowing Cycle-CDM to effectively learn the underlying noise distribution and achieve accurate denoising. Our experiments conducted on simulated datasets, as well as children and adolescents’ datasets with strong clinical relevance. Our results demonstrate that Cycle-CDM outperforms comparative methods, such as U-Net, CycleGAN, Pix2Pix, MUNIT and MPPCA, in terms of noise correction performance. We demonstrated that Cycle-CDM can be generalized to DWIs with head motion when they were acquired using different MRI scannsers. Importantly, the denoised DWI data produced by Cycle-CDM exhibit accurate preservation of underlying tissue microstructure, thus substantially improving their medical applicability.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"103 ","pages":"Article 103579"},"PeriodicalIF":10.7,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143863787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Uncertainty mapping and probabilistic tractography using Simulation-based Inference in diffusion MRI: A comparison with classical Bayes
IF 10.7 · Q1 (Medicine)
Medical image analysis · Pub Date: 2025-04-20 · DOI: 10.1016/j.media.2025.103580
J.P. Manzano-Patrón, Michael Deistler, Cornelius Schröder, Theodore Kypraios, Pedro J. Gonçalves, Jakob H. Macke, Stamatios N. Sotiropoulos
{"title":"Uncertainty mapping and probabilistic tractography using Simulation-based Inference in diffusion MRI: A comparison with classical Bayes","authors":"J.P. Manzano-Patrón ,&nbsp;Michael Deistler ,&nbsp;Cornelius Schröder ,&nbsp;Theodore Kypraios ,&nbsp;Pedro J. Gonçalves ,&nbsp;Jakob H. Macke ,&nbsp;Stamatios N. Sotiropoulos","doi":"10.1016/j.media.2025.103580","DOIUrl":"10.1016/j.media.2025.103580","url":null,"abstract":"<div><div>Simulation-Based Inference (SBI) has recently emerged as a powerful framework for Bayesian inference: Neural networks are trained on simulations from a forward model, and learn to rapidly estimate posterior distributions. We here present an SBI framework for parametric spherical deconvolution of diffusion MRI data of the brain. We demonstrate its utility for estimating white matter fibre orientations, mapping uncertainty of voxel-based estimates and performing probabilistic tractography by spatially propagating fibre orientation uncertainty. We conduct an extensive comparison against established Bayesian methods based on Markov-Chain Monte-Carlo (MCMC) and find that: a) in-silico training can lead to calibrated SBI networks with accurate parameter estimates and uncertainty mapping for both single- and multi-shell diffusion MRI, b) SBI allows amortised inference of the posterior distribution of model parameters given unseen observations, which is orders of magnitude faster than MCMC, c) SBI-based tractography yields reconstructions that have a high level of agreement with their MCMC-based counterparts, equal to or higher than scan-rescan reproducibility of estimates. We further demonstrate how SBI design considerations (such as dealing with noise, defining priors and handling model selection) can affect performance, allowing us to identify optimal practices. Taken together, our results show that SBI provides a powerful alternative to classical Bayesian inference approaches for fast and accurate model estimation and uncertainty mapping in MRI.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"103 ","pages":"Article 103580"},"PeriodicalIF":10.7,"publicationDate":"2025-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143887921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
3DGR-CT: Sparse-view CT reconstruction with a 3D Gaussian representation
IF 10.7 · Q1 (Medicine)
Medical image analysis · Pub Date: 2025-04-20 · DOI: 10.1016/j.media.2025.103585
Yingtai Li, Xueming Fu, Han Li, Shang Zhao, Ruiyang Jin, S. Kevin Zhou
{"title":"3DGR-CT: Sparse-view CT reconstruction with a 3D Gaussian representation","authors":"Yingtai Li ,&nbsp;Xueming Fu ,&nbsp;Han Li ,&nbsp;Shang Zhao ,&nbsp;Ruiyang Jin ,&nbsp;S. Kevin Zhou","doi":"10.1016/j.media.2025.103585","DOIUrl":"10.1016/j.media.2025.103585","url":null,"abstract":"<div><div>Sparse-view computed tomography (CT) reduces radiation exposure by acquiring fewer projections, making it a valuable tool in clinical scenarios where low-dose radiation is essential. However, this often results in increased noise and artifacts due to limited data. In this paper we propose a novel 3D Gaussian representation (3DGR) based method for sparse-view CT reconstruction. Inspired by recent success in novel view synthesis driven by 3D Gaussian splatting, we leverage the efficiency and expressiveness of 3D Gaussian representation as an alternative to implicit neural representation. To unleash the potential of 3DGR for CT imaging scenario, we propose two key innovations: (i) FBP-image-guided Guassian initialization and (ii) efficient integration with a differentiable CT projector. Extensive experiments and ablations on diverse datasets demonstrate the proposed 3DGR-CT consistently outperforms state-of-the-art counterpart methods, achieving higher reconstruction accuracy with faster convergence. Furthermore, we showcase the potential of 3DGR-CT for real-time physical simulation, which holds important clinical applications while challenging for implicit neural representations. Code available at: <span><span>https://github.com/SigmaLDC/3DGR-CT</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"103 ","pages":"Article 103585"},"PeriodicalIF":10.7,"publicationDate":"2025-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143869711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
One for multiple: Physics-informed synthetic data boosts generalizable deep learning for fast MRI reconstruction
IF 10.7 · Q1 (Medicine)
Medical image analysis · Pub Date: 2025-04-20 · DOI: 10.1016/j.media.2025.103616
Zi Wang, Xiaotong Yu, Chengyan Wang, Weibo Chen, Jiazheng Wang, Ying-Hua Chu, Hongwei Sun, Rushuai Li, Peiyong Li, Fan Yang, Haiwei Han, Taishan Kang, Jianzhong Lin, Chen Yang, Shufu Chang, Zhang Shi, Sha Hua, Yan Li, Juan Hu, Liuhong Zhu, Xiaobo Qu
{"title":"One for multiple: Physics-informed synthetic data boosts generalizable deep learning for fast MRI reconstruction","authors":"Zi Wang ,&nbsp;Xiaotong Yu ,&nbsp;Chengyan Wang ,&nbsp;Weibo Chen ,&nbsp;Jiazheng Wang ,&nbsp;Ying-Hua Chu ,&nbsp;Hongwei Sun ,&nbsp;Rushuai Li ,&nbsp;Peiyong Li ,&nbsp;Fan Yang ,&nbsp;Haiwei Han ,&nbsp;Taishan Kang ,&nbsp;Jianzhong Lin ,&nbsp;Chen Yang ,&nbsp;Shufu Chang ,&nbsp;Zhang Shi ,&nbsp;Sha Hua ,&nbsp;Yan Li ,&nbsp;Juan Hu ,&nbsp;Liuhong Zhu ,&nbsp;Xiaobo Qu","doi":"10.1016/j.media.2025.103616","DOIUrl":"10.1016/j.media.2025.103616","url":null,"abstract":"<div><div>Magnetic resonance imaging (MRI) is a widely used radiological modality renowned for its radiation-free, comprehensive insights into the human body, facilitating medical diagnoses. However, the drawback of prolonged scan times hinders its accessibility. The k-space undersampling offers a solution, yet the resultant artifacts necessitate meticulous removal during image reconstruction. Although deep learning (DL) has proven effective for fast MRI image reconstruction, its broader applicability across various imaging scenarios has been constrained. Challenges include the high cost and privacy restrictions associated with acquiring large-scale, diverse training data, coupled with the inherent difficulty of addressing mismatches between training and target data in existing DL methodologies. Here, we present a novel Physics-Informed Synthetic data learning Framework for fast MRI, called PISF. PISF marks a breakthrough by enabling generalizable DL for multi-scenario MRI reconstruction through a single trained model. Our approach separates the reconstruction of a 2D image into many 1D basic problems, commencing with 1D data synthesis to facilitate generalization. We demonstrate that training DL models on synthetic data, coupled with enhanced learning techniques, yields <em>in vivo</em> MRI reconstructions comparable to or surpassing those of models trained on matched realistic datasets, reducing the reliance on real-world MRI data by up to 96 %. With a single trained model, our PISF supports the high-quality reconstruction under 4 sampling patterns, 5 anatomies, 6 contrasts, 5 vendors, and 7 centers, exhibiting remarkable generalizability. Its adaptability to 2 neuro and 2 cardiovascular patient populations has been validated through evaluations by 10 experienced medical professionals. In summary, PISF presents a feasible and cost-effective way to significantly boost the widespread adoption of DL in various fast MRI applications.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"103 ","pages":"Article 103616"},"PeriodicalIF":10.7,"publicationDate":"2025-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143869709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0