Medical image analysis: Latest Articles

Exploring the values underlying machine learning research in medical image analysis
IF 10.7 · Zone 1 · Medicine
Medical image analysis · Pub Date: 2025-02-25 · DOI: 10.1016/j.media.2025.103494
John S.H. Baxter, Roy Eagleson
Machine learning has emerged as a crucial tool for medical image analysis, largely due to recent developments in deep artificial neural networks addressing numerous, diverse clinical problems. As with any conceptual tool, the effective use of machine learning should be predicated on an understanding of its underlying motivations just as much as its algorithms or theory; to do so, we need to explore its philosophical foundations. One of these foundations is an understanding of how values, despite being non-empirical, nevertheless affect scientific research. This article has three goals: to introduce the reader to values in a way that is specific to medical image analysis; to characterise a particular set of technical decisions, what we call the end-to-end vs. separable learning spectrum, that are fundamental to machine learning for medical image analysis; and to present a simple, structured method for rigorously connecting these values to these technical decisions. This better understanding shows how the philosophy of science can clarify fundamental elements of how medical image analysis research is performed and how it can be improved.
Citations: 0
MSTNet: Multi-scale spatial-aware transformer with multi-instance learning for diabetic retinopathy classification
IF 10.7 · Zone 1 · Medicine
Medical image analysis · Pub Date: 2025-02-24 · DOI: 10.1016/j.media.2025.103511
Xin Wei, Yanbei Liu, Fang Zhang, Lei Geng, Chunyan Shan, Xiangyu Cao, Zhitao Xiao
Diabetic retinopathy (DR), the leading cause of vision loss among diabetic adults worldwide, underscores the importance of early detection and timely treatment using fundus images. However, existing deep learning methods struggle to capture the correlation and contextual information of subtle lesion features at the current scale of available datasets. To this end, we propose a novel Multi-scale Spatial-aware Transformer Network (MSTNet) for DR classification. MSTNet encodes information from image patches at varying scales as input features, constructing a dual-pathway backbone network comprising two Transformer encoders of different sizes to extract both local details and global context from images. To fully leverage structural prior knowledge, we introduce a Spatial-aware Module (SAM) to capture spatial local information within the images. Furthermore, considering the differences between medical and natural images, specifically that regions of interest in medical images often lack distinct subjectivity and continuity, we employ a Multiple Instance Learning (MIL) strategy to aggregate features from diverse regions, thereby strengthening the correlation with subtle lesion areas. Ultimately, a cross-fusion classifier integrates the dual-pathway features to produce the final classification result. We evaluate MSTNet on four public DR datasets: APTOS2019, RFMiD2020, Messidor, and IDRiD. Extensive experiments demonstrate that MSTNet achieves superior diagnostic and grading accuracy, with improvements of up to 2.0% in ACC and 1.2% in F1 score, highlighting its effectiveness in accurately assessing fundus images.
Citations: 0
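The MIL aggregation described above can be illustrated with a minimal attention-pooling sketch. This is a generic attention-based MIL formulation in NumPy, not MSTNet's actual module; all names, shapes, and parameter values here are hypothetical:

```python
import numpy as np

def attention_mil_pool(patches, w1, w2):
    """Attention-based MIL pooling: score each patch embedding with a
    small two-layer network, softmax the scores over the bag, and
    return the weighted sum as the bag-level representation."""
    scores = np.tanh(patches @ w1) @ w2        # (num_patches,)
    a = np.exp(scores - scores.max())
    a /= a.sum()                               # attention weights, sum to 1
    return a @ patches                         # (dim,) bag embedding

rng = np.random.default_rng(0)
patches = rng.normal(size=(16, 64))  # 16 hypothetical patch embeddings
w1 = rng.normal(size=(64, 32))       # hypothetical learned parameters
w2 = rng.normal(size=(32,))
bag = attention_mil_pool(patches, w1, w2)
assert bag.shape == (64,)
```

Patches receiving high attention scores dominate the bag embedding, which is how a few subtle lesion regions can drive the whole-image classification.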
Joint modeling histology and molecular markers for cancer classification
IF 10.7 · Zone 1 · Medicine
Medical image analysis · Pub Date: 2025-02-22 · DOI: 10.1016/j.media.2025.103505
Xiaofei Wang, Hanyu Liu, Yupei Zhang, Boyang Zhao, Hao Duan, Wanming Hu, Yonggao Mou, Stephen Price, Chao Li
Cancers are characterized by remarkable heterogeneity and diverse prognoses. Accurate cancer classification is essential for patient stratification and clinical decision-making. Although digital pathology has been advancing cancer diagnosis and prognosis, the paradigm in cancer pathology has shifted from relying purely on histology features to incorporating molecular markers, and there is an urgent need for digital pathology methods that meet the needs of this new paradigm. We introduce a novel digital pathology approach to jointly predict molecular markers and histology features and to model their interactions for cancer classification. First, to mitigate the challenge of cross-magnification information propagation, we propose a multi-scale disentangling module, enabling the extraction of multi-scale features from high-magnification (cellular-level) to low-magnification (tissue-level) whole slide images. Based on these multi-scale features, we propose an attention-based hierarchical multi-task multi-instance learning framework to simultaneously predict histology and molecular markers. Moreover, we propose a co-occurrence probability-based label correlation graph network to model the co-occurrence of molecular markers. Lastly, we design a cross-modal interaction module with a dynamic confidence constraint loss and a cross-modal gradient modulation strategy to model the interactions of histology and molecular markers. Our experiments demonstrate that our method outperforms other state-of-the-art methods in classifying glioma, histology features, and molecular markers. Our method promises to promote precision oncology, with the potential to advance biomedical research and clinical applications. The code is available on GitHub.
Citations: 0
CVFSNet: A Cross View Fusion Scoring Network for end-to-end mTICI scoring
IF 10.7 · Zone 1 · Medicine
Medical image analysis · Pub Date: 2025-02-22 · DOI: 10.1016/j.media.2025.103508
Weijin Xu, Tao Tan, Huihua Yang, Wentao Liu, Yifu Chen, Ling Zhang, Xipeng Pan, Feng Gao, Yiming Deng, Theo van Walsum, Matthijs van der Sluijs, Ruisheng Su
The modified Thrombolysis In Cerebral Infarction (mTICI) score serves as one of the key clinical indicators for assessing the success of Mechanical Thrombectomy (MT), requiring physicians to inspect Digital Subtraction Angiography (DSA) images in both the coronal and sagittal views. However, assessing mTICI scores manually is time-consuming and subject to considerable observer variability. An automatic, objective, end-to-end method for assigning mTICI scores may effectively avoid observer errors. In this paper, we therefore propose a novel Cross View Fusion Scoring Network (CVFSNet) for automatic, objective, end-to-end mTICI scoring, which employs dual branches to simultaneously extract spatial-temporal features from the coronal and sagittal views. A novel Cross View Fusion Module (CVFM) then fuses the features from the two views: it exploits the positional characteristics of the coronal and sagittal views to generate a pseudo-oblique sagittal feature, ultimately constructing more representative features that enhance scoring performance. In addition, we provide AmTICIS, a newly collected and the first publicly available DSA image dataset with expert annotations for automatic mTICI scoring, which can encourage research on ischemic stroke based on DSA images and ultimately help patients receive better treatment. Extensive experimental results demonstrate the promising performance of our methods and the validity of the cross-view fusion module. Code and data will be available at https://github.com/xwjBupt/CVFSNet.
Citations: 0
MERIT: Multi-view evidential learning for reliable and interpretable liver fibrosis staging
IF 10.7 · Zone 1 · Medicine
Medical image analysis · Pub Date: 2025-02-22 · DOI: 10.1016/j.media.2025.103507
Yuanye Liu, Zheyao Gao, Nannan Shi, Fuping Wu, Yuxin Shi, Qingchao Chen, Xiahai Zhuang
Accurate staging of liver fibrosis from magnetic resonance imaging (MRI) is crucial in clinical practice. While conventional methods often focus on a specific sub-region, multi-view learning captures more information by analyzing multiple patches simultaneously. However, previous multi-view approaches typically cannot quantify uncertainty by nature, and they generally integrate features from different views in a black-box fashion, compromising both the reliability and the interpretability of the resulting models. In this work, we propose a new multi-view method based on evidential learning, referred to as MERIT, which tackles these two challenges in a unified framework. MERIT enables uncertainty quantification of the predictions to enhance reliability, and employs a logic-based combination rule to improve interpretability. Specifically, MERIT models the prediction from each sub-view as an opinion with quantified uncertainty under the guidance of subjective logic theory. Furthermore, a distribution-aware base rate is introduced to enhance performance, particularly in scenarios involving class distribution shifts. Finally, MERIT adopts a feature-specific combination rule to explicitly fuse multi-view predictions, thereby enhancing interpretability. Results showcase the effectiveness of the proposed MERIT, highlighting its reliability and offering both ad-hoc and post-hoc interpretability; they also illustrate that MERIT can elucidate the significance of each view in the decision-making process for liver fibrosis staging. Our code will be released via https://github.com/HenryLau7/MERIT.
Citations: 0
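The subjective-logic opinions that evidential learning builds on can be sketched with the common mapping from non-negative class evidence to belief and uncertainty masses (the standard Dirichlet formulation with a unit prior; this is a generic illustration, not MERIT's distribution-aware version):

```python
import numpy as np

def opinion_from_evidence(evidence):
    """Map non-negative per-class evidence to a subjective-logic opinion:
    a belief mass per class plus an explicit uncertainty mass.
    Uses the common Dirichlet formulation alpha_k = e_k + 1."""
    e = np.asarray(evidence, dtype=float)
    K = e.size
    S = e.sum() + K          # Dirichlet strength
    belief = e / S           # per-class belief masses
    u = K / S                # uncertainty mass; beliefs + u == 1
    return belief, u

# Strong evidence for class 1 -> confident opinion (low uncertainty).
b, u = opinion_from_evidence([1.0, 40.0, 2.0, 0.5])
assert abs(b.sum() + u - 1.0) < 1e-9
assert u < 0.1
```

With no evidence at all, the same formula returns u == 1, i.e. total uncertainty, which is what makes such models refuse to commit on out-of-distribution views.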
Large-scale benchmarking and boosting transfer learning for medical image analysis
IF 10.7 · Zone 1 · Medicine
Medical image analysis · Pub Date: 2025-02-21 · DOI: 10.1016/j.media.2025.103487
Mohammad Reza Hosseinzadeh Taher, Fatemeh Haghighi, Michael B. Gotway, Jianming Liang
Transfer learning, particularly fine-tuning models pretrained on photographic images for medical images, has proven indispensable for medical image analysis. There are numerous models with distinct architectures pretrained on various datasets using different strategies, but there is a lack of up-to-date, large-scale evaluations of their transferability to medical imaging, posing a challenge for practitioners in selecting the most appropriate pretrained models for their tasks at hand. To fill this gap, we conduct a comprehensive systematic study, focusing on (i) benchmarking numerous conventional and modern convolutional neural network (ConvNet) and vision transformer architectures across various medical tasks; (ii) investigating the impact of fine-tuning data size on the performance of ConvNets compared with vision transformers in medical imaging; (iii) examining the impact of pretraining data granularity on transfer learning performance; (iv) evaluating the transferability of a wide range of recent self-supervised methods with diverse training objectives to a variety of medical tasks across different modalities; and (v) delving into the efficacy of domain-adaptive pretraining on both photographic and medical datasets to develop high-performance models for medical tasks. Our large-scale study (~5,000 experiments) yields impactful insights: (1) ConvNets demonstrate higher transferability than vision transformers when fine-tuned for medical tasks; (2) ConvNets prove more annotation-efficient than vision transformers when fine-tuned for medical tasks; (3) fine-grained representations, rather than high-level semantic features, prove pivotal for fine-grained medical tasks; (4) self-supervised models excel at learning holistic features compared with supervised models; and (5) domain-adaptive pretraining leads to performant models by harnessing knowledge acquired from ImageNet and enhancing it through readily accessible expert annotations associated with medical datasets. As open science, all codes and pretrained models are available at GitHub.com/JLiangLab/BenchmarkTransferLearning (Version 2).
Citations: 0
Preoperative fracture reduction planning for image-guided pelvic trauma surgery: A comprehensive pipeline with learning
IF 10.7 · Zone 1 · Medicine
Medical image analysis · Pub Date: 2025-02-21 · DOI: 10.1016/j.media.2025.103506
Yanzhen Liu, Sutuke Yibulayimu, Yudi Sang, Gang Zhu, Chao Shi, Chendi Liang, Qiyong Cao, Chunpeng Zhao, Xinbao Wu, Yu Wang
Pelvic fractures are among the most complex challenges in orthopedic trauma, usually involving hipbone and sacrum fractures as well as joint dislocations. Traditional preoperative surgical planning relies on the operator's subjective interpretation of CT images, which is both time-consuming and prone to inaccuracies. This study introduces an automated preoperative planning solution for pelvic fracture reduction that addresses the limitations of conventional methods. The proposed solution includes a novel multi-scale distance-weighted neural network for segmenting pelvic fracture fragments from CT scans, and a learning-based approach to restore pelvic structure that combines a morphable-model-based method for single-bone fracture reduction with a recursive pose estimation module for joint dislocation reduction. Comprehensive experiments on a clinical dataset of 30 fracture cases demonstrated the efficacy of our methods. Our segmentation network outperformed traditional max-flow segmentation and networks without distance weighting, achieving a Dice similarity coefficient (DSC) of 0.986 ± 0.055 and a local DSC of 0.940 ± 0.056 around the fracture sites. The proposed reduction method surpassed mirroring and mean-template techniques, as well as an optimization-based joint matching method, achieving a target reduction error of (3.265 ± 1.485) mm, rotation errors of (3.476 ± 1.995)°, and translation errors of (2.773 ± 1.390) mm. In proof-of-concept cadaver studies, our method achieved a DSC of 0.988 in segmentation and a 3.731 mm error in reduction planning, which senior experts deemed excellent. In conclusion, our automated approach significantly improves on traditional preoperative planning, enhancing both efficiency and accuracy in pelvic fracture reduction.
Citations: 0
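The Dice similarity coefficient reported in this and several other entries is straightforward to compute; a minimal NumPy version for binary masks (the toy masks below are illustrative, not from the paper):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A| + |B|), with the empty-vs-empty case scored 1."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom

pred = np.array([[1, 1, 0], [0, 1, 0]])  # toy predicted mask
gt   = np.array([[1, 0, 0], [0, 1, 1]])  # toy ground-truth mask
score = dice(pred, gt)                   # 2*2 / (3+3) = 2/3
assert abs(score - 2.0 / 3.0) < 1e-12
```

A "local DSC" such as the one quoted above is the same measure restricted to a region of interest, here a neighborhood around the fracture sites.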
Hyperfusion: A hypernetwork approach to multimodal integration of tabular and medical imaging data for predictive modeling
IF 10.7 · Zone 1 · Medicine
Medical image analysis · Pub Date: 2025-02-21 · DOI: 10.1016/j.media.2025.103503
Daniel Duenias, Brennan Nichyporuk, Tal Arbel, Tammy Riklin Raviv, ADNI
The integration of diverse clinical modalities, such as medical imaging and the tabular data extracted from patients' Electronic Health Records (EHRs), is a crucial aspect of modern healthcare. Integrative analysis of multiple sources can provide a comprehensive understanding of a patient's clinical condition, improving diagnosis and treatment decisions. Deep Neural Networks (DNNs) consistently demonstrate outstanding performance in a wide range of multimodal tasks in the medical domain. However, effectively merging medical imaging with clinical, demographic, and genetic information represented as numerical tabular data remains a highly active and ongoing research pursuit. We present a novel framework based on hypernetworks to fuse clinical imaging and tabular data by conditioning the image processing on the EHR's values and measurements. This approach aims to leverage the complementary information present in these modalities to enhance the accuracy of various medical applications. We demonstrate the strength and generality of our method on two different brain Magnetic Resonance Imaging (MRI) analysis tasks: brain age prediction conditioned on the subject's sex, and multi-class Alzheimer's Disease (AD) classification conditioned on tabular data. We show that our framework outperforms both single-modality models and state-of-the-art MRI-tabular data fusion methods. Our code is available at https://github.com/daniel4725/HyperFusion.
Citations: 0
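The hypernetwork idea, one network generating the weights of another conditioned on auxiliary input, can be sketched in a few lines. This is an illustrative single linear layer in NumPy; the dimensions, the tanh mapping, and all variable names are hypothetical, not HyperFusion's architecture:

```python
import numpy as np

def hyper_conditioned_layer(tabular, img_feat, W_h, b_h, d_out=8):
    """Hypernetwork sketch: a small network maps the tabular vector to
    the weights of a linear layer, which is then applied to the image
    features, so image processing depends on the clinical record."""
    d_in = img_feat.size
    w_flat = np.tanh(tabular @ W_h + b_h)   # generated weights
    W = w_flat.reshape(d_in, d_out)
    return img_feat @ W                     # tabular-conditioned embedding

rng = np.random.default_rng(1)
tab = rng.normal(size=(5,))            # hypothetical EHR values (age, sex, ...)
img = rng.normal(size=(16,))           # hypothetical image-branch features
W_h = rng.normal(size=(5, 16 * 8))     # hypernetwork parameters
b_h = rng.normal(size=(16 * 8,))
out = hyper_conditioned_layer(tab, img, W_h, b_h)
assert out.shape == (8,)
```

The design point is that the tabular data does not merely concatenate with image features; it reshapes the image pathway itself, so two patients with identical scans but different records are processed differently.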
Joint coil sensitivity and motion correction in parallel MRI with a self-calibrating score-based diffusion model
IF 10.7 · Zone 1 · Medicine
Medical image analysis · Pub Date: 2025-02-21 · DOI: 10.1016/j.media.2025.103502
Lixuan Chen, Xuanyu Tian, Jiangjie Wu, Ruimin Feng, Guoyan Lao, Yuyao Zhang, Hongen Liao, Hongjiang Wei
Magnetic Resonance Imaging (MRI) stands as a powerful modality in clinical diagnosis. However, it faces challenges such as long acquisition times and vulnerability to motion-induced artifacts. While many existing motion correction algorithms have shown success, most fail to account for the impact of motion artifacts on coil sensitivity map (CSM) estimation during fast MRI reconstruction. This oversight can lead to significant performance degradation, as errors in the estimated CSMs can propagate and compromise motion correction. In this work, we propose JSMoCo, a novel method for jointly estimating motion parameters and time-varying coil sensitivity maps for under-sampled MRI reconstruction. The joint estimation presents a highly ill-posed inverse problem due to the increased number of unknowns. To address this challenge, we leverage score-based diffusion models as powerful priors and apply MRI physical principles to effectively constrain the solution space. Specifically, we parameterize rigid motion with trainable variables and model CSMs as polynomial functions. A Gibbs sampler is employed to ensure consistency between the sensitivity maps and the reconstructed images, effectively preventing error propagation from pre-estimated sensitivity maps to the final reconstructed images. We evaluate JSMoCo through 2D and 3D motion correction experiments on a simulated motion-corrupted fastMRI dataset and on in-vivo real MRI brain scans. The results demonstrate that JSMoCo successfully reconstructs high-quality MRI images from under-sampled k-space data, achieving robust motion correction by accurately estimating time-varying coil sensitivities. The code is available at https://github.com/MeijiTian/JSMoCo.
Citations: 0
MBSS-T1: Model-based subject-specific self-supervised motion correction for robust cardiac T1 mapping
IF 10.7 · Zone 1 · Medicine
Medical image analysis · Pub Date: 2025-02-19 · DOI: 10.1016/j.media.2025.103495
Eyal Hanania, Adi Zehavi-Lenz, Ilya Volovik, Daphna Link-Sourani, Israel Cohen, Moti Freiman
Cardiac T1 mapping is a valuable quantitative MRI technique for diagnosing diffuse myocardial diseases. Traditional methods, relying on breath-hold sequences and cardiac triggering based on an ECG signal, face challenges with patient compliance, limiting their effectiveness. Image registration can enable motion-robust cardiac T1 mapping, but inherent intensity differences between time points pose a challenge. We present MBSS-T1, a subject-specific self-supervised model for motion correction in cardiac T1 mapping. Physical constraints, implemented through a loss function comparing synthesized and motion-corrected images, enforce signal decay behavior, while anatomical constraints, applied via a Dice loss, ensure realistic deformations. The unique combination of these constraints yields motion-robust cardiac T1 mapping along the longitudinal relaxation axis. In a 5-fold experiment on a public dataset of 210 patients (STONE sequence) and an internal dataset of 19 patients (MOLLI sequence), MBSS-T1 outperformed baseline deep-learning registration methods. It achieved superior model-fitting quality (R²: 0.975 vs. 0.941 and 0.946 for STONE; 0.987 vs. 0.982 and 0.965 for MOLLI free-breathing; 0.994 vs. 0.993 and 0.991 for MOLLI breath-hold), anatomical alignment (Dice: 0.89 vs. 0.84 and 0.88 for STONE; 0.963 vs. 0.919 and 0.851 for MOLLI free-breathing; 0.954 vs. 0.924 and 0.871 for MOLLI breath-hold), and visual quality (4.33 vs. 3.38 and 3.66 for STONE; 4.1 vs. 3.5 and 3.28 for MOLLI free-breathing; 3.79 vs. 3.15 and 2.84 for MOLLI breath-hold). MBSS-T1 enables motion-robust T1 mapping for broader patient populations, overcoming challenges such as suboptimal compliance, and facilitates free-breathing cardiac T1 mapping without requiring large annotated datasets. Our code is available at https://github.com/TechnionComputationalMRILab/MBSS-T1.
Citations: 0
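The signal-decay behavior that such a physical constraint enforces follows the standard three-parameter inversion-recovery model used in cardiac T1 mapping; a minimal sketch with illustrative parameter values (this is the textbook model, not the paper's code):

```python
import numpy as np

def ir_signal(t, A, B, T1_star):
    """Three-parameter inversion-recovery model S(t) = A - B*exp(-t/T1*)."""
    return A - B * np.exp(-t / T1_star)

def t1_from_fit(A, B, T1_star):
    """Look-Locker correction: T1 = T1* * (B/A - 1)."""
    return T1_star * (B / A - 1.0)

# Illustrative values only: sample the relaxation curve at typical
# inversion times (ms) and recover T1 from the fitted parameters.
t = np.array([100.0, 200.0, 400.0, 800.0, 1600.0, 3200.0])
A, B, T1_star = 1.0, 2.0, 600.0
s = ir_signal(t, A, B, T1_star)
assert np.isclose(t1_from_fit(A, B, T1_star), 600.0)  # B/A - 1 = 1
```

Fitting A, B, and T1* per voxel and comparing the synthesized curve to the motion-corrected images is what lets a model-based loss penalize deformations that violate the relaxation physics.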