SynMSE: A multimodal similarity evaluator for complex distribution discrepancy in unsupervised deformable multimodal medical image registration

Jingke Zhu, Boyun Zheng, Bing Xiong, Yuxin Zhang, Ming Cui, Deyu Sun, Jing Cai, Yaoqin Xie, Wenjian Qin

Medical Image Analysis, Volume 103, Article 103620 (published online 2025-04-22). DOI: 10.1016/j.media.2025.103620
Citations: 0
Abstract
Unsupervised deformable multimodal medical image registration often confronts complex scenarios, which include intermodality domain gaps, multi-organ anatomical heterogeneity, and physiological motion variability. These factors introduce substantial grayscale distribution discrepancies, hindering precise alignment between different imaging modalities. However, existing methods have not been sufficiently adapted to meet the specific demands of registration in such complex scenarios. To overcome the above challenges, we propose SynMSE, a novel multimodal similarity evaluator that can be seamlessly integrated as a plug-and-play module in any registration framework to serve as the similarity metric. SynMSE is trained using random transformations to simulate spatial misalignments and a structure-constrained generator to model grayscale distribution discrepancies. By emphasizing spatial alignment and mitigating the influence of complex distributional variations, SynMSE effectively addresses the aforementioned issues. Extensive experiments on the Learn2Reg 2022 CT-MR abdomen dataset, the clinical cervical CT-MR dataset, and the CuRIOUS MR-US brain dataset demonstrate that SynMSE achieves state-of-the-art performance. Our code is available on the project page https://github.com/MIXAILAB/SynMSE.
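The abstract positions SynMSE as a plug-and-play similarity metric that can replace hand-crafted losses such as MSE or mutual information inside any unsupervised registration framework. The sketch below is a minimal PyTorch illustration of that idea, not the authors' implementation (the official code is at https://github.com/MIXAILAB/SynMSE); the `SimilarityEvaluator` architecture, the `registration_loss` helper, and the lower-score-is-better convention are all assumptions made for this example.

```python
# Minimal, illustrative sketch of a learned similarity metric used as a
# drop-in registration loss. This is NOT the official SynMSE code; all
# names and architectural choices here are hypothetical.
import torch
import torch.nn as nn


class SimilarityEvaluator(nn.Module):
    """Tiny 3D CNN that scores how well a (fixed, warped moving) pair is
    spatially aligned, ideally ignoring cross-modality intensity differences."""

    def __init__(self, in_channels: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, fixed: torch.Tensor, warped: torch.Tensor) -> torch.Tensor:
        # Concatenate the pair along the channel axis and predict one
        # misalignment score per sample (lower = better, by assumption).
        return self.net(torch.cat([fixed, warped], dim=1)).squeeze(-1)


def registration_loss(evaluator: SimilarityEvaluator,
                      fixed: torch.Tensor,
                      warped_moving: torch.Tensor,
                      flow: torch.Tensor,
                      smooth_weight: float = 1.0) -> torch.Tensor:
    """Learned similarity term plus a standard smoothness penalty on the
    predicted displacement field `flow` of shape (B, 3, D, H, W)."""
    sim = evaluator(fixed, warped_moving).mean()
    # First-order finite differences along each spatial axis (diffusion regularizer).
    dz = flow[:, :, 1:, :, :] - flow[:, :, :-1, :, :]
    dy = flow[:, :, :, 1:, :] - flow[:, :, :, :-1, :]
    dx = flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]
    smooth = dz.pow(2).mean() + dy.pow(2).mean() + dx.pow(2).mean()
    return sim + smooth_weight * smooth


if __name__ == "__main__":
    # Toy shapes only, to show how the pieces fit together.
    evaluator = SimilarityEvaluator().eval()
    for p in evaluator.parameters():
        p.requires_grad_(False)  # evaluator frozen while the registration net trains
    fixed = torch.randn(1, 1, 32, 32, 32)
    warped = torch.randn(1, 1, 32, 32, 32, requires_grad=True)
    flow = torch.zeros(1, 3, 32, 32, 32, requires_grad=True)
    loss = registration_loss(evaluator, fixed, warped, flow)
    loss.backward()  # gradients reach the warped image / displacement field
    print(float(loss))
```

In this kind of setup the evaluator would first be trained on its own, for example on image pairs perturbed by random spatial transforms and synthetic cross-modality intensity shifts as the abstract describes, and then frozen while the deformable registration network is optimized against its score plus a smoothness regularizer.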
About the journal:
Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.