Lei Li, Liumin Zhu, Qifu Wang, Zhuoli Dong, Tianli Liao, Peng Li
{"title":"基于改进模块的无监督多模态图像配准双流网络。","authors":"Lei Li, Liumin Zhu, Qifu Wang, Zhuoli Dong, Tianli Liao, Peng Li","doi":"10.1007/s12539-025-00707-5","DOIUrl":null,"url":null,"abstract":"<p><p> Multi-modal medical image registration aims to align images from different modalities to establish spatial correspondences. Although deep learning-based methods have shown great potential, the lack of explicit reference relations makes unsupervised multi-modal registration still a challenging task. In this paper, we propose a novel unsupervised dual-stream multi-modal registration framework (DSMR), which combines a dual-stream registration network with a refinement module. Unlike existing methods that treat multi-modal registration as a uni-modal problem using a translation network, DSMR leverages the moving, fixed and translated images to generate two deformation fields. Specifically, we first utilize a translation network to convert a moving image into a translated image similar to a fixed image. Then, we employ the dual-stream registration network to compute two deformation fields respectively: the initial deformation field generated from the fixed image and the moving image, and the translated deformation field generated from the translated image and the fixed image. The translated deformation field acts as a pseudo-ground truth to refine the initial deformation field and mitigate issues such as artificial features introduced by translation. Finally, we use the refinement module to enhance the deformation field by integrating registration errors and contextual information. Extensive experimental results show that our DSMR achieves exceptional performance, demonstrating its strong generalization in learning the spatial relationships between images from unsupervised modalities. The source code of this work is available at https://github.com/raylihaut/DSMR .</p>","PeriodicalId":13670,"journal":{"name":"Interdisciplinary Sciences: Computational Life Sciences","volume":" ","pages":""},"PeriodicalIF":3.9000,"publicationDate":"2025-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DSMR: Dual-Stream Networks with Refinement Module for Unsupervised Multi-modal Image Registration.\",\"authors\":\"Lei Li, Liumin Zhu, Qifu Wang, Zhuoli Dong, Tianli Liao, Peng Li\",\"doi\":\"10.1007/s12539-025-00707-5\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p> Multi-modal medical image registration aims to align images from different modalities to establish spatial correspondences. Although deep learning-based methods have shown great potential, the lack of explicit reference relations makes unsupervised multi-modal registration still a challenging task. In this paper, we propose a novel unsupervised dual-stream multi-modal registration framework (DSMR), which combines a dual-stream registration network with a refinement module. Unlike existing methods that treat multi-modal registration as a uni-modal problem using a translation network, DSMR leverages the moving, fixed and translated images to generate two deformation fields. Specifically, we first utilize a translation network to convert a moving image into a translated image similar to a fixed image. 
Then, we employ the dual-stream registration network to compute two deformation fields respectively: the initial deformation field generated from the fixed image and the moving image, and the translated deformation field generated from the translated image and the fixed image. The translated deformation field acts as a pseudo-ground truth to refine the initial deformation field and mitigate issues such as artificial features introduced by translation. Finally, we use the refinement module to enhance the deformation field by integrating registration errors and contextual information. Extensive experimental results show that our DSMR achieves exceptional performance, demonstrating its strong generalization in learning the spatial relationships between images from unsupervised modalities. The source code of this work is available at https://github.com/raylihaut/DSMR .</p>\",\"PeriodicalId\":13670,\"journal\":{\"name\":\"Interdisciplinary Sciences: Computational Life Sciences\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":3.9000,\"publicationDate\":\"2025-04-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Interdisciplinary Sciences: Computational Life Sciences\",\"FirstCategoryId\":\"99\",\"ListUrlMain\":\"https://doi.org/10.1007/s12539-025-00707-5\",\"RegionNum\":2,\"RegionCategory\":\"生物学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICAL & COMPUTATIONAL BIOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Interdisciplinary Sciences: Computational Life Sciences","FirstCategoryId":"99","ListUrlMain":"https://doi.org/10.1007/s12539-025-00707-5","RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICAL & COMPUTATIONAL BIOLOGY","Score":null,"Total":0}
DSMR: Dual-Stream Networks with Refinement Module for Unsupervised Multi-modal Image Registration.
Multi-modal medical image registration aims to align images from different modalities to establish spatial correspondences. Although deep learning-based methods have shown great potential, the lack of explicit reference relations makes unsupervised multi-modal registration a challenging task. In this paper, we propose a novel unsupervised dual-stream multi-modal registration framework (DSMR), which combines a dual-stream registration network with a refinement module. Unlike existing methods that treat multi-modal registration as a uni-modal problem using a translation network, DSMR leverages the moving, fixed and translated images to generate two deformation fields. Specifically, we first utilize a translation network to convert the moving image into a translated image that resembles the fixed image. Then, we employ the dual-stream registration network to compute two deformation fields: the initial deformation field, generated from the fixed and moving images, and the translated deformation field, generated from the translated and fixed images. The translated deformation field acts as a pseudo-ground truth to refine the initial deformation field and mitigate issues such as artificial features introduced by translation. Finally, we use the refinement module to enhance the deformation field by integrating registration errors and contextual information. Extensive experimental results show that DSMR achieves exceptional performance, demonstrating strong generalization in learning spatial relationships between multi-modal images without supervision. The source code of this work is available at https://github.com/raylihaut/DSMR .
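The abstract describes a three-stage data flow: a translation network maps the moving image into the fixed modality, a dual-stream registration network produces an initial and a translated deformation field, and a refinement module fuses them with the registration error. The following PyTorch sketch only mirrors that data flow under stated assumptions; every module, layer size, channel count and tensor shape is a placeholder, not the authors' implementation (which is available in the linked repository).

```python
# Minimal sketch of the DSMR data flow described in the abstract.
# All networks, shapes and names below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallUNet(nn.Module):
    """Placeholder conv network standing in for each sub-network."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def warp(image, flow):
    """Warp `image` with a dense displacement field `flow` of shape (B, 3, D, H, W),
    assuming flow channels are (x, y, z) displacements in voxels."""
    b, _, d, h, w = image.shape
    # Identity sampling grid in normalized [-1, 1] coordinates.
    theta = torch.eye(3, 4, device=image.device).unsqueeze(0).repeat(b, 1, 1)
    grid = F.affine_grid(theta, size=image.shape, align_corners=True)
    # Convert voxel displacements to the normalized grid convention.
    norm = torch.tensor([w - 1, h - 1, d - 1], device=image.device) / 2.0
    disp = flow.permute(0, 2, 3, 4, 1) / norm
    return F.grid_sample(image, grid + disp, align_corners=True)


class DSMRSketch(nn.Module):
    """Translation net -> dual-stream registration -> refinement (hypothetical)."""

    def __init__(self):
        super().__init__()
        self.translator = SmallUNet(1, 1)  # moving image -> fixed-like translated image
        self.reg_init = SmallUNet(2, 3)    # stream 1: (moving, fixed) -> initial field
        self.reg_trans = SmallUNet(2, 3)   # stream 2: (translated, fixed) -> translated field
        self.refiner = SmallUNet(7, 3)     # fuses both fields plus a registration-error map

    def forward(self, moving, fixed):
        translated = self.translator(moving)
        phi_init = self.reg_init(torch.cat([moving, fixed], dim=1))
        phi_trans = self.reg_trans(torch.cat([translated, fixed], dim=1))  # pseudo-ground truth
        # Registration error between the warped moving image and the fixed image.
        error = fixed - warp(moving, phi_init)
        phi_refined = phi_init + self.refiner(torch.cat([phi_init, phi_trans, error], dim=1))
        return phi_refined, phi_trans, translated


if __name__ == "__main__":
    mov = torch.rand(1, 1, 16, 32, 32)
    fix = torch.rand(1, 1, 16, 32, 32)
    field, pseudo_gt, trans = DSMRSketch()(mov, fix)
    print(field.shape)  # torch.Size([1, 3, 16, 32, 32])
```

In this sketch the refinement module simply predicts a residual correction to the initial field from the two fields and an intensity-error map; how DSMR actually integrates registration errors and contextual information is specified in the paper and repository, not here.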
Journal introduction:
Interdisciplinary Sciences: Computational Life Sciences aims to cover the most recent and outstanding developments in interdisciplinary areas of science, with a particular focus on computational life sciences, a field enjoying rapid development at the forefront of scientific research and technology.
The journal publishes original papers of significant general interest covering recent research and developments. Articles are published rapidly by taking full advantage of internet technology for online submission and peer review of manuscripts, and then by publishing OnlineFirst through SpringerLink even before the issue is built or sent to the printer.
The editorial board consists of many leading scientists with international reputations, among others Luc Montagnier (UNESCO, France), Dennis Salahub (University of Calgary, Canada), and Weitao Yang (Duke University, USA). Prof. Dongqing Wei at Shanghai Jiao Tong University is the editor-in-chief; he has made important contributions in bioinformatics and computational physics and is best known for his ground-breaking work on the theory of ferroelectric liquids. With the help of a team of associate editors and the editorial board, the journal aims to establish a sound international reputation.