Dongxue Li, Xin Yang, Songyu Chen, Liwei Deng, Qi Lan, Sijuan Huang, Jing Wang
TSMR-Net: a two-stage multimodal medical image registration method via pseudo-image generation and deformable registration
Pattern Recognition Letters, Volume 197, Pages 359-367. Published online 2025-09-14. DOI: 10.1016/j.patrec.2025.09.006
Citations: 0
Abstract
Multimodal medical image registration is critical for accurate diagnosis, treatment planning, and surgical guidance. However, differences in imaging mechanisms cause substantial appearance discrepancies between modalities, hindering effective feature extraction and similarity measurement. We propose TSMR-Net, a two-stage multimodal registration framework. In the first stage, an Intensity Distribution Regression module nonlinearly transforms the fixed image into a modality-consistent generated fixed-like image, reducing inter-modality appearance gaps. In the second stage, a deformable registration network aligns the generated fixed-like and moving images using a unimodal similarity metric. The architecture incorporates a parallel downsampling module for multi-scale spatial feature capture and residual skip connections with a 3D channel interaction module to enhance feature propagation. Experiments on IXI and BraTS2023 datasets show that TSMR-Net outperforms state-of-the-art methods in alignment precision, structural consistency, and deformation stability. These findings validate the two-stage strategy’s effectiveness in bridging modality gaps and improving registration accuracy. TSMR-Net provides a scalable, robust solution for diverse multimodal registration tasks with strong potential for clinical application.
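The second stage described above resamples the moving image through a dense deformation field, as deformable registration networks typically do with a spatial-transformer layer. Below is a minimal sketch of that warping step in NumPy/SciPy; the function name `warp_image` and the 2D toy setup are illustrative and not taken from the paper, which operates on 3D volumes.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(moving, displacement):
    """Warp a 2D moving image with a dense displacement field.

    moving: (H, W) array.
    displacement: (2, H, W) array of per-pixel offsets (dy, dx).
    Each output pixel (y, x) is sampled from the moving image at
    (y + dy, x + dx) with bilinear interpolation, mirroring the
    resampling step of a deformable registration network.
    """
    h, w = moving.shape
    grid_y, grid_x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([grid_y + displacement[0], grid_x + displacement[1]])
    return map_coordinates(moving, coords, order=1, mode="nearest")

# Sanity check: a zero displacement field reproduces the moving image.
img = np.arange(16, dtype=float).reshape(4, 4)
assert np.allclose(warp_image(img, np.zeros((2, 4, 4))), img)
```

Because the first stage maps the fixed image into the moving image's modality, a simple unimodal similarity metric (e.g. mean squared error between the warped moving image and the generated fixed-like image) can then drive the field estimation, avoiding cross-modal metrics such as mutual information.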
About the journal:
Pattern Recognition Letters aims at rapid publication of concise articles of broad interest in pattern recognition.
Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition.