TSMR-Net: a two-stage multimodal medical image registration method via pseudo-image generation and deformable registration

IF 3.3 · CAS Zone 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Dongxue Li, Xin Yang, Songyu Chen, Liwei Deng, Qi Lan, Sijuan Huang, Jing Wang
{"title":"TSMR-Net: a two-stage multimodal medical image registration method via pseudo-image generation and deformable registration","authors":"Dongxue Li ,&nbsp;Xin Yang ,&nbsp;Songyu Chen ,&nbsp;Liwei Deng ,&nbsp;Qi Lan ,&nbsp;Sijuan Huang ,&nbsp;Jing Wang","doi":"10.1016/j.patrec.2025.09.006","DOIUrl":null,"url":null,"abstract":"<div><div>Multimodal medical image registration is critical for accurate diagnosis, treatment planning, and surgical guidance. However, differences in imaging mechanisms cause substantial appearance discrepancies between modalities, hindering effective feature extraction and similarity measurement. We propose TSMR-Net, a two-stage multimodal registration framework. In the first stage, an Intensity Distribution Regression module nonlinearly transforms the fixed image into a modality-consistent generated fixed-like image, reducing inter-modality appearance gaps. In the second stage, a deformable registration network aligns the generated fixed-like and moving images using a unimodal similarity metric. The architecture incorporates a parallel downsampling module for multi-scale spatial feature capture and residual skip connections with a 3D channel interaction module to enhance feature propagation. Experiments on IXI and BraTS2023 datasets show that TSMR-Net outperforms state-of-the-art methods in alignment precision, structural consistency, and deformation stability. These findings validate the two-stage strategy’s effectiveness in bridging modality gaps and improving registration accuracy. 
TSMR-Net provides a scalable, robust solution for diverse multimodal registration tasks with strong potential for clinical application.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"197 ","pages":"Pages 359-367"},"PeriodicalIF":3.3000,"publicationDate":"2025-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition Letters","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167865525003162","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Multimodal medical image registration is critical for accurate diagnosis, treatment planning, and surgical guidance. However, differences in imaging mechanisms cause substantial appearance discrepancies between modalities, hindering effective feature extraction and similarity measurement. We propose TSMR-Net, a two-stage multimodal registration framework. In the first stage, an Intensity Distribution Regression module nonlinearly transforms the fixed image into a modality-consistent generated fixed-like image, reducing inter-modality appearance gaps. In the second stage, a deformable registration network aligns the generated fixed-like and moving images using a unimodal similarity metric. The architecture incorporates a parallel downsampling module for multi-scale spatial feature capture and residual skip connections with a 3D channel interaction module to enhance feature propagation. Experiments on IXI and BraTS2023 datasets show that TSMR-Net outperforms state-of-the-art methods in alignment precision, structural consistency, and deformation stability. These findings validate the two-stage strategy’s effectiveness in bridging modality gaps and improving registration accuracy. TSMR-Net provides a scalable, robust solution for diverse multimodal registration tasks with strong potential for clinical application.
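The abstract's two-stage idea can be illustrated with a classical, non-learned analogue (this is a sketch, not the paper's method): stage one maps the fixed image into the moving image's intensity distribution via histogram matching, standing in for the learned Intensity Distribution Regression module; stage two can then score alignment with a simple unimodal similarity metric such as global normalized cross-correlation, which would fail across a large modality gap but works well once appearances agree. All function names below are illustrative.

```python
import numpy as np

def ncc(a, b):
    """Global normalized cross-correlation between two volumes (1.0 = identical up to affine intensity change)."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def match_intensity(fixed, moving):
    """Histogram matching: remap `fixed`'s intensities so their distribution
    matches `moving`'s, producing a 'fixed-like' image in the moving modality."""
    f_vals, f_idx, f_counts = np.unique(fixed.ravel(),
                                        return_inverse=True, return_counts=True)
    m_vals, m_counts = np.unique(moving.ravel(), return_counts=True)
    f_cdf = np.cumsum(f_counts) / fixed.size   # CDF of fixed-image intensities
    m_cdf = np.cumsum(m_counts) / moving.size  # CDF of moving-image intensities
    # For each fixed intensity, look up the moving intensity at the same quantile.
    mapped = np.interp(f_cdf, m_cdf, m_vals)
    return mapped[f_idx].reshape(fixed.shape)

# Synthetic demo: same "anatomy", nonlinearly different "modality".
rng = np.random.default_rng(0)
fixed = rng.random((8, 8, 8))
moving = fixed ** 8  # monotone nonlinear intensity change, as between modalities
fixed_like = match_intensity(fixed, moving)
# NCC across the raw modality gap is noticeably lower than after mapping.
print(ncc(fixed, moving), ncc(fixed_like, moving))
```

Because the synthetic intensity change is monotone, histogram matching recovers it exactly and the unimodal metric saturates near 1.0; the learned regression module in TSMR-Net plays the analogous role for real modality pairs, where no such closed-form mapping exists.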


Source journal: Pattern Recognition Letters (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 12.40
Self-citation rate: 5.90%
Articles per year: 287
Review time: 9.1 months
Journal description: Pattern Recognition Letters aims at rapid publication of concise articles of broad interest in pattern recognition. Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition.