Attention-enhanced Dual-stream Registration Network via Mixed Attention Transformer and Gated Adaptive Fusion

IF 10.7 | CAS Tier 1 (Medicine) | JCR Q1, Computer Science, Artificial Intelligence
Yuan Chang, Zheng Li
{"title":"Attention-enhanced Dual-stream Registration Network via Mixed Attention Transformer and Gated Adaptive Fusion","authors":"Yuan Chang,&nbsp;Zheng Li","doi":"10.1016/j.media.2025.103713","DOIUrl":null,"url":null,"abstract":"<div><div>Deformable registration requires extracting salient features within each image and finding feature pairs with potential matching possibilities between the moving and fixed images, thereby estimating the deformation field used to align the images to be registered. With the development of deep learning, various deformable registration networks utilizing advanced architectures such as CNNs or Transformers have been proposed, showing excellent registration performance. However, existing works fail to effectively achieve both feature extraction within images and feature matching between images simultaneously. In this paper, we propose a novel Attention-enhanced Dual-stream Registration Network (ADRNet) for deformable brain MRI registration. First, we use parallel CNN modules to extract shallow features from the moving and fixed images separately. Then, we propose a Mixed Attention Transformer (MAT) module with self-attention, cross-attention, and local attention to model self-correlation and cross-correlation to find features for matching. Finally, we improve skip connections, a key component of U-shape networks ignored by existing methods. We propose a Gated Adaptive Fusion (GAF) module with a gate mechanism, using decoding features to control the encoding features transmitted through skip connections, to better integrate encoder–decoder features, thereby obtaining matching features with more accurate one-to-one correspondence. The extensive and comprehensive experiments on three public brain MRI datasets demonstrate that our method achieves state-of-the-art registration performance.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"105 ","pages":"Article 103713"},"PeriodicalIF":10.7000,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical image analysis","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1361841525002609","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Deformable registration requires extracting salient features within each image and finding potentially matching feature pairs between the moving and fixed images, thereby estimating the deformation field used to align the images to be registered. With the development of deep learning, various deformable registration networks built on advanced architectures such as CNNs or Transformers have been proposed and show excellent registration performance. However, existing works struggle to perform both feature extraction within images and feature matching between images effectively at the same time. In this paper, we propose a novel Attention-enhanced Dual-stream Registration Network (ADRNet) for deformable brain MRI registration. First, we use parallel CNN modules to extract shallow features from the moving and fixed images separately. Then, we propose a Mixed Attention Transformer (MAT) module that combines self-attention, cross-attention, and local attention to model self-correlation and cross-correlation and thus identify features for matching. Finally, we improve skip connections, a key component of U-shaped networks overlooked by existing methods. We propose a Gated Adaptive Fusion (GAF) module with a gate mechanism that uses decoder features to control the encoder features transmitted through skip connections, better integrating encoder–decoder features and thereby obtaining matching features with more accurate one-to-one correspondence. Extensive experiments on three public brain MRI datasets demonstrate that our method achieves state-of-the-art registration performance.
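As a rough illustration of the gating idea described in the abstract, the sketch below shows how decoder features could gate encoder features arriving over a skip connection. This is a minimal PyTorch sketch based only on the abstract's description; the module name `GatedFusion`, the sigmoid gate predicted by `gate_conv`, and the 3D convolution shapes are illustrative assumptions, not the authors' GAF implementation.

```python
# Minimal sketch (assumptions, not the paper's code): decoder features gate the
# encoder features passed through a skip connection before fusion.
import torch
import torch.nn as nn


class GatedFusion(nn.Module):
    """Gate skip-connection encoder features with decoder features, then fuse."""

    def __init__(self, channels: int):
        super().__init__()
        # Predict a gate in (0, 1) from the concatenated encoder/decoder features.
        self.gate_conv = nn.Sequential(
            nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        # Fuse the gated encoder features with the decoder features.
        self.fuse_conv = nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        gate = self.gate_conv(torch.cat([enc_feat, dec_feat], dim=1))
        gated_enc = gate * enc_feat          # decoder-controlled encoder features
        return self.fuse_conv(torch.cat([gated_enc, dec_feat], dim=1))


if __name__ == "__main__":
    enc = torch.randn(1, 16, 24, 24, 24)     # encoder features from a skip connection
    dec = torch.randn(1, 16, 24, 24, 24)     # upsampled decoder features
    out = GatedFusion(16)(enc, dec)
    print(out.shape)                          # torch.Size([1, 16, 24, 24, 24])
```

The design choice shown here, predicting a per-voxel, per-channel gate from both feature streams, is one common way to let the decoder suppress irrelevant encoder responses; the paper's actual GAF module may differ in structure and detail.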
Source journal
Medical Image Analysis (Engineering Technology - Engineering: Biomedical)
CiteScore: 22.10
Self-citation rate: 6.40%
Articles published per year: 309
Review time: 6.6 months
Journal introduction: Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.