AAPMatcher: Adaptive attention pruning matcher for accurate local feature matching

Impact Factor: 6.0 · CAS Region 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Xuan Fan, Sijia Liu, Shuaiyan Liu, Lijun Zhao, Ruifeng Li
{"title":"AAPMatcher:自适应注意力修剪匹配器,用于精确的局部特征匹配","authors":"Xuan Fan ,&nbsp;Sijia Liu ,&nbsp;Shuaiyan Liu ,&nbsp;Lijun Zhao ,&nbsp;Ruifeng Li","doi":"10.1016/j.neunet.2025.107403","DOIUrl":null,"url":null,"abstract":"<div><div>Local feature matching, which seeks to establish correspondences between two images, serves as a fundamental component in numerous computer vision applications, such as camera tracking and 3D mapping. Recently, Transformer has demonstrated remarkable capability in modeling accurate correspondences for the two input sequences owing to its long-range context integration capability. Whereas, indiscriminate modeling in traditional transformers inevitably introduces noise and includes irrelevant information which can degrade the quality of feature representations. Towards this end, we introduce an <em>adaptive attention pruning matcher for accurate local feature matching (AAPMatcher)</em>, which is designed for robust and accurate local feature matching. We overhaul the traditional uniform feature extraction for sequences by introducing the adaptive pruned transformer (APFormer), which adaptively retains the most profitable attention values for feature consolidation, enabling the network to obtain more useful feature information while filtering out useless information. Moreover, considering the fixed combination of self- and cross-APFormer greatly limits the flexibility of the network, we propose a two-stage <em>adaptive hybrid attention strategy (AHAS)</em>, which achieves the optimal combination for APFormers in a coarse to fine manner. Benefiting from the clean feature representations and the optimal combination of APFormers, AAPMatcher exceeds the state-of-the-art approaches over multiple benchmarks, including pose estimation, homography estimation, and visual localization.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"188 ","pages":"Article 107403"},"PeriodicalIF":6.0000,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AAPMatcher: Adaptive attention pruning matcher for accurate local feature matching\",\"authors\":\"Xuan Fan ,&nbsp;Sijia Liu ,&nbsp;Shuaiyan Liu ,&nbsp;Lijun Zhao ,&nbsp;Ruifeng Li\",\"doi\":\"10.1016/j.neunet.2025.107403\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Local feature matching, which seeks to establish correspondences between two images, serves as a fundamental component in numerous computer vision applications, such as camera tracking and 3D mapping. Recently, Transformer has demonstrated remarkable capability in modeling accurate correspondences for the two input sequences owing to its long-range context integration capability. Whereas, indiscriminate modeling in traditional transformers inevitably introduces noise and includes irrelevant information which can degrade the quality of feature representations. Towards this end, we introduce an <em>adaptive attention pruning matcher for accurate local feature matching (AAPMatcher)</em>, which is designed for robust and accurate local feature matching. We overhaul the traditional uniform feature extraction for sequences by introducing the adaptive pruned transformer (APFormer), which adaptively retains the most profitable attention values for feature consolidation, enabling the network to obtain more useful feature information while filtering out useless information. 
Moreover, considering the fixed combination of self- and cross-APFormer greatly limits the flexibility of the network, we propose a two-stage <em>adaptive hybrid attention strategy (AHAS)</em>, which achieves the optimal combination for APFormers in a coarse to fine manner. Benefiting from the clean feature representations and the optimal combination of APFormers, AAPMatcher exceeds the state-of-the-art approaches over multiple benchmarks, including pose estimation, homography estimation, and visual localization.</div></div>\",\"PeriodicalId\":49763,\"journal\":{\"name\":\"Neural Networks\",\"volume\":\"188 \",\"pages\":\"Article 107403\"},\"PeriodicalIF\":6.0000,\"publicationDate\":\"2025-03-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0893608025002825\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025002825","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Local feature matching, which seeks to establish correspondences between two images, is a fundamental component of numerous computer vision applications such as camera tracking and 3D mapping. Recently, the Transformer has demonstrated a remarkable ability to model accurate correspondences between two input sequences owing to its long-range context integration. However, indiscriminate modeling in traditional Transformers inevitably introduces noise and irrelevant information, which can degrade the quality of feature representations. To this end, we introduce the adaptive attention pruning matcher (AAPMatcher), designed for robust and accurate local feature matching. We overhaul the traditional uniform feature extraction for sequences by introducing the adaptive pruned transformer (APFormer), which adaptively retains the most profitable attention values for feature consolidation, enabling the network to obtain more useful feature information while filtering out useless information. Moreover, since a fixed combination of self- and cross-APFormers greatly limits the flexibility of the network, we propose a two-stage adaptive hybrid attention strategy (AHAS), which finds the optimal combination of APFormers in a coarse-to-fine manner. Benefiting from the clean feature representations and the optimal combination of APFormers, AAPMatcher outperforms state-of-the-art approaches on multiple benchmarks, including pose estimation, homography estimation, and visual localization.
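The abstract describes APFormer as retaining only the most profitable attention values during feature consolidation, but it gives no equations or code. The sketch below is a minimal, hypothetical illustration of attention pruning in a transformer layer: the top-k retention rule, the keep_ratio parameter, and the function name pruned_attention are assumptions made here for illustration, not the authors' actual APFormer or AHAS design.

# Minimal sketch of attention pruning, assuming a simple top-k retention rule.
# This is NOT the paper's APFormer; it only illustrates the general idea of
# keeping the strongest attention weights per query and discarding the rest.
import torch
import torch.nn.functional as F


def pruned_attention(q, k, v, keep_ratio=0.25):
    # q, k, v: (batch, heads, seq_len, head_dim)
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5      # (B, H, Lq, Lk)
    attn = F.softmax(scores, dim=-1)

    # Keep the top-k attention weights per query and zero out the rest.
    # keep_ratio is an assumed hyper-parameter, not the paper's adaptive rule.
    k_keep = max(1, int(attn.size(-1) * keep_ratio))
    topk_vals, _ = attn.topk(k_keep, dim=-1)
    threshold = topk_vals[..., -1:]                   # smallest retained weight
    pruned = attn * (attn >= threshold)

    # Renormalize so the retained weights sum to one for each query.
    pruned = pruned / pruned.sum(dim=-1, keepdim=True).clamp_min(1e-8)
    return pruned @ v                                 # (B, H, Lq, head_dim)


if __name__ == "__main__":
    B, H, L, D = 1, 4, 128, 32
    q, k, v = (torch.randn(B, H, L, D) for _ in range(3))
    out = pruned_attention(q, k, v)
    print(out.shape)  # torch.Size([1, 4, 128, 32])

In the actual method, the pruning is described as adaptive rather than governed by a fixed keep_ratio, and the self- and cross-attention APFormer blocks are combined by the two-stage AHAS search; consult the paper for the precise criteria.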
Source journal: Neural Networks (Engineering & Technology — Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles per year: 425
Review time: 67 days
Journal description: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.