{"title":"Learning Feature Matching via Matchable Keypoint-Assisted Graph Neural Network","authors":"Zizhuo Li;Jiayi Ma","doi":"10.1109/TIP.2024.3512352","DOIUrl":null,"url":null,"abstract":"Accurately matching local features between a pair of images corresponding to the same 3D scene is a challenging computer vision task. Previous studies typically utilize attention-based graph neural networks (GNNs) with fully-connected graphs over keypoints within/across images for visual and geometric information reasoning. However, in the background of local feature matching, a significant number of keypoints are non-repeatable due to factors like occlusion and failure of the detector, and thus irrelevant for message passing. The connectivity with non-repeatable keypoints not only introduces redundancy, resulting in limited efficiency (quadratic computational complexity w.r.t. the keypoint number), but also interferes with the representation aggregation process, leading to limited accuracy. Aiming at the best of both worlds on accuracy and efficiency, we propose MaKeGNN, a sparse attention-based GNN architecture which bypasses non-repeatable keypoints and leverages matchable ones to guide compact and meaningful message passing. More specifically, our Bilateral Context-Aware Sampling (BCAS) Module first dynamically samples two small sets of well-distributed keypoints with high matchability scores from the image pair. Then, our Matchable Keypoint-Assisted Context Aggregation (MKACA) Module regards sampled informative keypoints as message bottlenecks and thus constrains each keypoint only to retrieve favorable contextual information from intra- and inter-matchable keypoints, evading the interference of irrelevant and redundant connectivity with non-repeatable ones. Furthermore, considering the potential noise in initial keypoints and sampled matchable ones, the MKACA module adopts a matchability-guided attentional aggregation operation for purer data-dependent context propagation. By these means, MaKeGNN outperforms the state-of-the-arts on multiple highly challenging benchmarks, while significantly reducing computational and memory complexity compared to typical attentional GNNs.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"154-169"},"PeriodicalIF":0.0000,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10794561/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Accurately matching local features between a pair of images of the same 3D scene is a challenging computer vision task. Previous studies typically employ attention-based graph neural networks (GNNs) with fully-connected graphs over keypoints within and across images for visual and geometric information reasoning. However, in the context of local feature matching, a significant number of keypoints are non-repeatable due to factors such as occlusion and detector failure, and are thus irrelevant for message passing. Connectivity with non-repeatable keypoints not only introduces redundancy, limiting efficiency (quadratic computational complexity w.r.t. the number of keypoints), but also interferes with representation aggregation, limiting accuracy. Aiming for the best of both worlds in accuracy and efficiency, we propose MaKeGNN, a sparse attention-based GNN architecture that bypasses non-repeatable keypoints and leverages matchable ones to guide compact and meaningful message passing. More specifically, our Bilateral Context-Aware Sampling (BCAS) Module first dynamically samples two small sets of well-distributed keypoints with high matchability scores from the image pair. Then, our Matchable Keypoint-Assisted Context Aggregation (MKACA) Module treats the sampled informative keypoints as message bottlenecks, constraining each keypoint to retrieve favorable contextual information only from intra- and inter-image matchable keypoints and thereby evading interference from irrelevant and redundant connections to non-repeatable ones. Furthermore, considering the potential noise in the initial keypoints and the sampled matchable ones, the MKACA module adopts a matchability-guided attentional aggregation operation for purer data-dependent context propagation. By these means, MaKeGNN outperforms the state of the art on multiple highly challenging benchmarks, while significantly reducing computational and memory complexity compared to typical attentional GNNs.
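
The two-stage idea the abstract describes (sample a small set of high-matchability keypoints, then route all attention through them as bottlenecks) can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the plain top-k sampling (the paper's BCAS additionally enforces spatial distribution, omitted here), and the score-weighted softmax as a stand-in for "matchability-guided attentional aggregation" are all assumptions made for illustration.

```python
# Hypothetical sketch of matchable-keypoint-bottlenecked message passing.
# All names, shapes, and design choices below are assumptions, not the paper's code.
import torch
import torch.nn.functional as F


def sample_matchable(desc: torch.Tensor, matchability: torch.Tensor, k: int):
    """BCAS-like step (assumed): keep the k keypoints with the highest
    predicted matchability scores to serve as message bottlenecks.

    desc:         (N, C) keypoint descriptors of one image
    matchability: (N,)   per-keypoint matchability scores in [0, 1]
    """
    idx = torch.topk(matchability, k=min(k, desc.shape[0])).indices
    return desc[idx], matchability[idx]


def bottleneck_attention(queries, bottlenecks, bottleneck_scores):
    """MKACA-like step (assumed): every keypoint attends only to the sampled
    matchable keypoints, with attention weights modulated by their
    matchability scores (one guess at "matchability-guided" aggregation).

    queries:           (N, C) all keypoints of the target image
    bottlenecks:       (K, C) sampled matchable keypoints (intra- or inter-image)
    bottleneck_scores: (K,)   matchability scores of the sampled keypoints
    """
    scale = queries.shape[-1] ** -0.5
    logits = queries @ bottlenecks.T * scale              # (N, K): O(N*K), not O(N^2)
    attn = F.softmax(logits, dim=-1) * bottleneck_scores  # down-weight noisy bottlenecks
    attn = attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-8)
    return queries + attn @ bottlenecks                   # residual message passing


# Toy usage: 2048 keypoints, 256-D descriptors, 128 bottlenecks.
descA = torch.randn(2048, 256)
scoresA = torch.rand(2048)
mkA, msA = sample_matchable(descA, scoresA, k=128)
updatedA = bottleneck_attention(descA, mkA, msA)
print(updatedA.shape)  # torch.Size([2048, 256])
```

Under these assumptions, the attention cost drops from O(N^2) for a fully-connected graph to O(N*K) with K << N, which is the source of the efficiency gain the abstract claims over typical attentional GNNs.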