Feature-matching method based on keypoint response constraint using binary encoding of phase congruency

IF 7.5 · CAS Tier 1 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
Xiaomin Liu, Qiqi Li, Yuzhe Hu, Jeng-Shyang Pan, Huaqi Zhao, Donghua Yuan, Jun-Bao Li
{"title":"基于关键点响应约束的特征匹配方法,使用相位一致性二进制编码","authors":"Xiaomin Liu ,&nbsp;Qiqi Li ,&nbsp;Yuzhe Hu ,&nbsp;Jeng-Shyang Pan ,&nbsp;Huaqi Zhao ,&nbsp;Donghua Yuan ,&nbsp;Jun-Bao Li","doi":"10.1016/j.patcog.2024.111078","DOIUrl":null,"url":null,"abstract":"<div><div>At present, the cross-view geo-localization (CGL) task is still far from practical. This is mainly because of the intensity differences between the two images from different sensors. In this study, we propose a learning feature-matching framework with binary encoding of phase congruency to solve the problem of intensity differences between the two images. First, the autoencoder-weighted fusion method is used to obtain an intensity alignment image that would make the two images from different sensors comparable. Second, the keypoint responses of the two images are calculated using the binary encoding of the phase congruency theory, which is employed to construct the feature-matching method. This method considers the invariance of the phase information in weak-texture images and uses the phase information to compute the keypoint response with higher distinguishability and matchability. Finally, using the two intensity-aligned images, a method for computing the binary encoding of the phase congruency keypoint response loss function is employed to optimize the keypoint detector and feature descriptor and obtain the corresponding keypoint set of the two images. The experimental results show that the improved feature matching is superior to existing methods and solves the problem of view differences in object matching. The code can be found at <span><span>https://github.com/lqq-dot/FMPCKR</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"159 ","pages":"Article 111078"},"PeriodicalIF":7.5000,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Feature-matching method based on keypoint response constraint using binary encoding of phase congruency\",\"authors\":\"Xiaomin Liu ,&nbsp;Qiqi Li ,&nbsp;Yuzhe Hu ,&nbsp;Jeng-Shyang Pan ,&nbsp;Huaqi Zhao ,&nbsp;Donghua Yuan ,&nbsp;Jun-Bao Li\",\"doi\":\"10.1016/j.patcog.2024.111078\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>At present, the cross-view geo-localization (CGL) task is still far from practical. This is mainly because of the intensity differences between the two images from different sensors. In this study, we propose a learning feature-matching framework with binary encoding of phase congruency to solve the problem of intensity differences between the two images. First, the autoencoder-weighted fusion method is used to obtain an intensity alignment image that would make the two images from different sensors comparable. Second, the keypoint responses of the two images are calculated using the binary encoding of the phase congruency theory, which is employed to construct the feature-matching method. This method considers the invariance of the phase information in weak-texture images and uses the phase information to compute the keypoint response with higher distinguishability and matchability. Finally, using the two intensity-aligned images, a method for computing the binary encoding of the phase congruency keypoint response loss function is employed to optimize the keypoint detector and feature descriptor and obtain the corresponding keypoint set of the two images. 
The experimental results show that the improved feature matching is superior to existing methods and solves the problem of view differences in object matching. The code can be found at <span><span>https://github.com/lqq-dot/FMPCKR</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":49713,\"journal\":{\"name\":\"Pattern Recognition\",\"volume\":\"159 \",\"pages\":\"Article 111078\"},\"PeriodicalIF\":7.5000,\"publicationDate\":\"2024-10-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Pattern Recognition\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S003132032400829X\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S003132032400829X","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

At present, the cross-view geo-localization (CGL) task is still far from practical. This is mainly because of the intensity differences between the two images from different sensors. In this study, we propose a learning feature-matching framework with binary encoding of phase congruency to solve the problem of intensity differences between the two images. First, the autoencoder-weighted fusion method is used to obtain an intensity alignment image that would make the two images from different sensors comparable. Second, the keypoint responses of the two images are calculated using the binary encoding of the phase congruency theory, which is employed to construct the feature-matching method. This method considers the invariance of the phase information in weak-texture images and uses the phase information to compute the keypoint response with higher distinguishability and matchability. Finally, using the two intensity-aligned images, a method for computing the binary encoding of the phase congruency keypoint response loss function is employed to optimize the keypoint detector and feature descriptor and obtain the corresponding keypoint set of the two images. The experimental results show that the improved feature matching is superior to existing methods and solves the problem of view differences in object matching. The code can be found at https://github.com/lqq-dot/FMPCKR.
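The keypoint response in this framework is built on phase congruency theory. As background (this is the standard measure from Kovesi's phase congruency work, not taken from this paper, whose exact formulation may differ), the measure computed at each pixel $x$ over log-Gabor filter scales $n$ is

$$
PC(x) = \frac{\sum_n W(x)\,\big\lfloor A_n(x)\,\Delta\Phi_n(x) - T \big\rfloor}{\sum_n A_n(x) + \varepsilon},
\qquad
\Delta\Phi_n(x) = \cos\big(\phi_n(x) - \bar{\phi}(x)\big) - \big|\sin\big(\phi_n(x) - \bar{\phi}(x)\big)\big|,
$$

where $A_n(x)$ and $\phi_n(x)$ are the amplitude and phase of the $n$-th filter response, $\bar{\phi}(x)$ is the weighted mean phase, $W(x)$ weights by frequency spread, $T$ is a noise threshold, $\lfloor\cdot\rfloor$ clamps negative values to zero, and $\varepsilon$ prevents division by zero. Because $PC(x)$ is a normalized ratio of filter responses, it is largely invariant to overall intensity and contrast, which is why phase information remains stable in weak-texture, cross-sensor images.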
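To make the binary-encoding step concrete, below is a minimal Python sketch of one plausible reading: per-orientation phase congruency maps are thresholded into bit planes, the keypoint response counts how many orientations fire at a pixel, and binary descriptors are matched by Hamming distance. This is an illustration under stated assumptions, not the authors' implementation (their code is at https://github.com/lqq-dot/FMPCKR); the function names, the per-orientation-mean threshold, and the toy response are all hypothetical.

```python
import numpy as np

def binary_encode_pc(pc_maps, threshold=None):
    """Binarize per-orientation phase congruency maps into bit planes.

    pc_maps: (O, H, W) array with one phase congruency response map per
    filter orientation (assumed input format; the paper's encoding may differ).
    """
    pc = np.asarray(pc_maps, dtype=np.float64)
    if threshold is None:
        # Hypothetical choice: threshold each orientation at its mean response.
        threshold = pc.mean(axis=(1, 2), keepdims=True)
    return (pc > threshold).astype(np.uint8)  # (O, H, W) of 0/1 bits

def keypoint_response(bits):
    """Toy keypoint response: count orientations that fire at each pixel."""
    return bits.sum(axis=0)  # (H, W), values in [0, O]

def hamming_match(desc_a, desc_b):
    """Nearest-neighbor matching of binary descriptors by Hamming distance.

    desc_a: (Na, D) and desc_b: (Nb, D) arrays of 0/1 bits.
    Returns, for each row of desc_a, the index of the closest row in desc_b.
    """
    # XOR counts the disagreeing bits between every pair of codes.
    dists = (desc_a[:, None, :] ^ desc_b[None, :, :]).sum(axis=2)
    return dists.argmin(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pc_maps = rng.random((6, 32, 32))   # 6 orientations, 32x32 image
    bits = binary_encode_pc(pc_maps)
    resp = keypoint_response(bits)
    print("strongest keypoint response:", int(resp.max()))
```

In practice the bit planes would be packed into bytes and compared with popcount (e.g., OpenCV's BFMatcher with NORM_HAMMING); the broadcasted XOR above is kept only for readability.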
Source journal
Pattern Recognition (Engineering Technology: Electrical & Electronic Engineering)
CiteScore: 14.40
Self-citation rate: 16.20%
Articles per year: 683
Average review time: 5.6 months
Journal description: The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.