Cross-Attention Network for Cross-View Image Geo-Localization

Jingjing Wang, Xi Li
DOI: 10.1109/ISAS59543.2023.10164457
Published in: 2023 6th International Symposium on Autonomous Systems (ISAS)
Publication date: 2023-06-23
Citation count: 0

Abstract

The task of cross-view geo-location is to get a corresponding image from a dataset of Global Positioning System (GPS) labeled aerial-view images, given a ground-view query image with an unknown location. This task presents challenges due to the significant differences in viewpoint and appearance between the two types of images. To overcome these challenges, we have developed a novel attention-based method that leverages a key localization cue. The cross-attention-based Swap Encoder Module (SEM) is proposed, which effectively aligns features by directing the network’s focus towards relevant information. Additionally, we employ an Image Proposal Network (IPN) to ensure consistent inputs of both aerial and ground-view images that correspond, during both training and validation phases. Experimental results show that our proposed network significantly outperforms existing benchmarking CVUSA dataset, with significant improvements for top-1 recall from 61.4% to 71.45%, and for top-10 from 90.49% to 92.30%.
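The abstract does not detail the internals of the Swap Encoder Module, but its core mechanism is cross-attention between the two views' feature tokens. The sketch below is a minimal, generic illustration of that idea — scaled dot-product cross-attention in which each ground-view token attends over aerial-view tokens — not the paper's actual SEM; all function and variable names (`cross_attention`, `ground`, `aerial`) are hypothetical, and the token counts and feature dimension are toy values.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, context_feats):
    """Scaled dot-product cross-attention: each query token (one view)
    attends over the other view's tokens, producing features aligned
    to the context view."""
    d = query_feats.shape[-1]
    scores = query_feats @ context_feats.T / np.sqrt(d)  # (Nq, Nc) similarities
    weights = softmax(scores, axis=-1)                   # each row sums to 1
    return weights @ context_feats                       # (Nq, d) aligned features

# toy example: 4 ground-view tokens, 6 aerial-view tokens, feature dim 8
rng = np.random.default_rng(0)
ground = rng.standard_normal((4, 8))
aerial = rng.standard_normal((6, 8))
aligned = cross_attention(ground, aerial)  # ground tokens re-expressed via aerial features
```

In a full model each view would first pass through learned query/key/value projections; this sketch omits them to show only how the attention weights redirect one view's focus toward relevant tokens of the other.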