Attention-Driven Object Encoding and Multiscale Contextual Perception for Improved Cross-View Object Geo-Localization

Haoshuai Song;Xiaochong Tong;Xiaoyu Zhang;Yaxian Lei;He Li;Congzhou Guo
IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5
DOI: 10.1109/LGRS.2025.3560258
Published: 2025-04-14
Available at: https://ieeexplore.ieee.org/document/10964230/
Citations: 0

Abstract

Cross-view object geo-localization (CVOGL) is essential for applications like navigation and intelligent city management. By identifying objects in street-view/drone-view and precisely locating them in satellite imagery, more accurate geo-localization can be achieved compared to retrieval-based methods. However, existing approaches fail to account for query object shape/size and significant scale variations in remote sensing images. To address these limitations, we propose an attention-driven multiscale perception network (AMPNet) for cross-view geo-localization. AMPNet employs an attention-driven object encoding (ADOE) based on segmentation, which provides prior information to enable learning more discriminative representations of the query object. In addition, AMPNet introduces a cross-view multiscale perception (CVMSP) module that captures multiscale contextual information using varying convolution kernels, and applies an MLP to enhance channel-wise feature interactions. Experimental results demonstrate that AMPNet outperforms state-of-the-art methods in both ground-to-satellite and drone-to-satellite object localization tasks on a challenging dataset.
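The abstract describes the CVMSP module only at a high level: parallel convolutions with varying kernel sizes capture multiscale context, and an MLP then mixes information across channels. The sketch below is a minimal NumPy illustration of that general pattern, not the authors' implementation; the kernel sizes (3/5/7), the depthwise convolution, the branch summation, the hidden width, and the random weights are all assumptions made for demonstration.

```python
import numpy as np

def depthwise_conv(x, k, seed=0):
    """Same-padding depthwise conv on x of shape (C, H, W) with an
    illustrative random k-by-k kernel per channel (hypothetical weights)."""
    C, H, W = x.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    # Sliding windows over the spatial axes: shape (C, H, W, k, k).
    win = np.lib.stride_tricks.sliding_window_view(xp, (k, k), axis=(1, 2))
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((C, 1, 1, k, k)) / (k * k)
    return (win * w).sum(axis=(-1, -2))

def cvmsp_sketch(x, hidden=16, seed=1):
    """Multiscale context via parallel depthwise convs (kernels 3, 5, 7),
    summed, followed by a two-layer channel MLP at each spatial position."""
    ctx = sum(depthwise_conv(x, k) for k in (3, 5, 7))
    C = x.shape[0]
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((C, hidden)) * 0.1
    w2 = rng.standard_normal((hidden, C)) * 0.1
    # Channel-wise MLP: move channels last, mix them, move them back.
    h = np.maximum(ctx.transpose(1, 2, 0) @ w1, 0.0)  # ReLU
    return (h @ w2).transpose(2, 0, 1)

feat = np.random.default_rng(2).standard_normal((8, 32, 32))
out = cvmsp_sketch(feat)
print(out.shape)  # (8, 32, 32) -- spatial and channel dims preserved
```

The spatial resolution is preserved by same-padding each branch, so the module can be dropped between feature-extraction stages without reshaping; summing the branches is one simple fusion choice, and the paper may fuse scales differently.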