City-level aerial geo-localization based on map matching network

IF 12.2 | CAS Zone 1 (Earth Science) | JCR Q1 (Geography, Physical)
Yong Tang, Jingyi Zhang, Jianhua Gong, Yi Li, Banghui Yang
{"title":"City-level aerial geo-localization based on map matching network","authors":"Yong Tang ,&nbsp;Jingyi Zhang ,&nbsp;Jianhua Gong ,&nbsp;Yi Li ,&nbsp;Banghui Yang","doi":"10.1016/j.isprsjprs.2025.08.002","DOIUrl":null,"url":null,"abstract":"<div><div>Autonomous localization of aircraft relies on precise geo-localization, and under Global Navigation Satellite System (GNSS)-denied conditions, visual localization methods are among the most important techniques for aircraft autonomous localization. Global visual localization typically relies on pre-established 3D maps, which require significant storage and computational overhead, limiting the applicability of aerial visual localization. Therefore, we propose a visual localization method based on OpenStreetMap, an openly accessible 2D map. This method not only enables localization in the absence of GNSS but also has lower storage and computational requirements compared to 3D map-based visual methods. This makes our approach feasible for visual geo-localization at the urban scale. We designed a neural network model based on the Vision Transformer (ViT) to extract features from aerial images and 2D maps for fast matching and retrieval, thereby estimating the global geo-location of the aerial images. Additionally, we employ particle filtering to fuse location estimates from map retrieval, visual odometry, and GNSS, achieving higher-precision real-time localization. Moreover, we collected aerial images and map tiles covering over 1000 square kilometers from the urban and suburban areas of four large cities, creating a novel aerial image-to-map matching dataset. Experiments show that, compared to the current state-of-the-art methods, our map retrieval network achieves a higher average recall rate on the dataset. In GNSS-denied conditions, our multi-source fusion localization method can achieve real-time global geo-localization at the urban scale, and under weak GNSS signals, our method provides significantly higher localization accuracy than GNSS alone.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"229 ","pages":"Pages 65-77"},"PeriodicalIF":12.2000,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ISPRS Journal of Photogrammetry and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0924271625003144","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"GEOGRAPHY, PHYSICAL","Score":null,"Total":0}
引用次数: 0

Abstract

Autonomous localization of aircraft relies on precise geo-localization, and under Global Navigation Satellite System (GNSS)-denied conditions, visual methods are among the most important techniques for autonomous aircraft localization. Global visual localization typically relies on pre-built 3D maps, which impose significant storage and computational overhead and limit the applicability of aerial visual localization. We therefore propose a visual localization method based on OpenStreetMap, an openly accessible 2D map. The method not only enables localization in the absence of GNSS but also has lower storage and computational requirements than 3D-map-based visual methods, making it feasible for visual geo-localization at the urban scale. We designed a neural network model based on the Vision Transformer (ViT) that extracts features from aerial images and 2D maps for fast matching and retrieval, thereby estimating the global geo-location of the aerial images. Additionally, we employ particle filtering to fuse location estimates from map retrieval, visual odometry, and GNSS, achieving higher-precision real-time localization. We also collected aerial images and map tiles covering over 1000 square kilometers of the urban and suburban areas of four large cities, creating a novel aerial image-to-map matching dataset. Experiments show that our map retrieval network achieves a higher average recall rate on this dataset than current state-of-the-art methods. Under GNSS-denied conditions, our multi-source fusion localization method achieves real-time global geo-localization at the urban scale, and under weak GNSS signals it provides significantly higher localization accuracy than GNSS alone.
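The abstract describes a ViT-based network that embeds aerial images and 2D map tiles into a common space for fast matching and retrieval. Below is a minimal sketch of such a cross-modal retrieval setup, assuming a two-branch design with one ViT encoder per modality and cosine-similarity ranking; the abstract does not specify these architectural details, and all class, variable, and parameter names here are illustrative, not from the paper.

```python
# Hedged sketch of cross-modal aerial-image <-> map-tile retrieval.
# Assumptions (not confirmed by the paper): two separate ViT-B/16
# backbones, a shared 256-d embedding space, cosine-similarity ranking.
import torch
import torch.nn.functional as F
from torchvision.models import vit_b_16

class CrossModalRetrievalNet(torch.nn.Module):
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        # One encoder per modality (aerial photo vs. rendered map tile);
        # weights=None keeps the sketch self-contained (no download).
        self.aerial_encoder = vit_b_16(weights=None)
        self.map_encoder = vit_b_16(weights=None)
        # Replace the classification heads with projection heads that
        # map both modalities into the shared embedding space.
        self.aerial_encoder.heads = torch.nn.Linear(768, embed_dim)
        self.map_encoder.heads = torch.nn.Linear(768, embed_dim)

    def embed_aerial(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.aerial_encoder(x), dim=-1)

    def embed_map(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.map_encoder(x), dim=-1)

# Retrieval: rank pre-computed map-tile embeddings against a query
# aerial image; the best tile's known geo-location is the estimate.
net = CrossModalRetrievalNet().eval()
with torch.no_grad():
    query = net.embed_aerial(torch.randn(1, 3, 224, 224))
    tiles = net.embed_map(torch.randn(8, 3, 224, 224))  # stand-in database
    scores = query @ tiles.T                             # cosine similarities
    best = scores.argmax(dim=-1)
```

Pre-computing the map-tile embeddings offline is what makes this cheaper than 3D-map approaches: at flight time only the query image is encoded, and retrieval reduces to a nearest-neighbor search over vectors.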
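The abstract also mentions particle filtering to fuse position estimates from map retrieval, visual odometry, and GNSS. The sketch below shows the generic predict-update-resample loop of such a filter under assumed Gaussian noise models; it is not the paper's formulation, and every noise parameter below is a placeholder.

```python
# Hedged sketch of multi-source position fusion with a particle filter.
# Assumptions: 2D state, Gaussian motion/measurement noise, systematic
# resampling. Parameter values are placeholders, not from the paper.
import numpy as np

class FusionParticleFilter:
    """Odometry drives prediction; absolute fixes (map retrieval or
    GNSS) reweight the particles."""

    def __init__(self, n=1000, init_sigma=50.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n = n
        self.particles = self.rng.normal(0.0, init_sigma, (n, 2))  # metres
        self.weights = np.full(n, 1.0 / n)

    def predict(self, odom_delta, motion_sigma=2.0):
        # Propagate each particle by the visual-odometry displacement
        # plus Gaussian motion noise.
        noise = self.rng.normal(0.0, motion_sigma, (self.n, 2))
        self.particles += odom_delta + noise

    def update(self, fix_xy, meas_sigma=15.0):
        # Reweight by a Gaussian likelihood around an absolute fix.
        d2 = np.sum((self.particles - fix_xy) ** 2, axis=1)
        self.weights *= np.exp(-0.5 * d2 / meas_sigma ** 2)
        self.weights /= self.weights.sum()

    def estimate(self):
        # Resample to combat weight degeneracy, then average.
        idx = self.rng.choice(self.n, self.n, p=self.weights)
        self.particles = self.particles[idx]
        self.weights = np.full(self.n, 1.0 / self.n)
        return self.particles.mean(axis=0)

pf = FusionParticleFilter()
pf.predict(np.array([1.5, 0.3]))   # motion step from visual odometry
pf.update(np.array([2.0, 0.0]))    # fix from map retrieval (or GNSS)
print(pf.estimate())               # fused position estimate
```

Because each source enters through its own step (odometry in `predict`, retrieval and GNSS fixes in `update`), any of them can drop out, as under GNSS denial, and the filter degrades gracefully rather than failing outright.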
Source journal
ISPRS Journal of Photogrammetry and Remote Sensing (Engineering & Technology: Imaging Science & Photographic Technology)
CiteScore: 21.00
Self-citation rate: 6.30%
Annual articles: 273
Review time: 40 days
Journal description: The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) is the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It serves as a platform for scientists and professionals worldwide working in disciplines that use photogrammetry, remote sensing, spatial information systems, computer vision, and related fields, facilitating communication and dissemination of advances while acting as a comprehensive source of reference and archive.

P&RS publishes high-quality, peer-reviewed research papers, preferably original and not previously published, covering scientific/research, technological-development, or application/practical aspects. The journal also welcomes papers based on presentations from ISPRS meetings, provided they are significant contributions to the above fields. In particular, P&RS encourages submissions that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new scientific or professional directions. Theoretical papers should preferably include practical applications, while papers on systems and applications should include a theoretical background.