Title: Real-Time Cross-View Image Matching and Camera Pose Determination for Unmanned Aerial Vehicles
Authors: Long Chen, Bo Wu, Ran Duan, Zeyu Chen
Journal: Photogrammetric Engineering & Remote Sensing
DOI: 10.14358/pers.23-00073r2 (https://doi.org/10.14358/pers.23-00073r2)
Publication date: 2024-06-01
Publication type: Journal Article
Citations: 0
Abstract
In global navigation satellite system (GNSS)-denied environments, vision-based methods are commonly used for the positioning and navigation of aerial robots. However, traditional methods often suffer from accumulated estimation errors over time, leading to trajectory drift and a lack of real-time performance, particularly in large-scale scenarios. This article presents novel approaches, including feature-based cross-view image matching and the integration of visual odometry and photogrammetric space resection, for camera pose determination in real time. Experimental evaluation with real UAV datasets demonstrated that the proposed method reliably matches features in cross-view images with large differences in spatial resolution, coverage, and perspective, achieving a root-mean-square error of 4.7 m in absolute position and 0.33° in rotation, and delivering real-time performance of 12 frames per second (FPS) when implemented on a lightweight edge device onboard the UAV. This approach offers potential for diverse intelligent UAV applications in GNSS-denied environments based on real-time feedback control.
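The abstract's pose pipeline combines visual odometry with photogrammetric space resection. As a rough, self-contained illustration of the resection step only (not the authors' implementation, whose details are not given here), the sketch below estimates a camera projection matrix from 3D-2D ground-control correspondences via the classic Direct Linear Transform, then verifies it by reprojection on synthetic data:

```python
import numpy as np

def space_resection_dlt(obj_pts, img_pts):
    """Estimate a 3x4 projection matrix P from >= 6 correspondences
    between 3D object points and 2D image points (DLT formulation
    of photogrammetric space resection)."""
    A = []
    for (X, Y, Z), (u, v) in zip(obj_pts, img_pts):
        # Each correspondence contributes two linear constraints on P.
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # Solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, obj_pts):
    """Project 3D points through P and dehomogenize to pixel coordinates."""
    X_h = np.hstack([obj_pts, np.ones((len(obj_pts), 1))])
    x = (P @ X_h.T).T
    return x[:, :2] / x[:, 2:3]

# Synthetic check with a hypothetical camera (intrinsics and pose made up).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # camera looking down the +Z axis
t = np.array([[0.1], [-0.2], [5.0]])   # 5 units in front of the scene
P_true = K @ np.hstack([R, t])

rng = np.random.default_rng(0)
obj_pts = rng.uniform(-1.0, 1.0, (8, 3))   # 8 noise-free control points
img_pts = project(P_true, obj_pts)

P_est = space_resection_dlt(obj_pts, img_pts)
err = np.abs(project(P_est, obj_pts) - img_pts).max()
print(f"max reprojection error: {err:.2e} px")
```

With noise-free correspondences the recovered matrix reprojects the control points essentially exactly; in practice one would add normalization of the input coordinates and a robust (e.g. RANSAC-based) outlier rejection stage, since real cross-view matches contain mismatches.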