A photogrammetric approach for real‐time visual SLAM applied to an omnidirectional system

Thaisa Aline Correia Garcia, Antonio Maria Garcia Tommaselli, Letícia Ferrari Castanheiro, Mariana Batista Campos
{"title":"A photogrammetric approach for real‐time visual SLAM applied to an omnidirectional system","authors":"Thaisa Aline Correia Garcia, Antonio Maria Garcia Tommaselli, Letícia Ferrari Castanheiro, Mariana Batista Campos","doi":"10.1111/phor.12494","DOIUrl":null,"url":null,"abstract":"The problem of sequential estimation of the exterior orientation of imaging sensors and the three‐dimensional environment reconstruction in real time is commonly known as visual simultaneous localisation and mapping (vSLAM). Omnidirectional optical sensors have been increasingly used in vSLAM solutions, mainly for providing a wider view of the scene, allowing the extraction of more features. However, dealing with unmodelled points in the hyperhemispherical field poses challenges, mainly due to the complex lens geometry entailed in the image formation process. To address these challenges, the use of rigorous photogrammetric models that appropriately handle the geometry of fisheye lens cameras can overcome these challenges. Thus, this study presents a real‐time vSLAM approach for omnidirectional systems adapting ORB‐SLAM with a rigorous projection model (equisolid‐angle). The implementation was conducted on the Nvidia Jetson TX2 board, and the approach was evaluated using hyperhemispherical images captured by a dual‐fisheye camera (Ricoh Theta S) embedded into a mobile backpack platform. The trajectory covered a distance of 140 m, with the approach demonstrating accuracy better than 0.12 m at the beginning and achieving metre‐level accuracy at the end of the trajectory. Additionally, we compared the performance of our proposed approach with a generic model for fisheye lens cameras.","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Photogrammetric Record","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1111/phor.12494","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The problem of sequentially estimating the exterior orientation of imaging sensors while reconstructing the three-dimensional environment in real time is commonly known as visual simultaneous localisation and mapping (vSLAM). Omnidirectional optical sensors have been increasingly used in vSLAM solutions, mainly because they provide a wider view of the scene and thus allow the extraction of more features. However, dealing with unmodelled points in the hyperhemispherical field of view is challenging, mainly because of the complex lens geometry involved in the image formation process. Rigorous photogrammetric models that appropriately handle the geometry of fisheye lens cameras can address these challenges. This study therefore presents a real-time vSLAM approach for omnidirectional systems that adapts ORB-SLAM to a rigorous projection model (equisolid-angle). The implementation was conducted on the Nvidia Jetson TX2 board, and the approach was evaluated using hyperhemispherical images captured by a dual-fisheye camera (Ricoh Theta S) embedded in a mobile backpack platform. The trajectory covered a distance of 140 m; the approach achieved an accuracy better than 0.12 m at the beginning of the trajectory and metre-level accuracy at its end. Additionally, we compared the performance of the proposed approach with that of a generic model for fisheye lens cameras.
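The equisolid-angle model named in the abstract relates the incidence angle θ between an incoming ray and the optical axis to the radial distance r on the image plane by r = 2f sin(θ/2). A minimal Python sketch of this forward projection is given below; the focal length and principal point values are illustrative assumptions, not the calibration parameters used in the paper, and lens distortion terms are omitted.

```python
import numpy as np

def equisolid_project(point_cam, f, cx=0.0, cy=0.0):
    """Project a 3D point (X, Y, Z) in camera coordinates, Z along the
    optical axis, to image coordinates with the equisolid-angle model
    r = 2 f sin(theta / 2)."""
    X, Y, Z = point_cam
    theta = np.arctan2(np.hypot(X, Y), Z)   # incidence angle from the optical axis
    phi = np.arctan2(Y, X)                  # azimuth of the ray about the axis
    r = 2.0 * f * np.sin(theta / 2.0)       # equisolid-angle radial distance
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

# Illustrative example: a ray 30 degrees off-axis with an assumed f = 1.3 mm
# (hypothetical value; the Ricoh Theta S calibration is not reproduced here).
u, v = equisolid_project((0.5, 0.0, np.sqrt(3.0) / 2.0), f=1.3)
print(u, v)  # u is about 0.673 mm, v is 0.0
```

Unlike the perspective model, whose radial distance diverges as θ approaches 90°, r here remains finite well past 90°, which is what makes this projection usable for the hyperhemispherical field of view discussed in the paper.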