Visual Simultaneous Localization and Mapping (vSLAM) of Driverless Car in GPS-Denied Areas

Abira Kanwal, Zunaira Anjum, Wasif Muhammad
{"title":"Visual Simultaneous Localization and Mapping (vSLAM) of Driverless Car in GPS-Denied Areas","authors":"Abira Kanwal, Zunaira Anjum, Wasif Muhammad","doi":"10.3390/engproc2021012049","DOIUrl":null,"url":null,"abstract":"A simultaneous localization and mapping (SLAM) algorithm allows a mobile robot or a driverless car to determine its location in an unknown and dynamic environment where it is placed, and simultaneously allows it to build a consistent map of that environment. Driverless cars are becoming an emerging reality from science fiction, but there is still too much required for the development of technological breakthroughs for their control, guidance, safety, and health related issues. One existing problem which is required to be addressed is SLAM of driverless car in GPS denied-areas, i.e., congested urban areas with large buildings where GPS signals are weak as a result of congested infrastructure. Due to poor reception of GPS signals in these areas, there is an immense need to localize and route driverless car using onboard sensory modalities, e.g., LIDAR, RADAR, etc., without being dependent on GPS information for its navigation and control. The driverless car SLAM using LIDAR and RADAR involves costly sensors, which appears to be a limitation of this approach. To overcome these limitations, in this article we propose a visual information-based SLAM (vSLAM) algorithm for GPS-denied areas using a cheap video camera. As a front-end process, features-based monocular visual odometry (VO) on grayscale input image frames is performed. Random Sample Consensus (RANSAC) refinement and global pose estimation is performed as a back-end process. The results obtained from the proposed approach demonstrate 95% accuracy with a maximum mean error of 4.98.","PeriodicalId":11748,"journal":{"name":"Engineering Proceedings","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Proceedings","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/engproc2021012049","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

A simultaneous localization and mapping (SLAM) algorithm allows a mobile robot or a driverless car to determine its location in an unknown, dynamic environment while simultaneously building a consistent map of that environment. Driverless cars are moving from science fiction toward reality, but substantial technological breakthroughs are still required for their control, guidance, safety, and health-related issues. One open problem is SLAM for a driverless car in GPS-denied areas, i.e., congested urban areas with large buildings where GPS signals are weak because of the dense infrastructure. Owing to poor GPS reception in such areas, the driverless car must be localized and routed using onboard sensory modalities, e.g., LIDAR, RADAR, etc., without depending on GPS information for navigation and control. Driverless-car SLAM using LIDAR and RADAR relies on costly sensors, which is a limitation of that approach. To overcome these limitations, in this article we propose a visual-information-based SLAM (vSLAM) algorithm for GPS-denied areas using an inexpensive video camera. As a front-end process, feature-based monocular visual odometry (VO) is performed on grayscale input image frames. Random Sample Consensus (RANSAC) refinement and global pose estimation are performed as a back-end process. The results obtained from the proposed approach demonstrate 95% accuracy with a maximum mean error of 4.98.
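To make the described pipeline concrete, below is a minimal sketch (not the authors' implementation) of a feature-based monocular VO front end with RANSAC outlier rejection and a simple pose-chaining back end, written with OpenCV in Python. The ORB detector, the pinhole intrinsics matrix K, and the RANSAC threshold are illustrative assumptions; the paper does not specify these parameters here.

```python
import cv2
import numpy as np

# Assumed pinhole camera intrinsics (placeholder values, not from the paper).
K = np.array([[718.8560, 0.0, 607.1928],
              [0.0, 718.8560, 185.2157],
              [0.0, 0.0, 1.0]])

orb = cv2.ORB_create(nfeatures=2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def relative_pose(prev_gray, curr_gray):
    """Front end: feature-based VO between two consecutive grayscale frames."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC rejects outlier correspondences while fitting the essential matrix.
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # relative rotation and unit-scale translation between frames

def accumulate(relative_poses):
    """Back end (sketch): chain relative poses into a global camera trajectory."""
    R_w, t_w = np.eye(3), np.zeros((3, 1))
    trajectory = [t_w.copy()]
    for R, t in relative_poses:
        t_w = t_w + R_w @ t   # monocular scale is ambiguous; unit scale assumed here
        R_w = R_w @ R
        trajectory.append(t_w.copy())
    return trajectory
```

In practice the global pose estimation step would also resolve the monocular scale ambiguity and refine the chained poses; the sketch above only shows the per-frame feature matching, RANSAC-based essential-matrix estimation, and pose accumulation.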