{"title":"Visual Simultaneous Localization and Mapping (vSLAM) of Driverless Car in GPS-Denied Areas","authors":"Abira Kanwal, Zunaira Anjum, Wasif Muhammad","doi":"10.3390/engproc2021012049","DOIUrl":null,"url":null,"abstract":"A simultaneous localization and mapping (SLAM) algorithm allows a mobile robot or a driverless car to determine its location in an unknown and dynamic environment where it is placed, and simultaneously allows it to build a consistent map of that environment. Driverless cars are becoming an emerging reality from science fiction, but there is still too much required for the development of technological breakthroughs for their control, guidance, safety, and health related issues. One existing problem which is required to be addressed is SLAM of driverless car in GPS denied-areas, i.e., congested urban areas with large buildings where GPS signals are weak as a result of congested infrastructure. Due to poor reception of GPS signals in these areas, there is an immense need to localize and route driverless car using onboard sensory modalities, e.g., LIDAR, RADAR, etc., without being dependent on GPS information for its navigation and control. The driverless car SLAM using LIDAR and RADAR involves costly sensors, which appears to be a limitation of this approach. To overcome these limitations, in this article we propose a visual information-based SLAM (vSLAM) algorithm for GPS-denied areas using a cheap video camera. As a front-end process, features-based monocular visual odometry (VO) on grayscale input image frames is performed. Random Sample Consensus (RANSAC) refinement and global pose estimation is performed as a back-end process. The results obtained from the proposed approach demonstrate 95% accuracy with a maximum mean error of 4.98.","PeriodicalId":11748,"journal":{"name":"Engineering Proceedings","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Proceedings","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/engproc2021012049","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
A simultaneous localization and mapping (SLAM) algorithm allows a mobile robot or a driverless car to determine its location within an unknown, dynamic environment while simultaneously building a consistent map of that environment. Driverless cars are moving from science fiction toward reality, but substantial technological breakthroughs are still required in their control, guidance, safety, and health-related aspects. One outstanding problem is SLAM for driverless cars in GPS-denied areas, i.e., congested urban areas with large buildings where GPS signals are weak because of the dense infrastructure. Owing to poor GPS reception in such areas, there is a pressing need to localize and route a driverless car using onboard sensing modalities, e.g., LIDAR, RADAR, etc., without depending on GPS information for navigation and control. However, driverless-car SLAM based on LIDAR and RADAR requires costly sensors, which limits this approach. To overcome these limitations, in this article we propose a visual-information-based SLAM (vSLAM) algorithm for GPS-denied areas that uses an inexpensive video camera. As a front-end process, feature-based monocular visual odometry (VO) is performed on grayscale input image frames. Random Sample Consensus (RANSAC) refinement and global pose estimation are performed as a back-end process. The results obtained with the proposed approach demonstrate 95% accuracy with a maximum mean error of 4.98.
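For illustration only, the sketch below shows how a feature-based monocular VO front end with RANSAC-refined pose estimation, as described in the abstract, can be assembled from standard OpenCV building blocks. It is not the authors' implementation: the intrinsic matrix values, the choice of ORB features, and the helper names (relative_pose, accumulate) are assumptions made for this example.

```python
# Minimal monocular visual-odometry sketch (illustrative, not the paper's code).
# Assumes OpenCV (cv2), NumPy, and a known pinhole intrinsic matrix K.
import cv2
import numpy as np

# Assumed camera intrinsics (KITTI-like values, for illustration only).
K = np.array([[718.856, 0.0, 607.193],
              [0.0, 718.856, 185.216],
              [0.0, 0.0, 1.0]])

def relative_pose(prev_gray, curr_gray):
    """Estimate relative camera rotation R and unit-scale translation t
    between two consecutive grayscale frames."""
    # Front end: detect and describe features in both frames.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Back-end refinement: RANSAC rejects outlier matches while fitting
    # the essential matrix, then the relative pose is recovered from it.
    E, mask = cv2.findEssentialMat(pts2, pts1, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts2, pts1, K, mask=mask)
    return R, t

def accumulate(poses, R, t, scale=1.0):
    """Chain a new relative motion onto the last global pose to extend
    the estimated trajectory (monocular scale must be supplied externally)."""
    R_w, t_w = poses[-1]
    return poses + [(R_w @ R, t_w + scale * (R_w @ t))]

# Hypothetical usage: start at the origin and fold in each frame pair.
poses = [(np.eye(3), np.zeros((3, 1)))]
```

The sketch follows the two-stage split named in the abstract: feature detection and matching form the front end, while RANSAC-based essential-matrix estimation and the chaining of relative motions into a global trajectory stand in for the back-end refinement and global pose estimation.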