{"title":"Obstacle Avoidance for Omnidirectional Mobile Robot Using SLAM","authors":"V. Nandikolla, Bryan Ghoslin","doi":"10.1115/1.4055689","DOIUrl":null,"url":null,"abstract":"\n In the field of mobile robotics, Simultaneous Localization and Mapping (SLAM) is an algorithmic approach to the computational problem of creating and updating a map of an environment while simultaneously keeping track of where the robot is within the environment. Applications of a SLAM algorithm are important for autonomous mobile systems to traverse an environment while avoiding obstacles and accurately achieving designated goal destinations. This paper presents the design of a SLAM-driven controller for a semi-autonomous omnidirectional mobile robot. Input to the system comes from a Brain Computer Interface in the form of simple driving commands or a goal destination as decided by the user. Due to latency issues of reacting and responding in real time, the system must safely navigate following the last given commands until it runs out of free space, reaches a goal designation, or receives a new command. The robotic system utilizes a three-wheeled robot kit with an upgraded sensor system. The Intel RealSense Depth Camera D435 and two lidar sensors are utilized to construct a full 360° field of view. The SLAM algorithm and system controllers are developed using the Robot Operating System (ROS). The controllers are developed and tested within Gazebo, which is a physics simulation engine utilized for rapid prototyping. Testing was performed to validate controller performance when given varying commands as well as performing long distance path planning and obstacle avoidance. The system was often capable of achieving its goal destinations with a small error of around 3% or less though the error was found to increase with the more commands the system processed.","PeriodicalId":73734,"journal":{"name":"Journal of engineering and science in medical diagnostics and therapy","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of engineering and science in medical diagnostics and therapy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1115/1.4055689","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In the field of mobile robotics, Simultaneous Localization and Mapping (SLAM) is an algorithmic approach to the computational problem of building and updating a map of an environment while simultaneously keeping track of the robot's location within that environment. SLAM algorithms are important for autonomous mobile systems that must traverse an environment while avoiding obstacles and accurately reaching designated goal destinations. This paper presents the design of a SLAM-driven controller for a semi-autonomous omnidirectional mobile robot. Input to the system comes from a Brain-Computer Interface (BCI) in the form of simple driving commands or a goal destination chosen by the user. Because of latency in reacting and responding in real time, the system must navigate safely by following the last given command until it runs out of free space, reaches a goal destination, or receives a new command. The robotic system uses a three-wheeled robot kit with an upgraded sensor suite: an Intel RealSense Depth Camera D435 and two lidar sensors provide a full 360° field of view. The SLAM algorithm and system controllers are developed using the Robot Operating System (ROS). The controllers are developed and tested in Gazebo, a physics simulation engine used for rapid prototyping. Testing was performed to validate controller performance under varying commands as well as during long-distance path planning and obstacle avoidance. The system was typically able to reach its goal destinations with a small error of around 3% or less, though the error was found to increase as the system processed more commands.
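To make the fallback behavior concrete, the following is a minimal sketch (not the authors' implementation) of the decision logic implied by the abstract: keep following the last command until free space runs out, the goal destination is reached, or a new command arrives from the BCI. All function names, parameters, and thresholds here are illustrative assumptions.

```python
"""Minimal sketch of the command-following fallback described in the abstract.
All names and thresholds are illustrative assumptions, not the authors' API."""

SAFETY_MARGIN_M = 0.5   # assumed minimum clearance before the robot stops
GOAL_TOLERANCE_M = 0.1  # assumed distance at which the goal counts as reached


def navigation_step(last_command, new_command, min_obstacle_dist, dist_to_goal):
    """Decide what the robot should do on one control cycle.

    Returns the command to keep executing, or None if the robot should stop.
    """
    # A newly received command (e.g., from the BCI) overrides the previous one.
    command = new_command if new_command is not None else last_command
    if command is None:
        return None                       # nothing to follow yet
    if dist_to_goal <= GOAL_TOLERANCE_M:
        return None                       # goal destination reached
    if min_obstacle_dist <= SAFETY_MARGIN_M:
        return None                       # ran out of free space
    return command                        # keep following the last command


if __name__ == "__main__":
    # Still driving "forward": 2.0 m of clearance, 1.5 m to the goal.
    print(navigation_step("forward", None, 2.0, 1.5))    # -> "forward"
    # A new command arrives and overrides the old one.
    print(navigation_step("forward", "left", 2.0, 1.5))  # -> "left"
    # An obstacle falls inside the safety margin: the robot stops.
    print(navigation_step("left", None, 0.3, 1.5))       # -> None
```

In the actual system this decision would be driven by the 360° lidar/depth data and the SLAM pose estimate published through ROS; the sketch only illustrates the stopping conditions named in the abstract.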