{"title":"Vision Based Leader-Follower Control of Wheeled Mobile Robots using Reinforcement Learning and Deep Learning","authors":"Kayleb Garmon, Y. Wang","doi":"10.1109/ISSSR58837.2023.00071","DOIUrl":null,"url":null,"abstract":"Vision-based control of mobile robots often involves complex calculations to derive a control law. The reinforcement learning algorithm (Q-learning) offers a machine learning method to extrapolate a control law from an environment given discretized actions, without the need of complex calculations. In this paper, a vision-based controller is created using Q-Learning to enable tracking in a leader-follower configuration of two nonholonomic autonomous mobile robots. The follower robot gathers its desired trajectory values by using a deep learning SSD model to identify a distinguishing visual feature on the leader robot and uses a lidar to determine the distance between two robots. These parameters are utilized to select an optimal action of the follower robot through reinforcement learning. The emulated results in a ROS Gazebo environment show this method to be effective in enabling a wheeled mobile robot to follow another, while simultaneously avoiding obstacles.","PeriodicalId":185173,"journal":{"name":"2023 9th International Symposium on System Security, Safety, and Reliability (ISSSR)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 9th International Symposium on System Security, Safety, and Reliability (ISSSR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISSSR58837.2023.00071","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Vision-based control of mobile robots often involves complex calculations to derive a control law. Reinforcement learning, specifically Q-learning, offers a machine learning method to learn a control law from an environment given a discretized action space, without the need for complex calculations. In this paper, a vision-based controller is created using Q-learning to enable tracking in a leader-follower configuration of two nonholonomic autonomous mobile robots. The follower robot obtains its desired trajectory values by using a deep learning SSD model to identify a distinguishing visual feature on the leader robot, and uses a lidar to measure the distance between the two robots. These measurements are used by the reinforcement learning policy to select an optimal action for the follower robot. Simulation results in a ROS Gazebo environment show this method to be effective in enabling one wheeled mobile robot to follow another while simultaneously avoiding obstacles.
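To make the described control scheme concrete, the sketch below shows a minimal tabular Q-learning loop under stated assumptions: the state is a discretized (bearing, distance) pair, with the bearing taken from the SSD detection's horizontal offset and the distance from the lidar, and the actions are a small set of discrete velocity commands. The bin counts, reward shaping, and hyperparameters here are illustrative assumptions, not the values used in the paper.

```python
# Minimal sketch of a tabular Q-learning follower controller (assumptions noted above).
# State: discretized (bearing, distance); Actions: discrete (linear, angular) velocity pairs.
import random
from collections import defaultdict

ACTIONS = [
    (0.2, 0.0),    # drive forward
    (0.1, 0.5),    # curve left
    (0.1, -0.5),   # curve right
    (0.0, 0.0),    # stop
]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # illustrative hyperparameters
TARGET_DISTANCE = 0.8                   # assumed desired following distance (m)

q_table = defaultdict(lambda: [0.0] * len(ACTIONS))


def discretize(bearing, distance, bearing_bins=5, distance_bins=5):
    """Map continuous (bearing in [-1, 1], distance in [0, 3] m) to a discrete state."""
    b = min(bearing_bins - 1, max(0, int((bearing + 1.0) / 2.0 * bearing_bins)))
    d = min(distance_bins - 1, max(0, int(distance / 3.0 * distance_bins)))
    return (b, d)


def choose_action(state):
    """Epsilon-greedy selection over the discrete action set."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    values = q_table[state]
    return values.index(max(values))


def reward(bearing, distance):
    """Penalize bearing error and deviation from the target following distance."""
    return -abs(bearing) - abs(distance - TARGET_DISTANCE)


def q_update(state, action, r, next_state):
    """Standard one-step Q-learning update of the action-value table."""
    best_next = max(q_table[next_state])
    q_table[state][action] += ALPHA * (r + GAMMA * best_next - q_table[state][action])
```

In a ROS setup, the bearing and distance would be produced by the SSD detection and lidar callbacks each control cycle, and the chosen action would be published as a velocity command; the table update above is the standard Q-learning rule and is independent of that plumbing.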