Highly Accurate Positioning Method for Car-Like Robots Utilizing a Monocular Camera and QR Code Tracking
Christoph Rohmann, Jens Lenkowski, Harald Bachem, Bernd Lichte
2022 IEEE International Symposium on Robotic and Sensors Environments (ROSE)
Published: 2022-11-14
DOI: 10.1109/ROSE56499.2022.9977425 (https://doi.org/10.1109/ROSE56499.2022.9977425)
Abstract
Mobile robots often serve as a means of transporting goods. In most cases, this task requires high positioning accuracy to allow a smooth transfer of goods from the transfer station to the robot and vice versa. This is especially difficult for car-like robots, as they lack a degree of freedom that, for example, omni-directional robots possess. In this paper, we propose a highly accurate positioning method for such car-like robots that consists of three consecutive modes. In the free navigation mode, the robot drives towards a predefined target position until it detects a transfer station. Using a single monocular camera and a trained convolutional neural network, our system detects the transfer station and approaches it automatically until it reaches a predefined distance, which marks the end of the vague positioning mode. The camera then detects and tracks a QR code attached to the station, which we use to estimate the robot's position relative to the code, thus initiating the accurate positioning mode. In this mode, our system uses a third-order polynomial path-planning approach that achieves an average positioning accuracy of 11 mm longitudinally and 8 mm laterally, with an angular offset of 0.7°, in front of the code. We show the applicability of this functionality through a set of experimental validation tests that mimic real-world use cases.
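The abstract does not spell out how the third-order polynomial path is constructed. The sketch below shows one common way such a planner can be set up: a cubic lateral profile y(x) constrained by the robot's estimated offset and heading at the start and by a zero-offset, zero-heading arrival directly in front of the QR code. The boundary conditions, variable names, and the example distance are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def cubic_path(y0, theta0, L, n=50):
    """Sketch of a third-order polynomial lateral path y(x).

    Assumed boundary conditions (not from the paper):
      y(0) = y0, y'(0) = tan(theta0)   -- robot's current lateral offset and heading
      y(L) = 0,  y'(L) = 0             -- aligned arrival in front of the QR code
    """
    # Solve for the cubic coefficients [a3, a2, a1, a0] from the four constraints.
    A = np.array([
        [0.0,      0.0,    0.0, 1.0],   # y(0)
        [0.0,      0.0,    1.0, 0.0],   # y'(0)
        [L**3,     L**2,   L,   1.0],   # y(L)
        [3 * L**2, 2 * L,  1.0, 0.0],   # y'(L)
    ])
    b = np.array([y0, np.tan(theta0), 0.0, 0.0])
    a3, a2, a1, a0 = np.linalg.solve(A, b)

    # Sample the path for the controller to follow.
    xs = np.linspace(0.0, L, n)
    ys = a3 * xs**3 + a2 * xs**2 + a1 * xs + a0
    return xs, ys

# Hypothetical example: robot 0.15 m off to the side, heading 5° off,
# 1.2 m from the target pose in front of the code.
xs, ys = cubic_path(y0=0.15, theta0=np.radians(5.0), L=1.2)
```

A cubic is the lowest-order polynomial that can satisfy position and heading constraints at both ends, which is presumably why a third-order profile suffices for the final docking maneuver.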