{"title":"基于实用视觉的有腿机器人蒙特卡罗定位","authors":"M. Sridharan, Gregory Kuhlmann, P. Stone","doi":"10.1109/ROBOT.2005.1570630","DOIUrl":null,"url":null,"abstract":"Mobile robot localization, the ability of a robot to determine its global position and orientation, continues to be a major research focus in robotics. In most past cases, such localization has been studied on wheeled robots with range finding sensors such as sonar or lasers. In this paper, we consider the more challenging scenario of a legged robot localizing with a limited field-of-view camera as its primary sensory input. We begin with a baseline implementation adapted from the literature that provides a reasonable level of competence, but that exhibits some weaknesses in real-world tests. We propose a series of practical enhancements designed to improve the robot’s sensory and actuator models that enable our robots to achieve a 50% improvement in localization accuracy over the baseline implementation. We go on to demonstrate how the accuracy improvement is even more dramatic when the robot is subjected to large unmodeled movements. These enhancements are each individually straightforward, but together they provide a roadmap for avoiding potential pitfalls when implementing Monte Carlo Localization on vision-based and/or legged robots.","PeriodicalId":350878,"journal":{"name":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"77","resultStr":"{\"title\":\"Practical Vision-Based Monte Carlo Localization on a Legged Robot\",\"authors\":\"M. Sridharan, Gregory Kuhlmann, P. 
Stone\",\"doi\":\"10.1109/ROBOT.2005.1570630\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Mobile robot localization, the ability of a robot to determine its global position and orientation, continues to be a major research focus in robotics. In most past cases, such localization has been studied on wheeled robots with range finding sensors such as sonar or lasers. In this paper, we consider the more challenging scenario of a legged robot localizing with a limited field-of-view camera as its primary sensory input. We begin with a baseline implementation adapted from the literature that provides a reasonable level of competence, but that exhibits some weaknesses in real-world tests. We propose a series of practical enhancements designed to improve the robot’s sensory and actuator models that enable our robots to achieve a 50% improvement in localization accuracy over the baseline implementation. We go on to demonstrate how the accuracy improvement is even more dramatic when the robot is subjected to large unmodeled movements. 
These enhancements are each individually straightforward, but together they provide a roadmap for avoiding potential pitfalls when implementing Monte Carlo Localization on vision-based and/or legged robots.\",\"PeriodicalId\":350878,\"journal\":{\"name\":\"Proceedings of the 2005 IEEE International Conference on Robotics and Automation\",\"volume\":\"38 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2005-04-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"77\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2005 IEEE International Conference on Robotics and Automation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ROBOT.2005.1570630\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2005 IEEE International Conference on Robotics and Automation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ROBOT.2005.1570630","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Practical Vision-Based Monte Carlo Localization on a Legged Robot
Mobile robot localization, the ability of a robot to determine its global position and orientation, continues to be a major research focus in robotics. In most past cases, such localization has been studied on wheeled robots with range-finding sensors such as sonar or lasers. In this paper, we consider the more challenging scenario of a legged robot localizing with a limited field-of-view camera as its primary sensory input. We begin with a baseline implementation adapted from the literature that provides a reasonable level of competence, but that exhibits some weaknesses in real-world tests. We propose a series of practical enhancements to the robot's sensor and actuator models that enable our robots to achieve a 50% improvement in localization accuracy over the baseline implementation. We go on to demonstrate that the accuracy improvement is even more dramatic when the robot is subjected to large unmodeled movements. These enhancements are each individually straightforward, but together they provide a roadmap for avoiding potential pitfalls when implementing Monte Carlo Localization on vision-based and/or legged robots.
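The Monte Carlo Localization the abstract refers to is a particle filter over pose hypotheses: each update cycle moves every particle by the odometry estimate plus noise, reweights particles by how well a landmark observation matches each hypothesized pose, and resamples. The sketch below is a minimal, generic illustration of that cycle, not the paper's implementation; the distance-only sensor model, global-frame odometry, and noise parameters are simplifying assumptions for clarity.

```python
import math
import random

def mcl_step(particles, odometry, observed_dist, landmark,
             motion_noise=0.05, sensor_sigma=0.2):
    """One Monte Carlo Localization update (illustrative sketch).

    particles:     list of (x, y, theta) pose hypotheses
    odometry:      (dx, dy, dtheta) estimated motion (global frame, assumed)
    observed_dist: measured distance to a known landmark (assumed sensor model)
    landmark:      (lx, ly) known landmark position
    """
    # 1. Motion update: displace each particle by the odometry estimate
    #    plus Gaussian noise, reflecting actuator-model uncertainty.
    dx, dy, dth = odometry
    moved = [(x + dx + random.gauss(0, motion_noise),
              y + dy + random.gauss(0, motion_noise),
              th + dth + random.gauss(0, motion_noise))
             for (x, y, th) in particles]

    # 2. Sensor update: weight each particle by the Gaussian likelihood of
    #    the observed landmark distance given the particle's pose.
    lx, ly = landmark
    weights = []
    for (x, y, th) in moved:
        predicted = math.hypot(lx - x, ly - y)
        err = predicted - observed_dist
        weights.append(math.exp(-err * err / (2 * sensor_sigma ** 2)))

    # 3. Resampling: draw a new particle set in proportion to the weights.
    if sum(weights) == 0:        # no particle explains the observation
        return moved             # keep the moved set rather than divide by zero
    return random.choices(moved, weights=weights, k=len(moved))
```

In use, the particle set is initialized uniformly over the field and `mcl_step` is called once per frame; the pose estimate is then the (weighted) mean of the particles. The paper's enhancements would correspond to refining the motion-noise and sensor-likelihood models used in steps 1 and 2.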