Title: Robot Visual Navigation in Semi-structured Outdoor Environments
Authors: D. Mateus, J. Aviña-Cervantes, M. Devy
DOI: 10.1109/ROBOT.2005.1570844
Published in: Proceedings of the 2005 IEEE International Conference on Robotics and Automation
Publication date: 2005-04-18
Citations: 32
Abstract
This work describes a navigation framework for robots in semi-structured outdoor environments that enables the planning of semantic tasks by chaining elementary vision-based movement primitives. Navigation is achieved by interpreting the scene behind the image and using the results to guide the robot's control. Because retrieving semantic information from vision is computationally demanding, short-term tasks are planned and executed while new visual information is being processed. Learning techniques adapt the methods to different environmental conditions, and fusion and filtering techniques provide reliability and stability to the system. The procedures have been fully integrated and tested with a real robot in an experimental environment, and the results are discussed.