Navigation By Learning

A. Zelinsky
{"title":"学习导航","authors":"A. Zelinsky","doi":"10.1109/IROS.1989.637926","DOIUrl":null,"url":null,"abstract":"Research into mobile robot navigation has concentrated on the problem of finding optimal paths through known environments. For a mobile robot to operate in a partially known or a completely unknown environment the robot must be able to learn the cnvironmcnt. In this paper a mobile robot cnvironment learning algorithm is presented. The robot learns the environment by sensing, while it is executing paths produced by a path planner. A s obstacles are encountered en route to a goal, the model of the environment is updated and a new path to the goal is planned to avoid the obstacles. The paths generated by the algorithm will initially be locally optimal, but as the knowledge of the environment structure increases the optimality of the solution paths will improve. for the optimisation of distance to the goal combined will obstacle clearance information. The algorithm uses uses a quadtree representation to model environments. Initially the robot treats the unknown world as free space, until it learns information to the contrary. The path planner uses the distance transform methodology to generate all the solution paths from the goal back to the start location. This method not only yields a solution path to the goal, but it can also detect whether a goal is reachable. Once a path has been planned to the goal, the robot can execute it in a \"safe\" or \"optimal\" way. This paper will present the results of a computer simulation of the learning algorithm. Introduction The algorithm allows A great deal of the research concerning the problem of finding optimal paths for robot manipulators and autonomous mobile robots has concentrated on situations where the environment in which the robot operates in, is completely known and supplied -0 the path planner partially known or completely unknown environment, these path planning techniques are not directly applicable or extendable. Difficulties arise in deciding how to treat unexplored regions as obstacles, and only proceed into the unexplored regions if the goal lies there [ 41 [ 61 . 11 131 . For a robot navigating in a Research efforts into environment learning with mobile robots have followed a variety of approaches I41 111 . These can be broadly classified into two groupings; adaptive models and rigid models. Adaptive models use the nature and clutter of the environment to drive a representation of the environment. Typically adaptive modeis represent the environment as a network of free space regions [51 [ 7 ] 191 or as a graph of obstacle vertices [4]. environment learning methods require accurate sensor information. Rigid models impose a structure, such as a grid onto the environment without any regard to the nature and clutter of the environment [ 61 81 [ 111 [ 1 2 1 . Adaptive model environment learning methods offer elegant solutions, but in practise are difficult to implement on a real robot. The converse can be said of rigid model environment learning methods. Environment learning by a mobile robot csn be Adaptive model accomplished into one of two ways; either by operating in'mapping' or 'learning' modes. When a robot is operating in 'mapping' mode [ 81 [ 91, it traverses the entire environment in 2 systematic manner, scanning with onboard sensors and updating a map. The map i s then used for a11 subsequent path planning cxcrciscs. 
Difficultics n r i s c if thc cnvironment is updated and a new path to the goal is planned to avoid the obstacles 101. The paths generated by this system will initially be locally optimal, but as knowledge of environment increases the optimality of the paths generated, improves until eventually global optimality is attained. Environment learning methods of this type have difficulty in detecting that a goal is not reachable since they use heuristics when planning locally optimal paths. restrictions that the robot can recognise line of sight distances to obstacles and can detect obstacle. edges without imprecision, makes it difficult to implement this learning algorithm with current sensor technology. Since sensing line of sight to obstacles cannot be guaranteed an arrangement of obstacles in an environment can always be found where a heuristic path planner will fail [ 131. The The Jarvis e t a l . [ 121 approach to environment learning covers the enviroqment with a grid, and generates all the paths from the goal location back to the start locatio.. using distance transforms. To find the path of minimum length from the start to the goal, the grid neighbours of the start location are examined to find the grid with the lowest distance transform. This grid cell represents the start of a chain of cells of least distance to the goal. This method uses a rigid environment model and uses a 'learning' mode to map the environment. This method can also deduce that goals are not reachable. However, this method does have drawbacks, namely the inefficiencies of a grid representation, the costly computation of the distance transform and the ambiguity of optimal minimum length paths. A good compromise between adaptive free space models and rigid grid models are hierarchical planning methods. One such approach has been to use q-adtrees 131, this approach assumes that map of the environment is supplied to the path planner. Quadtrees have the advantage of having a rigid structure, but are adaptive to the clutter of the environment. This paper will present an environment learning algorithm using quadtrees to model the environment and a modification of the distance transform to generate paths of navigation for the robot. Most patn pianners approximate the robot to De a cylinder, then shrink the robot to point and expand all the objects in the environment by the robots radius. This strategy is useful in known environments, but in unexplored environments this represents unnecessary extra processin?, and important information is discarded. Namely the volumein which the robot is lOCatPd in and the volumes of the paths swepr; by the robot through the environment are definitely free space in a static environment and are probably free space in a dynamic environment.","PeriodicalId":332317,"journal":{"name":"Proceedings. IEEE/RSJ International Workshop on Intelligent Robots and Systems '. (IROS '89) 'The Autonomous Mobile Robots and Its Applications","volume":"59 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1989-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Navigation By Learning\",\"authors\":\"A. Zelinsky\",\"doi\":\"10.1109/IROS.1989.637926\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Research into mobile robot navigation has concentrated on the problem of finding optimal paths through known environments. 
For a mobile robot to operate in a partially known or a completely unknown environment the robot must be able to learn the cnvironmcnt. In this paper a mobile robot cnvironment learning algorithm is presented. The robot learns the environment by sensing, while it is executing paths produced by a path planner. A s obstacles are encountered en route to a goal, the model of the environment is updated and a new path to the goal is planned to avoid the obstacles. The paths generated by the algorithm will initially be locally optimal, but as the knowledge of the environment structure increases the optimality of the solution paths will improve. for the optimisation of distance to the goal combined will obstacle clearance information. The algorithm uses uses a quadtree representation to model environments. Initially the robot treats the unknown world as free space, until it learns information to the contrary. The path planner uses the distance transform methodology to generate all the solution paths from the goal back to the start location. This method not only yields a solution path to the goal, but it can also detect whether a goal is reachable. Once a path has been planned to the goal, the robot can execute it in a \\\"safe\\\" or \\\"optimal\\\" way. This paper will present the results of a computer simulation of the learning algorithm. Introduction The algorithm allows A great deal of the research concerning the problem of finding optimal paths for robot manipulators and autonomous mobile robots has concentrated on situations where the environment in which the robot operates in, is completely known and supplied -0 the path planner partially known or completely unknown environment, these path planning techniques are not directly applicable or extendable. Difficulties arise in deciding how to treat unexplored regions as obstacles, and only proceed into the unexplored regions if the goal lies there [ 41 [ 61 . 11 131 . For a robot navigating in a Research efforts into environment learning with mobile robots have followed a variety of approaches I41 111 . These can be broadly classified into two groupings; adaptive models and rigid models. Adaptive models use the nature and clutter of the environment to drive a representation of the environment. Typically adaptive modeis represent the environment as a network of free space regions [51 [ 7 ] 191 or as a graph of obstacle vertices [4]. environment learning methods require accurate sensor information. Rigid models impose a structure, such as a grid onto the environment without any regard to the nature and clutter of the environment [ 61 81 [ 111 [ 1 2 1 . Adaptive model environment learning methods offer elegant solutions, but in practise are difficult to implement on a real robot. The converse can be said of rigid model environment learning methods. Environment learning by a mobile robot csn be Adaptive model accomplished into one of two ways; either by operating in'mapping' or 'learning' modes. When a robot is operating in 'mapping' mode [ 81 [ 91, it traverses the entire environment in 2 systematic manner, scanning with onboard sensors and updating a map. The map i s then used for a11 subsequent path planning cxcrciscs. Difficultics n r i s c if thc cnvironment is updated and a new path to the goal is planned to avoid the obstacles 101. The paths generated by this system will initially be locally optimal, but as knowledge of environment increases the optimality of the paths generated, improves until eventually global optimality is attained. 
Environment learning methods of this type have difficulty in detecting that a goal is not reachable since they use heuristics when planning locally optimal paths. restrictions that the robot can recognise line of sight distances to obstacles and can detect obstacle. edges without imprecision, makes it difficult to implement this learning algorithm with current sensor technology. Since sensing line of sight to obstacles cannot be guaranteed an arrangement of obstacles in an environment can always be found where a heuristic path planner will fail [ 131. The The Jarvis e t a l . [ 121 approach to environment learning covers the enviroqment with a grid, and generates all the paths from the goal location back to the start locatio.. using distance transforms. To find the path of minimum length from the start to the goal, the grid neighbours of the start location are examined to find the grid with the lowest distance transform. This grid cell represents the start of a chain of cells of least distance to the goal. This method uses a rigid environment model and uses a 'learning' mode to map the environment. This method can also deduce that goals are not reachable. However, this method does have drawbacks, namely the inefficiencies of a grid representation, the costly computation of the distance transform and the ambiguity of optimal minimum length paths. A good compromise between adaptive free space models and rigid grid models are hierarchical planning methods. One such approach has been to use q-adtrees 131, this approach assumes that map of the environment is supplied to the path planner. Quadtrees have the advantage of having a rigid structure, but are adaptive to the clutter of the environment. This paper will present an environment learning algorithm using quadtrees to model the environment and a modification of the distance transform to generate paths of navigation for the robot. Most patn pianners approximate the robot to De a cylinder, then shrink the robot to point and expand all the objects in the environment by the robots radius. This strategy is useful in known environments, but in unexplored environments this represents unnecessary extra processin?, and important information is discarded. Namely the volumein which the robot is lOCatPd in and the volumes of the paths swepr; by the robot through the environment are definitely free space in a static environment and are probably free space in a dynamic environment.\",\"PeriodicalId\":332317,\"journal\":{\"name\":\"Proceedings. IEEE/RSJ International Workshop on Intelligent Robots and Systems '. (IROS '89) 'The Autonomous Mobile Robots and Its Applications\",\"volume\":\"59 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1989-09-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings. IEEE/RSJ International Workshop on Intelligent Robots and Systems '. (IROS '89) 'The Autonomous Mobile Robots and Its Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IROS.1989.637926\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. 
IEEE/RSJ International Workshop on Intelligent Robots and Systems '. (IROS '89) 'The Autonomous Mobile Robots and Its Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IROS.1989.637926","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

Research into mobile robot navigation has concentrated on the problem of finding optimal paths through known environments. For a mobile robot to operate in a partially known or a completely unknown environment, the robot must be able to learn the environment. In this paper a mobile robot environment learning algorithm is presented. The robot learns the environment by sensing while it is executing paths produced by a path planner. As obstacles are encountered en route to a goal, the model of the environment is updated and a new path to the goal is planned to avoid the obstacles. The paths generated by the algorithm will initially be locally optimal, but as the knowledge of the environment structure increases, the optimality of the solution paths will improve. The algorithm allows for the optimisation of distance to the goal combined with obstacle clearance information. The algorithm uses a quadtree representation to model environments. Initially the robot treats the unknown world as free space, until it learns information to the contrary. The path planner uses the distance transform methodology to generate all the solution paths from the goal back to the start location. This method not only yields a solution path to the goal, but it can also detect whether a goal is reachable. Once a path has been planned to the goal, the robot can execute it in a "safe" or an "optimal" way. This paper will present the results of a computer simulation of the learning algorithm.
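To make the loop described in the abstract concrete, the sketch below shows one way the sense, update and replan cycle could look. It is only an illustration of the idea, not the paper's implementation: the paper models the environment with a quadtree and combines goal distance with obstacle clearance, whereas this sketch uses a uniform occupancy grid and pure step distance, and the function names (`distance_transform`, `extract_path`, `navigate_and_learn`) are hypothetical.

```python
# Illustrative sketch only: uniform grid stand-in for the paper's quadtree model.
from collections import deque

FREE, OBSTACLE = 0, 1
UNREACHABLE = float("inf")


def distance_transform(grid, goal):
    """Breadth-first wavefront from the goal: each free cell receives its
    4-connected step distance to the goal; blocked or cut-off cells stay inf."""
    rows, cols = len(grid), len(grid[0])
    dist = [[UNREACHABLE] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == FREE and dist[nr][nc] == UNREACHABLE):
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist


def extract_path(dist, start):
    """Follow strictly decreasing distance values from the start to the goal.
    Returns None when the start cell has infinite distance (goal unreachable)."""
    if dist[start[0]][start[1]] == UNREACHABLE:
        return None
    path, (r, c) = [start], start
    while dist[r][c] > 0:
        # A BFS distance field guarantees some 4-neighbour has distance - 1.
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(dist) and 0 <= nc < len(dist[0])
                    and dist[nr][nc] == dist[r][c] - 1):
                r, c = nr, nc
                break
        path.append((r, c))
    return path


def navigate_and_learn(true_world, start, goal, sense_range=1):
    """Plan on an optimistic model (everything free), execute the path,
    and replan whenever a newly sensed obstacle blocks the way."""
    rows, cols = len(true_world), len(true_world[0])
    model = [[FREE] * cols for _ in range(rows)]   # unknown space assumed free
    robot = start
    while robot != goal:
        r, c = robot                               # sense the cells around the robot
        for nr in range(max(0, r - sense_range), min(rows, r + sense_range + 1)):
            for nc in range(max(0, c - sense_range), min(cols, c + sense_range + 1)):
                model[nr][nc] = true_world[nr][nc]
        path = extract_path(distance_transform(model, goal), robot)
        if path is None:
            return None                            # goal unreachable in current model
        for cell in path[1:]:                      # walk the path until blocked
            if true_world[cell[0]][cell[1]] == OBSTACLE:
                break                              # obstacle encountered: replan
            robot = cell
    return robot


# Tiny demo: a 5x5 world with a wall the robot only discovers while moving.
world = [[FREE] * 5 for _ in range(5)]
for row in range(4):
    world[row][2] = OBSTACLE                       # wall with a gap at the bottom
print(navigate_and_learn(world, start=(0, 0), goal=(0, 4)))   # reaches (0, 4)
```

The wavefront gives every free cell its distance to the goal in one sweep, so replanning after each newly sensed obstacle is just another breadth-first pass, and an infinite distance at the robot's cell signals that the goal is unreachable in the current model, mirroring the reachability test mentioned in the abstract.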
Introduction

A great deal of the research concerning the problem of finding optimal paths for robot manipulators and autonomous mobile robots has concentrated on situations where the environment in which the robot operates is completely known and supplied to the path planner. For a robot navigating in a partially known or completely unknown environment, these path planning techniques are not directly applicable or extendable. Difficulties arise in deciding whether to treat unexplored regions as obstacles and to proceed into the unexplored regions only if the goal lies there [4] [6] [11] [13].

Research efforts into environment learning with mobile robots have followed a variety of approaches [4] [11]. These can be broadly classified into two groupings: adaptive models and rigid models. Adaptive models use the nature and clutter of the environment to drive a representation of the environment. Typically, adaptive models represent the environment as a network of free space regions [5] [7] [9] or as a graph of obstacle vertices [4]. Adaptive model environment learning methods require accurate sensor information. Rigid models impose a structure, such as a grid, onto the environment without any regard to the nature and clutter of the environment [6] [8] [11] [12]. Adaptive model environment learning methods offer elegant solutions, but in practice are difficult to implement on a real robot. The converse can be said of rigid model environment learning methods.

Environment learning by a mobile robot can be accomplished in one of two ways: by operating in either 'mapping' or 'learning' mode. When a robot is operating in 'mapping' mode [8] [9], it traverses the entire environment in a systematic manner, scanning with onboard sensors and updating a map. The map is then used for all subsequent path planning exercises. Difficulties arise if the environment changes. When a robot operates in 'learning' mode [10], the model of the environment is updated as obstacles are encountered en route to a goal, and a new path to the goal is planned to avoid the obstacles. The paths generated by this system will initially be locally optimal, but as knowledge of the environment increases, the optimality of the paths generated improves until eventually global optimality is attained. Environment learning methods of this type have difficulty in detecting that a goal is not reachable, since they use heuristics when planning locally optimal paths. The restrictions that the robot must be able to recognise line-of-sight distances to obstacles and detect obstacle edges without imprecision make it difficult to implement this learning algorithm with current sensor technology. Since sensing line of sight to obstacles cannot be guaranteed, an arrangement of obstacles in an environment can always be found where a heuristic path planner will fail [13].

The Jarvis et al. [12] approach to environment learning covers the environment with a grid, and generates all the paths from the goal location back to the start location using distance transforms. To find the path of minimum length from the start to the goal, the grid neighbours of the start location are examined to find the cell with the lowest distance transform value. This cell represents the start of a chain of cells of least distance to the goal. This method uses a rigid environment model and a 'learning' mode to map the environment. This method can also deduce that goals are not reachable. However, it does have drawbacks, namely the inefficiency of a grid representation, the costly computation of the distance transform and the ambiguity of optimal minimum-length paths.

Hierarchical planning methods are a good compromise between adaptive free space models and rigid grid models. One such approach has been to use quadtrees [3]; this approach assumes that a map of the environment is supplied to the path planner. Quadtrees have the advantage of a rigid structure, yet are adaptive to the clutter of the environment. This paper will present an environment learning algorithm that uses quadtrees to model the environment and a modification of the distance transform to generate paths of navigation for the robot.

Most path planners approximate the robot as a cylinder, then shrink the robot to a point and expand all the objects in the environment by the robot's radius. This strategy is useful in known environments, but in unexplored environments it represents unnecessary extra processing, and important information is discarded: the volume in which the robot is located and the volumes of the paths swept by the robot through the environment are definitely free space in a static environment and probably free space in a dynamic environment.
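The quadtree argument above can be illustrated with a minimal region quadtree: the whole map starts as a single FREE node (matching the optimistic treatment of unexplored space in the abstract) and nodes are subdivided only where obstacles are sensed, so the representation stays coarse in open areas and refines around clutter. This is a sketch under those assumptions, not the paper's data structure; the class and method names are invented for illustration.

```python
# Illustrative region quadtree: the square world starts as one FREE leaf and is
# subdivided lazily as obstacle cells are inserted. Not the paper's implementation.
FREE, OBSTACLE, MIXED = "free", "obstacle", "mixed"


class QuadNode:
    def __init__(self, x, y, size, state=FREE):
        self.x, self.y, self.size = x, y, size      # lower-left corner and side length
        self.state = state
        self.children = None                        # four sub-quadrants when MIXED

    def insert_obstacle(self, px, py):
        """Mark the unit cell (px, py) as an obstacle, subdividing as needed."""
        if self.state == OBSTACLE:
            return                                  # already known to be blocked
        if self.size == 1:
            self.state = OBSTACLE
            return
        if self.children is None:                   # split a uniform node lazily
            half = self.size // 2
            self.children = [QuadNode(self.x + dx, self.y + dy, half, self.state)
                             for dx in (0, half) for dy in (0, half)]
            self.state = MIXED
        for child in self.children:
            if (child.x <= px < child.x + child.size
                    and child.y <= py < child.y + child.size):
                child.insert_obstacle(px, py)
        # Merge back to a single OBSTACLE leaf if all four quadrants became blocked.
        if all(c.state == OBSTACLE for c in self.children):
            self.state, self.children = OBSTACLE, None

    def is_free(self, px, py):
        """True if the unit cell (px, py) is currently modelled as free space."""
        if self.children is None:
            return self.state == FREE
        for child in self.children:
            if (child.x <= px < child.x + child.size
                    and child.y <= py < child.y + child.size):
                return child.is_free(px, py)


# Example: a 16x16 world believed to be empty until two obstacle cells are sensed.
world = QuadNode(0, 0, 16)
world.insert_obstacle(3, 5)
world.insert_obstacle(3, 6)
print(world.is_free(3, 5), world.is_free(10, 10))   # False True
```

A planner working on such a tree can treat each large FREE leaf as a single traversable region, which is where the efficiency gain over a uniform grid comes from while the underlying structure remains rigid.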