Sub-goal based Robot Visual Navigation through Sensorial Space Tesselation

G. Palamas, J. Ware
{"title":"Sub-goal based Robot Visual Navigation through Sensorial Space Tesselation","authors":"G. Palamas, J. Ware","doi":"10.14569/IJARAI.2013.021106","DOIUrl":null,"url":null,"abstract":"In this paper, we propose an evolutionary cognitive architecture to enable a mobile robot to cope with the task of visual navigation. Initially a graph based world representation is used to build a map, prior to navigation, through an appearance based scheme using only features associated with color information. During the next step, a genetic algorithm evolves a navigation controller that the robot uses for visual servoing, driving through a set of nodes on the topological map. Experiments in simulation show that an evolved robot, adapted to both exteroceptive and proprioceptive data, is able to successfully drive through a list of sub-goals minimizing the problem of local minima in which evolutionary process can sometimes get trapped. We also show that this approach is more expressive for defining a simplistic fitness formula yet descriptive enough for targeting specific goals. With respect of vision based robot navigation, most research work is focused on four major areas: map building and interpretation; self-localization; path planning; and obstacle- avoidance. Of these four major research areas, self-localization is of key importance. The recognition of the initial position, the target position, and the current position occupied by the robot while wandering around are all bound to a self-localization process. The main two approaches used for robot localization are landmark based and appearance based techniques. In this paper, we describe a combination of a developmental method for autonomous map building and an evolutionary strategy to verify the results of the map interpretation in terms of navigation usability. Our strategy involves two discrete phases: map building and navigation phase. In the first phase an agent freely explores a pre-determined simulated terrain, collecting visual signatures corresponding to positions in the environment. After the exploration, a self-organizing algorithm builds a graph representation of the environment with nodes corresponding to known places and edges to known pathways. During the second phase, a population of robot controllers is evolved to evaluate map usability. Robots evolve to autonomously navigate from an initial position to a goal position. In order to facilitate successful translation, a shortest path algorithm is employed to extract the best path for the robot to follow. This algorithm also reveals all those intermediate positions that the robot needs to traverse in order to reach the goal position. These intermediate positions act also as sub- goals for the evolution process. II. SENSING THE ENVIRONMENT To be fully autonomous, a robot must rely on its own perceptions to localize. Perception of the world generates representation concepts, topological or geometrical, within a mental framework relating new concepts to pre-existing ones (3). The space of possible perceptions available to the robot for carrying out this task may be divided into two categories: Internal perception (proprioception) or perceptions of its own interactions with the world, associate changes of primitive actuator behavior like motor states; external or sensory perception (exteroception) is sensing things of the outside world. A robot's exteroceptors include all kinds of sensors such as proximity detectors and video cameras. 
Our system uses only visual information for map building and navigation.","PeriodicalId":323606,"journal":{"name":"International Journal of Advanced Research in Artificial Intelligence","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Advanced Research in Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14569/IJARAI.2013.021106","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

In this paper, we propose an evolutionary cognitive architecture that enables a mobile robot to cope with the task of visual navigation. Initially, prior to navigation, a graph-based world representation is used to build a map through an appearance-based scheme that uses only features associated with color information. In the next step, a genetic algorithm evolves a navigation controller that the robot uses for visual servoing, driving through a set of nodes on the topological map. Experiments in simulation show that an evolved robot, adapted to both exteroceptive and proprioceptive data, is able to successfully drive through a list of sub-goals, minimizing the problem of local minima in which the evolutionary process can sometimes become trapped. We also show that this approach lends itself to a simple fitness formula that is nevertheless descriptive enough to target specific goals.

With respect to vision-based robot navigation, most research work is focused on four major areas: map building and interpretation; self-localization; path planning; and obstacle avoidance. Of these four, self-localization is of key importance: recognizing the initial position, the target position, and the position currently occupied by the robot while wandering around are all bound to a self-localization process. The two main approaches used for robot localization are landmark-based and appearance-based techniques.

In this paper, we describe the combination of a developmental method for autonomous map building with an evolutionary strategy that verifies the results of the map interpretation in terms of navigation usability. Our strategy involves two discrete phases: a map-building phase and a navigation phase. In the first phase, an agent freely explores a pre-determined simulated terrain, collecting visual signatures corresponding to positions in the environment. After the exploration, a self-organizing algorithm builds a graph representation of the environment, with nodes corresponding to known places and edges to known pathways. In the second phase, a population of robot controllers is evolved to evaluate map usability: robots evolve to navigate autonomously from an initial position to a goal position. To facilitate successful traversal, a shortest-path algorithm is employed to extract the best path for the robot to follow. This algorithm also reveals all the intermediate positions that the robot needs to traverse in order to reach the goal position; these intermediate positions also act as sub-goals for the evolutionary process.

II. SENSING THE ENVIRONMENT

To be fully autonomous, a robot must rely on its own perceptions to localize. Perception of the world generates representation concepts, topological or geometrical, within a mental framework that relates new concepts to pre-existing ones (3). The space of possible perceptions available to the robot for carrying out this task may be divided into two categories: internal perception (proprioception), the perception of the robot's own interactions with the world, associated with changes in primitive actuator behavior such as motor states; and external or sensory perception (exteroception), the sensing of things in the outside world. A robot's exteroceptors include all kinds of sensors, such as proximity detectors and video cameras. Our system uses only visual information for map building and navigation.
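The two map-related steps summarized above, turning color-based visual signatures into a graph of place nodes and extracting the chain of sub-goals with a shortest-path algorithm, can be illustrated with a minimal Python sketch. It assumes color-histogram signatures, an L1 novelty threshold, and unit edge costs; the paper's actual self-organizing algorithm and parameters are not reproduced here, so the helper names and thresholds below are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): place nodes are created
# from color-histogram signatures whenever the current view differs enough
# from every stored node, edges link consecutively visited places, and
# Dijkstra's algorithm extracts the chain of sub-goals between two nodes.
import heapq
import numpy as np

def color_signature(image_rgb, bins=8):
    """Coarse RGB histogram used as an appearance-based visual signature."""
    hist, _ = np.histogramdd(
        image_rgb.reshape(-1, 3), bins=(bins, bins, bins), range=[(0, 256)] * 3
    )
    hist = hist.flatten()
    return hist / hist.sum()

class TopologicalMap:
    def __init__(self, tau=0.25):
        self.tau = tau            # novelty threshold (assumed value)
        self.signatures = []      # one signature per place node
        self.edges = {}           # node -> {neighbor: cost}

    def observe(self, signature, prev_node=None):
        """Match the current view against known places or create a new node."""
        dists = [np.abs(signature - s).sum() for s in self.signatures]
        if dists and min(dists) < self.tau:
            node = int(np.argmin(dists))
        else:
            node = len(self.signatures)
            self.signatures.append(signature)
            self.edges[node] = {}
        if prev_node is not None and prev_node != node:
            # consecutively visited places are connected by a traversable pathway
            self.edges[prev_node][node] = 1.0
            self.edges[node][prev_node] = 1.0
        return node

    def sub_goals(self, start, goal):
        """Dijkstra shortest path; the intermediate nodes become sub-goals."""
        frontier = [(0.0, start, [start])]
        visited = set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for nxt, w in self.edges[node].items():
                if nxt not in visited:
                    heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))
        return None
```

Similarly, the claim that sub-goals permit a simple yet descriptive fitness formula can be sketched with a toy scoring function that rewards sub-goals reached in order plus progress toward the next one. This is an assumed illustration, not the fitness formula used by the authors.

```python
import numpy as np

def sub_goal_fitness(visited_positions, sub_goal_positions, reach_radius=0.3):
    """Toy fitness: one point per sub-goal reached in order, plus partial
    credit for progress toward the next unreached sub-goal. Illustrative
    only; the radius and scoring are assumptions."""
    if not visited_positions:
        return 0.0
    next_idx, reward = 0, 0.0
    for pos in visited_positions:
        if next_idx >= len(sub_goal_positions):
            break
        target = np.asarray(sub_goal_positions[next_idx])
        if np.linalg.norm(np.asarray(pos) - target) < reach_radius:
            reward += 1.0          # sub-goal reached in sequence
            next_idx += 1
    if next_idx < len(sub_goal_positions):
        # partial credit for closing in on the next sub-goal
        last = np.asarray(visited_positions[-1])
        target = np.asarray(sub_goal_positions[next_idx])
        reward += 1.0 / (1.0 + np.linalg.norm(last - target))
    return reward
```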