Sub-goal based Robot Visual Navigation through Sensorial Space Tesselation
G. Palamas, J. Ware
International Journal of Advanced Research in Artificial Intelligence
DOI: 10.14569/IJARAI.2013.021106
Citations: 5
Abstract
In this paper, we propose an evolutionary cognitive architecture that enables a mobile robot to cope with the task of visual navigation. Initially, a graph-based world representation is used to build a map, prior to navigation, through an appearance-based scheme using only features associated with color information. In the next step, a genetic algorithm evolves a navigation controller that the robot uses for visual servoing, driving through a set of nodes on the topological map. Experiments in simulation show that an evolved robot, adapted to both exteroceptive and proprioceptive data, is able to drive successfully through a list of sub-goals, minimizing the problem of local minima in which the evolutionary process can sometimes become trapped. We also show that this approach allows a simple fitness formula that is nevertheless descriptive enough to target specific goals.

I. INTRODUCTION

With respect to vision-based robot navigation, most research work focuses on four major areas: map building and interpretation, self-localization, path planning, and obstacle avoidance. Of these, self-localization is of key importance: recognizing the initial position, the target position, and the current position occupied by the robot while it wanders are all bound to a self-localization process. The two main approaches to robot localization are landmark-based and appearance-based techniques. In this paper, we describe a combination of a developmental method for autonomous map building and an evolutionary strategy that verifies the results of the map interpretation in terms of navigation usability. Our strategy involves two discrete phases: a map-building phase and a navigation phase. In the first phase, an agent freely explores a pre-determined simulated terrain, collecting visual signatures corresponding to positions in the environment.
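The paper does not spell out the exact color descriptor behind these visual signatures, so the following is only a minimal sketch: it assumes a normalized per-channel RGB histogram as the signature and an L1 distance for comparing two views. The images and function names are illustrative, not taken from the paper.

```python
import numpy as np

def visual_signature(image, bins=16):
    """Reduce an RGB image to a compact color-histogram signature.

    Assumption: the appearance-based scheme is approximated here by a
    normalized per-channel histogram; the paper only states that color
    features are used, not the precise descriptor.
    """
    image = np.asarray(image, dtype=np.float64)
    parts = []
    for ch in range(3):  # R, G, B channels
        hist, _ = np.histogram(image[..., ch], bins=bins, range=(0.0, 255.0))
        parts.append(hist)
    sig = np.concatenate(parts).astype(np.float64)
    return sig / sig.sum()  # normalize so signatures are comparable

def signature_distance(a, b):
    """L1 distance between two signatures."""
    return np.abs(a - b).sum()

# Two synthetic 8x8 "views" of different places: one reddish, one bluish.
rng = np.random.default_rng(0)
red_view = np.stack([rng.uniform(200, 255, (8, 8)),
                     rng.uniform(0, 40, (8, 8)),
                     rng.uniform(0, 40, (8, 8))], axis=-1)
blue_view = np.stack([rng.uniform(0, 40, (8, 8)),
                      rng.uniform(0, 40, (8, 8)),
                      rng.uniform(200, 255, (8, 8))], axis=-1)

d_same = signature_distance(visual_signature(red_view), visual_signature(red_view))
d_diff = signature_distance(visual_signature(red_view), visual_signature(blue_view))
```

A signature like this lets the exploring agent decide whether the current view matches a known place (small distance) or warrants a new node in the map (large distance).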
After the exploration, a self-organizing algorithm builds a graph representation of the environment, with nodes corresponding to known places and edges to known pathways. During the second phase, a population of robot controllers is evolved to evaluate the usability of the map. Robots evolve to navigate autonomously from an initial position to a goal position. To facilitate successful traversal, a shortest path algorithm is employed to extract the best path for the robot to follow. This algorithm also reveals the intermediate positions that the robot needs to traverse in order to reach the goal position; these intermediate positions also act as sub-goals for the evolutionary process.

II. SENSING THE ENVIRONMENT

To be fully autonomous, a robot must rely on its own perceptions to localize. Perception of the world generates representational concepts, topological or geometrical, within a mental framework relating new concepts to pre-existing ones (3). The space of possible perceptions available to the robot for carrying out this task may be divided into two categories. Internal perception (proprioception) is the perception of the robot's own interactions with the world, associated with changes in primitive actuator behavior such as motor states. External or sensory perception (exteroception) is the sensing of things in the outside world; a robot's exteroceptors include sensors such as proximity detectors and video cameras. Our system uses only visual information for map building and navigation.
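The sub-goal extraction described for the navigation phase above can be sketched in code. The paper does not name the shortest-path algorithm it applies to the topological map, so Dijkstra's algorithm is assumed here; the node labels and edge weights are hypothetical placeholders for known places and travel costs.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a weighted adjacency-list graph.

    Assumption: the paper's shortest-path step is approximated with
    Dijkstra. Returns the node sequence from start to goal; the
    intermediate nodes serve as sub-goals for the evolved controller.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Reconstruct the path by walking predecessors back from the goal.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Hypothetical topological map: nodes are known places, edges are known
# pathways with approximate travel costs.
topo_map = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("A", 1.0), ("C", 1.0), ("D", 5.0)],
    "C": [("A", 4.0), ("B", 1.0), ("D", 1.0)],
    "D": [("B", 5.0), ("C", 1.0)],
}
path = shortest_path(topo_map, "A", "D")
sub_goals = path[1:-1]  # intermediate nodes the robot must traverse
```

Here the path from "A" to "D" passes through "B" and "C", so those two nodes would be handed to the evolutionary process as sub-goals, letting a simple fitness measure (e.g. sub-goals reached) guide the controller without falling into local minima.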