Generation of Emergent Navigation Behavior in Autonomous Agents Using Artificial Vision

Lilian de Oliveira Carneiro, J. B. C. Neto, C. Vidal, Y. L. Nogueira, Arnaldo B. Vila Nova

2014 XVI Symposium on Virtual and Augmented Reality, 2014-06-15
DOI: 10.1109/SVR.2014.19
Citations: 0
Abstract
In this work, we address the movement dynamics of autonomous agents that are able to move through an environment using their own vision. To this end, we apply a Continuous-Time Recurrent Neural Network and the genetic encoding proposed in [1] [2]. However, we use a new sensory description, consisting of images captured by a virtual camera, thereby evolving an artificial visual cortex. The experiments show that the agents are able to navigate the environment and find the exit, in a non-programmed way, using only the visual data passed to the neural network. The approach is flexible enough to be applied in various environments, without exhibiting the forced tendencies that explicit behavioral modeling can impose in other techniques.
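To illustrate the kind of controller the abstract describes, the following is a minimal sketch of the standard continuous-time recurrent neural network (CTRNN) update, integrated with the Euler method. This follows the common Beer-style formulation; the specific parameters, network size, and input encoding here are illustrative assumptions, not the paper's actual configuration (in the paper, the external inputs would come from the virtual camera's pixel data).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ctrnn_step(y, weights, tau, theta, inputs, dt=0.01):
    """One Euler step of a CTRNN.

    Implements tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + theta_j) + I_i,
    where y is the neuron state, tau the time constants, theta the biases,
    and inputs the external drive (e.g. visual features in the paper's setup).
    """
    activations = sigmoid(y + theta)          # firing rates of all neurons
    dy = (-y + weights.T @ activations + inputs) / tau
    return y + dt * dy

# Toy usage: a 3-neuron network with random weights (hypothetical values)
rng = np.random.default_rng(0)
n = 3
y = np.zeros(n)                               # initial neuron states
weights = rng.normal(size=(n, n))             # w[j, i]: connection j -> i
tau = np.ones(n)                              # time constants
theta = np.zeros(n)                           # biases
inputs = np.array([0.5, 0.0, 0.0])            # stand-in for sensory input

for _ in range(100):                          # integrate forward in time
    y = ctrnn_step(y, weights, tau, theta, inputs)
```

In an evolutionary setup like the one referenced, the genetic encoding would determine `weights`, `tau`, and `theta`, and a fitness function would score the resulting navigation behavior.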