Autonomous learning of active multi-scale binocular vision
L. Lonini, Yu Zhao, Pramod Chandrashekhariah, Bertram E. Shi, J. Triesch
2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL), 2013-11-04
DOI: 10.1109/DEVLRN.2013.6652541 (https://doi.org/10.1109/DEVLRN.2013.6652541)
Citations: 26
Abstract
We present a method for autonomously learning representations of the visual disparity between the left-eye and right-eye images, as well as appropriate vergence movements to fixate objects with both eyes. A sparse coding model (perception) encodes sensory information using binocular basis functions, while a reinforcement learner (behavior) generates eye movements according to the sensed disparity. Perception and behavior develop in parallel by minimizing the same cost function: the reconstruction error of the stimulus under the generative model. To cope efficiently with multiple disparity ranges, sparse coding models are learned at multiple scales, encoding disparities at various resolutions. Similarly, vergence commands are defined on a logarithmic scale to allow both coarse and fine actions. We demonstrate the efficacy of the proposed method on the humanoid robot iCub and show that the model is fully self-calibrating, requiring no prior information about the camera parameters or the system dynamics.
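To make the shared-cost idea concrete, here is a minimal toy sketch (not the authors' implementation): a sparse-coding dictionary and a simple bandit-style vergence policy are both updated from the same reconstruction error, with the error acting as negative reward. The patch model, dimensions, learning rates, and the logarithmically spaced action set are illustrative assumptions, and the paper's multi-scale structure is omitted for brevity.

```python
# Toy sketch of perception (sparse coding) and behavior (vergence control)
# sharing one cost: the reconstruction error of the binocular stimulus.
# All names, sizes, and constants here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

PATCH_DIM = 32          # length of a flattened binocular patch (left + right half)
N_BASES = 48            # number of binocular basis functions
ACTIONS = [-8, -4, -2, -1, 0, 1, 2, 4, 8]   # coarse-to-fine vergence steps (log-like scale)

D = rng.normal(size=(PATCH_DIM, N_BASES))
D /= np.linalg.norm(D, axis=0)              # unit-norm dictionary atoms
Q = np.zeros(len(ACTIONS))                  # action values for a toy bandit learner

def sense(vergence_error):
    """Toy binocular patch: the right half is the left half shifted by the residual disparity."""
    left = rng.normal(size=PATCH_DIM // 2)
    right = np.roll(left, vergence_error)
    return np.concatenate([left, right])

def sparse_code(x, k=5):
    """Greedy k-sparse coding (matching-pursuit style) against dictionary D."""
    a = np.zeros(N_BASES)
    r = x.copy()
    for _ in range(k):
        j = np.argmax(np.abs(D.T @ r))      # best-matching basis function
        a[j] += D[:, j] @ r
        r = x - D @ a                       # residual = reconstruction error signal
    return a, r

vergence_error = 6
for t in range(2000):
    x = sense(vergence_error)
    a, r = sparse_code(x)

    # Perception: nudge the dictionary to reduce the reconstruction error.
    D += 0.01 * np.outer(r, a)
    D /= np.linalg.norm(D, axis=0)

    # Behavior: epsilon-greedy vergence command; the reward is the negative
    # reconstruction error after the movement, so both parts minimize the same cost.
    act = rng.integers(len(ACTIONS)) if rng.random() < 0.1 else int(np.argmax(Q))
    vergence_error = int(np.clip(vergence_error - ACTIONS[act], -8, 8))
    _, r_next = sparse_code(sense(vergence_error))
    Q[act] += 0.05 * (-(r_next @ r_next) - Q[act])
```

In this sketch, well-encoded (small-residual) patches arise when the vergence command cancels the disparity, so actions that fixate the stimulus accumulate higher value while the dictionary simultaneously adapts to the resulting input statistics, mirroring the parallel development of perception and behavior described in the abstract.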