{"title":"Effect of exploratory perturbation on the formation of kinematic synergies in Goal Babbling","authors":"Kenichi Narioka, R. F. Reinhart, Jochen J. Steil","doi":"10.1109/DEVLRN.2015.7346120","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346120","url":null,"abstract":"Motor synergies explain how humans deal with control of their highly-redundant motor apparatus. In this study, we analyze the formation of motor synergies in a computational model for online motor learning, which is named Goal Babbling. The recently proposed Goal Babbling scheme addresses the learning of inverse models for motor control by goal-directed exploration and solves the redundancy problem by preferring efficient motions in the learning process. We show experimental results that verify the formation of kinematic motor synergies in the beginning of Goal Babbling. We further extract characteristics of the formed synergies under various conditions of exploratory perturbations. It turns out that the learned synergies strongly depend on the exploratory perturbation in action space.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116077129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bootstrapping interactions with objects from raw sensorimotor data: A novelty search based approach","authors":"Carlos Maestre, Antoine Cully, Christophe Gonzales, S. Doncieux","doi":"10.1109/DEVLRN.2015.7346098","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346098","url":null,"abstract":"Determining in advance all objects that a robot will interact with in an open environment is very challenging, if not impossible. It makes difficult the development of models that will allow to perceive and recognize objects, to interact with them and to predict how these objects will react to interactions with other objects or with the robot. Developmental robotics proposes to make robots learn by themselves such models through a dedicated exploration step. It raises a chicken-and-egg problem: the robot needs to learn about objects to discover how to interact with them and, to this end, it needs to interact with them. In this work, we propose Novelty-driven Evolutionary Babbling (NovEB), an approach enabling to bootstrap this process and to acquire knowledge about objects in the surrounding environment without requiring to include a priori knowledge about the environment, including objects, or about the means to interact with them. Our approach consists in using an evolutionary algorithm driven by a novelty criterion defined in the raw sensorimotor flow: behaviours, described by a trajectory of the robot end effector, are generated with the goal to maximize the novelty of raw perceptions. The approach is tested on a simulated PR2 robot and is compared to a random motor babbling.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116813509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The SB-ST decomposition in the study of Developmental Coordination Disorder","authors":"L. Claudino, Jane E. Clark, Y. Aloimonos","doi":"10.1109/DEVLRN.2015.7346131","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346131","url":null,"abstract":"To deal with redundancy and high-dimensionality that are typical of movement data, we propose to decompose action matrices in two decoupled steps: first, we discover a set of key postures, that is, vectors corresponding to key relationships among degrees of freedom (like angles between body parts) which we call spatial basis (SB) and second, we impose a parametric model to the spatio-temporal (ST) profiles of each SB vector. These two steps constitute the SB-ST decomposition of an action: SB vectors represent the key postures, their ST profiles represent trajectories of these postures and ST parameters express how these postures are being controlled and coordinated. SB-ST shares elements in common with computational models of motor synergies and biological motion perception, and it relates to human manifold models that are popular in machine learning. We showcase the method by applying SB vectors and ST parameters to study vertical jumps of adults, typically developing children and children with Developmental Coordination Disorder obtained with motion capture. Using that data, we also evaluate SB-ST alone and against other techniques in terms of reconstruction ability and number of dimensions used.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115979015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Grounding object perception in a naive agent's sensorimotor experience","authors":"Alban Laflaquière, Nikolas J. Hemion","doi":"10.1109/DEVLRN.2015.7346156","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346156","url":null,"abstract":"Artificial object perception usually relies on a priori defined models and feature extraction algorithms. We study how the concept of object can be grounded in the sensorimotor experience of a naive agent. Without any knowledge about itself or the world it is immersed in, the agent explores its sensorimotor space and identifies objects as consistent networks of sensorimotor transitions, independent from their context. A fundamental drive for prediction is assumed to explain the emergence of such networks from a developmental standpoint. An algorithm is proposed and tested to illustrate the approach.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126588664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive joint trajectory generator based on a chaotic recurrent neural network","authors":"M. Folgheraiter, Nazgul Tazhigaliyeva, Aibek S. Niyetkaliyev","doi":"10.1109/DEVLRN.2015.7346158","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346158","url":null,"abstract":"The aim of this paper is to introduce a scalable and adaptable joint trajectory generator based on a recurrent neural network. As main application we target highly redundant kinematic structures like humanoid and multi-legged robotic systems. The network architecture consists of a set of leak integrators which outputs are limited by sigmoidal activation functions. The neural circuit exhibits very rich dynamics and is capable to generate complex periodic signals without the direct excitation of external inputs. Spontaneous internal activity is possible thanks to the presence of recurrent connections and a source of Gaussian noise that is overlapped with the signals. By modulating the internal chaotic level of the network it is possible to make the system exploring high-dimensional spaces and therefore to learn very complex time sequences. A preliminary set of simulations demonstrated how a relatively small network composed of hundred units is capable to generate different motor paths which can be triggered by exteroceptive sensory signals.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128661726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Acquisition of viewpoint representation in imitative learning from own sensory-motor experiences","authors":"Ryoichi Nakajo, Shingo Murata, H. Arie, T. Ogata","doi":"10.1109/DEVLRN.2015.7346166","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346166","url":null,"abstract":"This paper introduces an imitative model that enables a robot to acquire viewpoints of the self and others from its own sensory-motor experiences. This is important for recognizing and imitating actions generated from various directions. Existing methods require coordinate transformations input by human designers or complex learning modules to acquire a viewpoint. In the proposed model, several neurons dedicated to generated actions and viewpoints of the self and others are added to a dynamic nueral network model reffered as continuous time recurrent neural network (CTRNN). The training data are labeled with types of actions and viewpoints, and are linked to each internal state. We implemented this model in a robot and trained the model to perform actions of object manipulation. Representations of behavior and viewpoint were formed in the internal states of the CTRNN. In addition, we analyzed the initial values of the internal states that represent the viewpoint information. We confirmed the distinction of the observational perspective of other's actions self-organized in the space of the initial values. Combining the initial values of the internal states that describe the behavior and the viewpoint, the system can generate unlearned data.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123326993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predictive learning with uncertainty estimation for modeling infants' cognitive development with caregivers: A neurorobotics experiment","authors":"Shingo Murata, Saki Tomioka, Ryoichi Nakajo, Tatsuro Yamada, H. Arie, T. Ogata, S. Sugano","doi":"10.1109/DEVLRN.2015.7346162","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346162","url":null,"abstract":"Dynamic interactions with caregivers are essential for infants to develop cognitive abilities, including aspects of action, perception, and attention. We hypothesized that these abilities can be acquired through the predictive learning of sensory inputs including their uncertainty (inverse precision) in terms of variance. To examine our hypothesis from the perspective of cognitive developmental robotics, we conducted a neurorobotics experiment involving a ball-playing interaction task between a human experimenter representing a caregiver and a small humanoid robot representing an infant. The robot was equipped with a dynamic generative model called a stochastic continuous-time recurrent neural network (S-CTRNN). The S-CTRNN learned to generate predictions about both the visuo-proprioceptive states of the robot and the uncertainty of these states by minimizing a negative log-likelihood consisting of log-uncertainty and precision-weighted prediction error. The experimental results showed that predictive learning with uncertainty estimation enabled the robot to acquire infant-like cognitive abilities through dynamic interactions with the experimenter. We also discuss the effects of infant-directed modifications observed in caregiver-infant interactions on the development of these abilities.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124861302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploration strategies for incremental learning of object-based visual saliency","authors":"Céline Craye, David Filliat, Jean-François Goudou","doi":"10.1109/DEVLRN.2015.7346099","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346099","url":null,"abstract":"Searching for objects in an indoor environment can be drastically improved if a task-specific visual saliency is available. We describe a method to learn such an object-based visual saliency in an intrinsically motivated way using an environment exploration mechanism. We first define saliency in a geometrical manner and use this definition to discover salient elements given an attentive but costly observation of the environment. These elements are used to train a fast classifier that predicts salient objects given large-scale visual features. In order to get a better and faster learning, we use intrinsic motivation to drive our observation selection, based on uncertainty and novelty detection. Our approach has been tested on RGB-D images, is real-time, and outperforms several state-of-the-art methods in the case of indoor object detection.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127441225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Active learning of local predictable representations with artificial curiosity","authors":"Mathieu Lefort, A. Gepperth","doi":"10.1109/devlrn.2015.7346145","DOIUrl":"https://doi.org/10.1109/devlrn.2015.7346145","url":null,"abstract":"In this article, we present some preliminary work on integrating an artificial curiosity mechanism in PROPRE, a generic and modular neural architecture, to obtain online, open-ended and active learning of a sensory-motor space, where large areas can be unlearnable. PROPRE consists of the combination of the projection of the input motor flow, using a self-organizing map, with the regression of the sensory output flow from this projection representation, using a linear regression. The main feature of PROPRE is the use of a predictability module that provides an interestingness measure for the current motor stimulus depending on a simple evaluation of the sensory prediction quality. This measure modulates the projection learning so that to favor the representations that predict the output better than a local average. Especially, this leads to the learning of local representations where an input/output relationship is defined [1]. In this article, we propose an artificial curiosity mechanism based on the monitoring of learning progress, as proposed in [2], in the neighborhood of each local representation. Thus, PROPRE simultaneously learns interesting representations of the input flow (depending on their capacities to predict the output) and explores actively this input space where the learning progress is the higher. We illustrate our architecture on the learning of a direct model of an arm whose hand can only be perceived in a restricted visual space. The modulation of the projection learning leads to a better performance and the use of the curiosity mechanism provides quicker learning and even improves the final performance.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134339290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Representations of body schemas for infant robot development","authors":"Patricia Shaw, J. Law, Mark H. Lee","doi":"10.1109/DEVLRN.2015.7346128","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346128","url":null,"abstract":"Psychological studies have often suggested that internal models of the body and its structure are used to process sensory inputs such as proprioception from muscles and joints. Within robotics, there is often a need to have an internal representation of the body, integrating the multi-modal and multi-dimensional spaces in which it operates. Here we propose a body model in the form of a series of distributed spatial maps, that have not been purpose designed but have emerged through our experiments on developmental stages using a minimalist content-neutral approach. The result is an integrated series of 2D maps storing correlations and contingencies across modalities, which has some resonances with the structures used in the brain for sensorimotor coordination.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127806529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}