{"title":"Efficient exploration and learning of whole body kinematics","authors":"Matthias Rolf, Jochen J. Steil, M. Gienger","doi":"10.1109/DEVLRN.2009.5175522","DOIUrl":"https://doi.org/10.1109/DEVLRN.2009.5175522","url":null,"abstract":"We present a neural network approach to early motor learning. The goal is to explore the needs for bootstrapping the control of hand movements in a biologically plausible learning scenario. The model is applied to the control of hand postures of the humanoid robot ASIMO by means of full upper body movements. For training, we use an efficient online scheme for recurrent reservoir networks consisting of supervised backpropagation-decorrelation output adaptation and an unsupervised intrinsic plasticity reservoir optimization. We demonstrate that the network can acquire accurate inverse models for the highly redundant ASIMO, applying bi-manual target motions and exploiting all upper body degrees of freedom. We show that very little, but highly symmetric, training data is sufficient to generate excellent generalization capabilities to untrained target motions. We also succeed in reproducing real motion recorded from a human demonstrator, which differs massively from the training data in range and dynamics. The demonstrated generalization capabilities provide a fundamental prerequisite for autonomous and incremental motor learning in a developmentally plausible way. 
Our exploration process - though not yet fully autonomous - clearly shows that goal-directed exploration can, in contrast to “babbling” of joint angles, be done very efficiently even for many degrees of freedom and non-linear kinematic configurations such as ASIMO's.","PeriodicalId":192225,"journal":{"name":"2009 IEEE 8th International Conference on Development and Learning","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125700385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Orientation Plasticity in Visual Cortex of Mice Reared under Single-Orientation Exposure","authors":"Takamasa Yoshida, Toshiki Tani, Shigeru Tanaka","doi":"10.1109/DEVLRN.2009.5175530","DOIUrl":"https://doi.org/10.1109/DEVLRN.2009.5175530","url":null,"abstract":"To examine whether orientation selectivity in the mouse visual cortex can change depending on visual experience, we reared juvenile and adult mice under single-orientation exposure using cylindrical-lens-fitted goggles that extremely elongate visual images vertically. Immediately after goggle rearing, we performed optical imaging of intrinsic signals in the visual cortex of the mouse, while presenting 6 oriented grating stimuli. The distribution of preferred orientations was markedly biased toward the exposed vertical orientation in juvenile goggle-reared mice, whereas the distribution in normally reared mice showed a maximum at horizontal orientation and a minimum at vertical orientation. In contrast, no significant differences in the orientation distribution were found between 1-week goggle-reared and normally reared adult mice. However, in 2- or 3-week goggle-reared adult mice, the relative area maximally responding to the vertical orientation was slightly larger than that in normally reared adult mice whereas the horizontal bias was preserved. 
The present study demonstrated that postnatal visual experience can modify orientation selectivity in both juvenile and adult mice.","PeriodicalId":192225,"journal":{"name":"2009 IEEE 8th International Conference on Development and Learning","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128814110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Laterally connected lobe component analysis: Precision and topography","authors":"M. Luciw, J. Weng","doi":"10.1109/DEVLRN.2009.5175541","DOIUrl":"https://doi.org/10.1109/DEVLRN.2009.5175541","url":null,"abstract":"Due to the pressure of evolution, the brains of organisms need to self-organize at different scales during different developmental stages. In early stages, the brain must organize globally (e.g., large cortical areas) to form a “smooth” topographic representation that is critical for superior generalization with its limited connections. At later stages, the brain must fine-tune its microstructures of representation for “precision” - high-level performance and specialization. But smoothness and precision are two conflicting criteria. The self-organizing map (SOM) mechanisms of self-organization through isotropic updating, and other published computational variants, have dealt with global-to-local smoothing and lateral adaptation, but we show that they are insufficient to deliver superior performance. In this paper, we introduce a combination of several mechanisms that, together, address these two conflicting criteria: lateral excitation through adaptive connections, explicit adaptive top-down connections (attention), dually-optimal lobe component analysis (LCA) for synaptic updating, simulated lateral inhibition through winners-take-all, and a developmental schedule that sets the number of winners, which decreases over time. 
Major performance improvements due to the combination of these mechanisms are shown in the reported experiments.","PeriodicalId":192225,"journal":{"name":"2009 IEEE 8th International Conference on Development and Learning","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128474734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning from a tutor: Embodied speech acquisition and imitation learning","authors":"M. Vaz, H. Brandl, F. Joublin, C. Goerick","doi":"10.1109/DEVLRN.2009.5175543","DOIUrl":"https://doi.org/10.1109/DEVLRN.2009.5175543","url":null,"abstract":"This work presents a new developmentally inspired, data-driven framework to bootstrap speech perception and imitation abilities in interaction with a tutor. The proposed system architecture extends our work presented in [1], which implements a cascade of interconnected layers to acquire the structure of speech in terms of phones, syllables and words. Here, we show how to couple such a perceptual model with a speech imitation system based on an acoustic synthesizer constrained to produce speech sounds with a child's voice.","PeriodicalId":192225,"journal":{"name":"2009 IEEE 8th International Conference on Development and Learning","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115955451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning independent causes in natural images explains the spacevariant oblique effect","authors":"C. Rothkopf, Thomas H. Weisswange, J. Triesch","doi":"10.1109/DEVLRN.2009.5175534","DOIUrl":"https://doi.org/10.1109/DEVLRN.2009.5175534","url":null,"abstract":"The efficient coding hypothesis posits that sensory processing increases independence between neural responses to natural stimuli by removing their statistical redundancy, which reflects the structure present in the natural environment. While there is consensus on the role of the statistical structure of the physical environment in shaping the natural input to the sensory system, it is not well understood how the sensory apparatus itself and its active use during behavior determine the statistics of the input. To explore this issue, a virtual human agent is simulated navigating through a wooded environment under full control of its gaze allocation during walking. Independent causes for the images obtained during navigation are learned with algorithms that have been shown to extract computationally useful representations similar to those encountered in the primary visual cortex of the mammalian brain. The distributions of properties of the learned simple-cell-like units are in good agreement with a wealth of data on the visual system including the oblique effect, the meridional effect, properties of neurons in the macaque visual cortex, and functional Magnetic Resonance Imaging (fMRI) data on orientation selectivity in humans and monkeys. Finally, this analysis sheds new light on the discussion of orientation anisotropies based on carpentered environments. 
Thus, when learning computational representations, it is not sufficient to consider only the regularities of the environment; the regularities imposed by the sensory apparatus and its use during behavior must also be taken into account.","PeriodicalId":192225,"journal":{"name":"2009 IEEE 8th International Conference on Development and Learning","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122092892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human-Robot Interaction in Concept Acquisition: a computational model","authors":"Joachim de Greeff, F. Delaunay, Tony Belpaeme","doi":"10.1109/DEVLRN.2009.5175532","DOIUrl":"https://doi.org/10.1109/DEVLRN.2009.5175532","url":null,"abstract":"This paper presents a discussion and simulation results which support the case for interaction during the acquisition of conceptual knowledge. Taking a developmental perspective, we first review a number of relevant insights on word-meaning acquisition in young children, focusing specifically on concept learning supported by linguistic input. We present a computational model implementing a number of acquisition strategies, which enable a learning agent to actively steer the learning process. This is contrasted with a one-way learning method, in which the learner does not actively influence the learning experience. We present results demonstrating how dyadic interaction between a teacher and a learner may result in better acquisition of concepts.","PeriodicalId":192225,"journal":{"name":"2009 IEEE 8th International Conference on Development and Learning","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130034744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning motor dependent Crutchfield's information distance to anticipate changes in the topology of sensory body maps","authors":"Thomas Schatz, Pierre-Yves Oudeyer","doi":"10.1109/DEVLRN.2009.5175526","DOIUrl":"https://doi.org/10.1109/DEVLRN.2009.5175526","url":null,"abstract":"What can a robot learn about the structure of its own body when it does not already know the semantics, types and positions of its sensors and motors? Previous work has shown that an information-theoretic approach, based on pairwise Crutchfield's information distance on sensorimotor channels, can be used to measure the informational topology of the set of sensors, i.e. to approximately reconstruct the topology of the sensory body map. In this paper, we argue that the informational sensor topology changes with motor configurations in many robotic bodies; yet, because measuring Crutchfield's distance is very time-consuming, it is impossible to remeasure the body's topology for each novel motor configuration. Rather, a model should be learnt that allows the robot to predict Crutchfield's informational distances, and thus anticipate informational body maps, for novel motor configurations. We present experiments showing that learning motor dependent Crutchfield distances can indeed be achieved.","PeriodicalId":192225,"journal":{"name":"2009 IEEE 8th International Conference on Development and Learning","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130007245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A theory of architecture for spatial abstraction","authors":"J. Weng","doi":"10.1109/DEVLRN.2009.5175545","DOIUrl":"https://doi.org/10.1109/DEVLRN.2009.5175545","url":null,"abstract":"A great mystery is how the brain abstracts during the process of development. It is also unclear how motor actions alter cortical representation. The architecture theory introduced here indicates that for each cortical area, the bottom-up space and the top-down space are two sources of its representation: a bridge representation embeds the manifolds of both spaces into a single space. A bridge representation has the following properties: (a) responses from developed neurons are relatively less sensitive to irrelevant sensory information (i.e., invariants) but relatively more sensitive to relevant sensory information for classification (i.e., discriminants); (b) neurons form topographic cortical areas according to abstract classes. Both properties transform meaningless (iconic, pixel-like) raw sensory inputs into an internal representation with abstract meanings. The most abstract area can be considered the frontal cortex (or the motor area, if each firing pattern of the motor represents a unique abstract class). 
Such a cortical representation system is neither a purely symbolic system nor a monolithic meaning system, but is iconic-abstract two-way: bottom-up attention, top-down attention and recognition are all tightly integrated and highly distributed throughout the developmental network.","PeriodicalId":192225,"journal":{"name":"2009 IEEE 8th International Conference on Development and Learning","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129413699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can reinforcement learning explain the development of causal inference in multisensory integration?","authors":"Thomas H. Weisswange, C. Rothkopf, Tobias Rodemann, J. Triesch","doi":"10.1109/DEVLRN.2009.5175531","DOIUrl":"https://doi.org/10.1109/DEVLRN.2009.5175531","url":null,"abstract":"Bayesian inference techniques have been used to understand the performance of human subjects on a large number of sensory tasks. In particular, it has been shown that humans integrate sensory inputs from multiple cues in an optimal way under many conditions. Recently it has also been proposed that causal inference [1] describes well the way humans select the most plausible model for a given input. It is still unclear how these problems are solved in the brain. Also, considering that infants do not yet behave as ideal observers [2]–[4], it is interesting to ask how the related abilities can develop. We present a reinforcement learning approach to this problem. An orienting task is used in which we reward the model for a correct movement to the origin of noisy audio-visual signals. We show that the model learns to perform cue integration and model selection, in this case inferring the number of objects. Its behaviour also accounts for differences in reliability between the two modalities. 
All of this is achieved without any prior knowledge, through simple interaction with the environment.","PeriodicalId":192225,"journal":{"name":"2009 IEEE 8th International Conference on Development and Learning","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126440859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robotic hand-eye coordination without global reference: A biologically inspired learning scheme","authors":"Martin Hulse, Sebastian McBrid, Mark H. Lee","doi":"10.1109/DEVLRN.2009.5175514","DOIUrl":"https://doi.org/10.1109/DEVLRN.2009.5175514","url":null,"abstract":"Understanding the mechanism mediating the change from inaccurate pre-reaching to accurate reaching in infants may confer advantages from both robotic and biological research perspectives. In this work, we present a biologically meaningful learning scheme applied to the coordination of reach and gaze within a robotic structure. The system is model-free and does not utilize a global reference system. The integration of reach and gaze emerges from the learned cross-modal mapping between reach and vision space as it occurs during the robot-environment interaction. The scheme showed high learning speed and plasticity compared with other approaches due to the small amount of training data required. We discuss our findings with respect to biological plausibility and from an engineering perspective, with emphasis on autonomous learning as well as strategies for the selection of new training data.","PeriodicalId":192225,"journal":{"name":"2009 IEEE 8th International Conference on Development and Learning","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127912513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}