{"title":"Development of compositional and contextual communication of robots by using the multiple timescales dynamic neural network","authors":"Gibeom Park, J. Tani","doi":"10.1109/DEVLRN.2015.7346137","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346137","url":null,"abstract":"The current paper introduces neurorobotics experiment on acquisition of complex communicative skills with human via learning. A dynamic neural network model which is characterized by its multiple timescale dynamics characteristics was utilized as a neuronal model for controlling a humanoid robot. In the experimental task, the humanoid robot was trained to generate specific sequential movement patterns as responding to various sequences of imperative gesture patterns demonstrated by the human subjects by following predefined compositional semantic rules. The experimental results showed that (1) the MTRNN can learn to extract compositional semantic rules with generalization in the higher cognitive level, (2) the MTRNN can develop further higher-order cognition capability for controlling the internal contextual processes as situated to on-going task sequences without being provided with cues for explicitly indicating task segmentation points. 
The analysis on the dynamic characteristics developed in the MTRNN through learning indicated that the aforementioned cognitive mechanisms were achieved by developing adequate functional hierarchy by utilizing the constraint of the multiple timescale property and the topological connectivity imposed on the network configuration.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128485470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Active learning strategies and active control of complexity growth in naming games","authors":"William Schueller, Pierre-Yves Oudeyer","doi":"10.1109/DEVLRN.2015.7346144","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346144","url":null,"abstract":"Naming Games are models of the dynamic formation of lexical conventions in populations of agents. In this work we introduce new Naming Game strategies, using developmental and active learning mechanisms to control the growth of complexity. An information theoretical measure to compare those strategies is introduced, and used to study their impact on the dynamics of the Naming Game.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126233838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Biologically inspired incremental learning for high-dimensional spaces","authors":"A. Gepperth, Thomas Hecht, Mathieu Lefort, Ursula Körner","doi":"10.1109/DEVLRN.2015.7346155","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346155","url":null,"abstract":"We propose an incremental, highly parallelizable, and constant-time complexity neural learning architecture for multi-class classification (and regression) problems that remains resource-efficient even when the number of input dimensions is very high (≥ 1000). This so-called projection-prediction (PRO-PRE) architecture is strongly inspired by biological information processing in that it uses a prototype-based, topologically organized hidden layer that updates hidden layer weights whenever an error occurs. The employed self-organizing map (SOM) learning adapts only the weights of localized neural sub-populations that are similar to the input, which explicitly avoids the catastrophic forgetting effect of MLPs in case new input statistics are presented. The readout layer applies linear regression to hidden layer activities subjected to a transfer function, making the whole system capable of representing strongly non-linear decision boundaries. The resource-efficiency of the algorithm stems from approximating similarity in the input space by proximity in the SOM layer due to the topological SOM projection property. This avoids the storage of inter-cluster distances (quadratic in number of hidden layer elements) or input space covariance matrices (quadratic in input dimensions) as other incremental algorithms typically do. 
Tests on the popular MNIST handwritten digit benchmark show that the algorithm compares favorably to state-of-the-art results, and parallelizability is demonstrated by analyzing the efficiency of a parallel GPU implementation of the architecture.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122816628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Diversity-driven selection of exploration strategies in multi-armed bandits","authors":"Fabien C. Y. Benureau, Pierre-Yves Oudeyer","doi":"10.1109/devlrn.2015.7346130","DOIUrl":"https://doi.org/10.1109/devlrn.2015.7346130","url":null,"abstract":"We consider a scenario where an agent has multiple available strategies to explore an unknown environment. For each new interaction with the environment, the agent must select which exploration strategy to use. We provide a new strategy-agnostic method that treat the situation as a Multi-Armed Bandits problem where the reward signal is the diversity of effects that each strategy produces. We test the method empirically on a simulated planar robotic arm, and establish that the method is both able discriminate between strategies of dissimilar quality, even when the differences are tenuous, and that the resulting performance is competitive with the best fixed mixture of strategies.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117323250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Incremental grounded language learning in robot-robot interactions — Examples from spatial language","authors":"Michael Spranger","doi":"10.1109/DEVLRN.2015.7346140","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346140","url":null,"abstract":"This paper reports on models of the grounded co-acquisition of syntax and semantics of locative spatial language in developmental robots. We instantiate theories from Cognitive Linguistics and Developmental Psychology and show how a learner robot can learn to produce and interpret spatial utterances in guided-learning interactions with a tutor robot. Particular emphasis is put on the role of the tutor. Our experiments show that the learner rapidly becomes successful in communication given the right tutoring strategy and learning operators.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115005764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"To hear and to hold: Maternal naming and infant object exploration","authors":"Lucas Chang, K. D. Barbaro, G. Deák","doi":"10.1109/DEVLRN.2015.7346125","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346125","url":null,"abstract":"To acquire language, infants must associate the language they hear with concurrent nonlinguistic experiences - the word-world mapping problem. Caregivers help structure the infant's environment by monitoring infants' attention and producing speech at informative times. In particular, children's learning of object names depends on their sensory experiences at times when objects are named. At 18 months, children's learning of novel words is predicted by the size of the object in their visual field when it is named [1]. However, there is not a direct relationship between infant's attention to objects in the world and speech produced by caregivers. Infant's multimodal experiences unfold in interactions with caregivers where both partners' behavior, including vocalizations, gaze, and manual activity, dynamically structure the visual and auditory scene [2,3].","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"199 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121629854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Babybot challenge: Motor skills","authors":"Patricia Shaw, Daniel Lewkowicz, Alexandros Giagkos, J. Law, Suresh Kumar, Mark H. Lee, Q. Shen","doi":"10.1109/DEVLRN.2015.7346114","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346114","url":null,"abstract":"In 1984, von Hofsten performed a longitudinal study of early reaching in infants between the ages of 1 week and 19 weeks. This paper proposes a possible model using excitation of various subsystems to reproduce the longitudinal study. The model is then implemented and tested on an iCub humanoid robot, and the results compared to the original study. The resulting model shares interesting similarities to the data presented by von Hofsten, in particular a slight dip in the quantity of reaching. However, the dip is shifted along by a few weeks, and the analysis of hand behaviour is inconclusive based on the data recorded.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125965149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using spatial representations in gesture to facilitate early word learning: A priming process model","authors":"J. Trafton, Anthony M. Harrison, W. Lawson","doi":"10.1109/DEVLRN.2015.7346117","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346117","url":null,"abstract":"As children learn to speak, they also gesture; previous empirical work has suggested that there is a direct link between the two. In this paper, we propose a priming process model that uses gesture to facilitate language. Our model uses the ACT-R/E cognitive architecture and uses a combination of repetition naming and priming from gesture spatial representations to increase the probability that a word will be remembered. Our model simulates 11 months of learning and runs on an embodied platform.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125993309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards hierarchical curiosity-driven exploration of sensorimotor models","authors":"Sébastien Forestier, Pierre-Yves Oudeyer","doi":"10.1109/DEVLRN.2015.7346146","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346146","url":null,"abstract":"Curiosity-driven exploration mechanisms have been proposed to allow robots to actively explore high dimensional sensorimotor spaces in an open-ended manner [1], [2]. In such setups, competence-based intrinsic motivations show better results than knowledge-based exploration mechanisms which only monitor the learner's prediction performance [2], [3]. With competence-based intrinsic motivations, the learner explores its sensor space with a bias toward regions which are predicted to yield a high competence progress. Also, throughout its life, a developmental robot has to incrementally explore skills that add up to the hierarchy of previously learned skills, with a constraint being the cost of experimentation. Thus, a hierarchical exploration architecture could allow to reuse the sensorimotor models previously explored and to combine them to explore more efficiently new complex sensorimotor models. Here, we rely more specifically on the R-IAC and SAGG-RIAC series of architectures [3]. These architectures allow the learning of a single mapping between a motor and a sensor space with a competence-based intrinsic motivation. We describe some ways to extend these architectures with different tasks spaces that can be explored in a hierarchical manner, and mechanisms to handle this hierarchy of sensorimotor models that all need to be explored with an adequate amount of trials. We also describe an interactive task to evaluate the hierarchical learning mechanisms, where a robot has to explore its motor space in order to push an object to different locations. 
The robot can first explore how to make movements with its hand and then reuse this skill to explore the task of pushing an object.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131892016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Seeing [u] aids vocal learning: Babbling and imitation of vowels using a 3D vocal tract model, reinforcement learning, and reservoir computing","authors":"M. Murakami, B. Kröger, P. Birkholz, J. Triesch","doi":"10.1109/DEVLRN.2015.7346142","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346142","url":null,"abstract":"We present a model of imitative vocal learning consisting of two stages. First, the infant is exposed to the ambient language and forms auditory knowledge of the speech items to be acquired. Second, the infant attempts to imitate these speech items and thereby learns to control the articulators for speech production. We model these processes using a recurrent neural network and a realistic vocal tract model. We show that vowel production can be successfully learnt by imitation. Moreover, we find that acquisition of [u] is impaired if visual information is discarded during imitation. This might give sighted infants an advantage over blind infants during vocal learning, which is in agreement with experimental evidence.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121425487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}