{"title":"Analysis of How Personality Traits Affect Children's Conversational Play with an Utterance-Output Device","authors":"Jun Ichikawa, Kazuhiro Mitsukuni, Yukari Hori, Y. Ikeno, Leblanc Alexandre, T. Kawamoto, Yukiko Nishizaki, N. Oka","doi":"10.1109/DEVLRN.2019.8850700","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850700","url":null,"abstract":"Smart speakers (such as Amazon Echo and Google Home) are becoming a popular addition to homes, and they are expected to serve as parental support in the future. Previous studies have analyzed the usage logs of their services and gathered impressions of them through questionnaire surveys and qualitative interviews, but the details of how children interact with smart speakers have not been fully investigated. This study investigates children's interactions with an utterance-output device, such as a smart speaker, and focuses on the influence of the children's personality traits. The experiment was conducted in a realistic setting. The results indicate that children who are less nervous, more emotionally stable, or more adaptable to communication different from that at home engage more closely in conversational play with an utterance-output device. The findings suggest that effective conversational play, which can lessen anxiety toward a novel utterance-output device, will be required in future interaction design.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121394591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How development in the Bayesian brain facilitates learning","authors":"D. Oliva, A. Philippsen, Y. Nagai","doi":"10.1109/DEVLRN.2019.8850720","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850720","url":null,"abstract":"The Bayesian brain model has been proposed as a possible simplified view of the inner workings of the human brain. According to this view, the brain is a prediction machine with an internal model of the world which can be improved by comparing the generated predictions to sensory observations. Knowledge about expected events (prior predictions) is combined with observations (sensory input) into posterior beliefs, helping us to infer what is happening in the environment. While it is commonly acknowledged that this integration is optimally performed in a Bayesian way, the effects of Bayesian inference on the developmental process are less well investigated. In this study, we propose a computational framework which combines Bayesian inference with recurrent neural network training. We demonstrate that learning in this framework proceeds in a human-like manner, in that the system is able to appropriately weight sensory input and prior predictions depending on their reliability, which increases during development. As a result, during the course of learning, the model gradually switches from relying on sensory information to a stronger reliance on its own predictions and becomes more robust against disturbances in the environment.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129395736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mindreading for Robots: Predicting Intentions via Dynamical Clustering of Human Postures","authors":"Samuele Vinanzi, C. Goerick, A. Cangelosi","doi":"10.1109/DEVLRN.2019.8850698","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850698","url":null,"abstract":"Recent advancements in robotics suggest a future where social robots will be deeply integrated in our society. In order to understand humans and engage in finer interactions, robots would greatly benefit from the ability of intention reading: the capacity to discern the high-level goal that is driving the low-level actions of an observed agent. This is particularly useful in joint action scenarios, where human and robot must collaborate to reach a shared goal: if the robot can predict the actions of its human partner, it can use this information for decision making and improve the quality of the cooperation. This research proposes a novel artificial cognitive architecture, based on the developmental robotics paradigm, that can estimate the goals of a human partner engaged in a joint task in order to modulate synergistic behavior. This is accomplished using unsupervised dynamical clustering of human skeletal data and a hidden semi-Markov chain. The effectiveness of this architecture has been tested through an interactive cooperative experiment involving a block building game, the iCub robot and a human. The results show that the robot is able to adopt a collaborative behavior by performing intention reading based on its partner's physical cues.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126313353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hierarchical Control for Bipedal Locomotion using Central Pattern Generators and Neural Networks","authors":"S. Auddy, S. Magg, S. Wermter","doi":"10.1109/DEVLRN.2019.8850683","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850683","url":null,"abstract":"The complexity of bipedal locomotion may be attributed to the difficulty in synchronizing joint movements while at the same time achieving high-level objectives such as walking in a particular direction. Artificial central pattern generators (CPGs) can produce synchronized joint movements and have been used in the past for bipedal locomotion. However, most existing CPG-based approaches do not address the problem of high-level control explicitly. We propose a novel hierarchical control mechanism for bipedal locomotion where an optimized CPG network is used for joint control and a neural network acts as a high-level controller for modulating the CPG network. By separating motion generation from motion modulation, the high-level controller does not need to control individual joints directly but instead can develop to achieve a higher goal using a low-dimensional control signal. The feasibility of the hierarchical controller is demonstrated through simulation experiments using the Neuro-Inspired Companion (NICO) robot. Experimental results demonstrate the controller's ability to function even without the availability of an exact robot model.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"6 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129497822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modelling the Single Word to Multi-Word Transition Using Matrix Completion","authors":"Gabriella Pizzuto, Timothy M. Hospedales, O. Capirci, A. Cangelosi","doi":"10.1109/DEVLRN.2019.8850708","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850708","url":null,"abstract":"Infants acquire language in distinct stages, starting from single gestures and single words, and through utilising gestures, they learn multi-word combinations. To achieve this language development on artificial agents, we propose a multimodal computational model for the single to multi-word transition through gesture-word combinations. Our approach relies on advancements in deep models for feature extraction and on casting the supplementary word generation problem as a matrix completion task. Experimental evaluation is carried out on a dataset recorded directly from the humanoid iCub's cameras, comprising the deictic gesture of pointing and real-world objects. As illustrated by our results, the proposed architecture shows potential to model early-stage language acquisition.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116782025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An interdisciplinary overview of developmental indices and behavioral measures of the minimal self","authors":"Yasmin Kim Georgie, G. Schillaci, V. Hafner","doi":"10.1109/DEVLRN.2019.8850703","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850703","url":null,"abstract":"In this review paper we discuss the development of the minimal self in humans, the behavioural measures indicating the presence of different aspects of the minimal self, namely, body ownership and sense of agency, and also discuss robotics research investigating and developing these concepts in artificial agents. We investigate possible avenues for expanding the research in robotics to further explore the development of an artificial minimal self.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126840606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contextual Relabelling of Detected Objects","authors":"Faisal Alamri, N. Pugeault","doi":"10.1109/DEVLRN.2019.8850686","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850686","url":null,"abstract":"Contextual information, such as the co-occurrence of objects and the spatial and relative size among objects, provides deep and complex information about scenes. It can also play an important role in improving object detection. In this work, we present two contextual models (rescoring and re-labeling models) that leverage contextual information (16 contextual relationships are applied in this paper) to enhance state-of-the-art RCNN-based object detection (Faster RCNN). We experimentally demonstrate that our models improve detection performance on the most common dataset used in this field (MSCOCO).","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131315905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Autonomous Reinforcement Learning of Multiple Interrelated Tasks","authors":"V. Santucci, G. Baldassarre, Emilio Cartoni","doi":"10.1109/DEVLRN.2019.8850713","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850713","url":null,"abstract":"Autonomous multiple task learning is a fundamental capability for developing versatile artificial agents that can act in complex environments. In real-world scenarios, tasks may be interrelated (or \u201chierarchical\u201d), so that a robot has to first learn to achieve some of them to set the preconditions for learning others. Even though different strategies have been used in robotics to tackle the acquisition of interrelated tasks, in particular within the developmental robotics framework, autonomous learning in this kind of scenario is still an open question. Building on previous research in the framework of intrinsically motivated open-ended learning, in this work we describe how this question can be addressed at the level of task selection, in particular by considering the multiple interrelated tasks scenario as an MDP in which the system tries to maximise its competence over all the tasks.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130751489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-supervised Body Image Acquisition Using a Deep Neural Network for Sensorimotor Prediction","authors":"Alban Laflaquière, V. Hafner","doi":"10.1109/DEVLRN.2019.8850717","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850717","url":null,"abstract":"This work investigates how a naive agent can acquire its own body image in a self-supervised way, based on the predictability of its sensorimotor experience. Our working hypothesis is that, due to its temporal stability, an agent's body produces more consistent sensory experiences than the environment, which exhibits a greater variability. Given its motor experience, an agent can thus reliably predict what appearance its body should have. This intrinsic predictability can be used to automatically isolate the body image from the rest of the environment. We propose a two-branches deconvolutional neural network to predict the visual sensory state associated with an input motor state, as well as the prediction error associated with this input. We train the network on a dataset of first-person images collected with a simulated Pepper robot, and show how the network outputs can be used to automatically isolate its visible arm from the rest of the environment. Finally, the quality of the body image produced by the network is evaluated.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121580659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Optimization Framework for Task Sequencing in Curriculum Learning","authors":"Francesco Foglino, M. Leonetti","doi":"10.1109/DEVLRN.2019.8850690","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850690","url":null,"abstract":"Curriculum learning in reinforcement learning is used to shape exploration by presenting the agent with increasingly complex tasks. The idea of curriculum learning has been largely applied in both animal training and pedagogy. In reinforcement learning, all previous task sequencing methods have shaped exploration with the objective of reducing the time to reach a given performance level. We propose novel uses of curriculum learning, which arise from choosing different objective functions. Furthermore, we define a general optimization framework for task sequencing and evaluate the performance of popular metaheuristic search methods on several tasks. We show that curriculum learning can be successfully used to: improve the initial performance, take fewer suboptimal actions during exploration, and discover better policies.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122110951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}