{"title":"Robot-Care for the Older People: Ethically Justified or Not?","authors":"F. Noori, Md. Zia Uddin, J. Tørresen","doi":"10.1109/DEVLRN.2019.8850706","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850706","url":null,"abstract":"With the growing popularity of robotics technology, robots have also been introduced to take care of older people. Robots can assist seniors in their daily life tasks, monitor their health or provide them companionship. Despite the benefits of robots, several ethical concerns arise in elderly-robot interaction. The old people might get lonely due to less human interaction, or they feel less control in their lives. The elderly might lose their personal liberty. In this paper, we try to highlight the ethical concerns and provide some preliminary suggestions which would be helpful to reduce the ethical concerns by using customized systems with proper guidelines and consultation with older people.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123636075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"KALA: Robotic Platform for Teaching Algorithmic Thinking to Children","authors":"Oscar Javier Castelblanco, Laura Milena Donado, E. Gerlein, E. González","doi":"10.1109/DEVLRN.2019.8850694","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850694","url":null,"abstract":"In the decade of 1970s, KAREL was proposed as innovative way to teach algorithmic thinking to children. This paper discusses the design and implementation of KALA the robot. KALA attempts to bring the world of KAREL to real life, incorporating a platform that consists of a differential robot, a mobile application and a customizable and interactive labyrinth board. KALAs programming is performed using the mobile app, where the user programs a series of instructions that KALA will execute navigating across an interactive labyrinth board. Several tests with users showed improvement in algorithm thinking with just a few attempts to use the platform.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129380129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reaching development through visuo-proprioceptive-tactile integration on a humanoid robot - a deep learning approach","authors":"Phuong D. H. Nguyen, M. Hoffmann, U. Pattacini, G. Metta","doi":"10.1109/DEVLRN.2019.8850681","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850681","url":null,"abstract":"The development of reaching in infants has been studied for nearly nine decades. Originally, it was thought that early reaching is visually guided, but more recent evidence is suggestive of “visually elicited” reaching, i.e. infant is gazing at the object rather than its hand during the reaching movement. The importance of haptic feedback has also been emphasized. Inspired by these findings, in this work we use the simulated iCub humanoid robot to construct a model of reaching development. The robot is presented with different objects, gazes at them, and performs motor babbling with one of its arms. Successful contacts with the object are detected through tactile sensors on hand and forearm. Such events serve as the training set, constituted by images from the robot's two eyes, head joints, tactile activation, and arm joints. A deep neural network is trained with images and head joints as inputs and arm configuration and touch as output. After learning, the network can successfully infer arm configurations that would result in a successful reach, together with prediction of tactile activation (i.e. which body part would make contact). 
Our main contribution is twofold: (i) our pipeline is end-to-end from stereo images and head joints (6 DoF) to armtorso configurations (10 DoF) and tactile activations, without any preprocessing, explicit coordinate transformations etc.; (ii) unique to this approach, reaches with multiple effectors corresponding to different regions of the sensitive skin are possible.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128876773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A predictive coding model of representational drawing in human children and chimpanzees","authors":"A. Philippsen, Y. Nagai","doi":"10.1109/DEVLRN.2019.8850701","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850701","url":null,"abstract":"Humans and chimpanzees differ in the way that they draw. Human children from a certain age tend to create representational drawings, that is, drawings which represent objects. Chimpanzees, although equipped with sufficient motor skills, do not improve beyond the stage of scribbling behavior. To investigate the underlying cognitive mechanisms, we propose a computational model of predictive coding which allows us to change the way that sensory information and prior predictions are updated into posterior beliefs during time series prediction. We replicate the results of a study from experimental psychology which examined the ability of children and chimpanzees to complete partial drawings of a face. Our results reveal that typical or stronger reliance on the prior enables the network to perform representational drawings as observed in children. In contrast, too weak reliance on the prior replicates the findings that were observed in chimpanzees: existing lines are traced with high accuracy, but non-existing parts are not added to complete a representational drawing. 
The ability to perform representational drawings, thus, could be explained by subtle changes in how strongly prior information is integrated with sensory percepts rather than by the presence or absence of a specific cognitive mechanism.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121318880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robots are Good for Profit: A Business Perspective on Robots in Education","authors":"Michael Goudzwaard, Matthijs H. J. Smakman, E. Konijn","doi":"10.1109/DEVLRN.2019.8850726","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850726","url":null,"abstract":"This paper aims to chart the (moral) values from a robotic industry's perspective regarding the introduction of robots in education. To our knowledge, no studies thus far have addressed this perspective in considering the moral values within this robotic domain. However, their values could conflict with the values upheld by other relevant stakeholders, such as the values of teachers, parents or children. Hence, it is crucial to take the various perspectives of relevant stakeholder's moral values into account. For this study, multiple focus group sessions $(n=3)$ were conducted in The Netherlands with representatives $(n=13)$ of robotic companies on their views of robots in primary education. Their perceptions in terms of opportunities and concerns, were then linked to business values reported in the extant literature. Results show that out of 26 business values, mainly six business values appeared relevant for robot tutors: 1) profitability, 2) productivity, 3 & 4) innovation and creativity, 5) competitiveness, and 6) risk orientation organization.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127256393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient Intrinsically Motivated Robotic Grasping with Learning-Adaptive Imagination in Latent Space","authors":"Muhammad Burhan Hafez, C. Weber, Matthias Kerzel, S. Wermter","doi":"10.1109/DEVLRN.2019.8850723","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850723","url":null,"abstract":"Combining model-based and model-free deep reinforcement learning has shown great promise for improving sample efficiency on complex control tasks while still retaining high performance. Incorporating imagination is a recent effort in this direction inspired by human mental simulation of motor behavior. We propose a learning-adaptive imagination approach which, unlike previous approaches, takes into account the reliability of the learned dynamics model used for imagining the future. Our approach learns an ensemble of disjoint local dynamics models in latent space and derives an intrinsic reward based on learning progress, motivating the controller to take actions leading to data that improves the models. The learned models are used to generate imagined experiences, augmenting the training set of real experiences. We evaluate our approach on learning vision-based robotic grasping and show that it significantly improves sample efficiency and achieves near-optimal performance in a sparse reward environment.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125738751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Replication of Infant Behaviours with a Babybot: Early Pointing Gesture Comprehension","authors":"Baris Serhan, A. Cangelosi","doi":"10.1109/DEVLRN.2019.8850680","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850680","url":null,"abstract":"Natural deictic communication with humanoid robots requires a mechanism for understanding pointing gestures. This mechanism should have a representation for space and time dynamics to accurately model joint covert attention. Here, we introduce a babybot that actualise a hybrid computational architecture for spatial covert attention which is embodied in the iCub humanoid robot. This developmental robotics architecture was an extension of our previous model that combines a connectionist model of pointing comprehension and a dynamic neural field model of infant saccade generation. In order to test the babybot's abilities, an attentional cueing design was built as it is a common methodology to study pointing gesture comprehension in the current developmental psychology literature. The babybot was evaluated by modelling two different age groups (i.e. 5- and 7-month-old infants) in two different attentional cueing experiments from Rohlfing et al.'s study where they have shown that a dynamic pointing covertly orients attention in early infancy as opposed to static pointing. These experiments were replicated by our babybot for all modelled age groups. The resemblance between infant and babybot behaviours supported the idea that motion information is important to disengage from the centrally salient stimulus to orient the attention to the distal ones. 
The current experimental setup and the model's ability to simulate different age groups provide a new platform to replicate other developmental studies on pointing comprehension to spotlight the reasons of the discrepancies between them.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126472331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Autonomously learning beliefs is facilitated by a neural dynamic network driving an intentional agent","authors":"Jan Tekülve, G. Schöner","doi":"10.1109/DEVLRN.2019.8850684","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850684","url":null,"abstract":"Intentionality is the capacity of mental states to be about the world, both in its “action” (world-to-mind) and its “perception” (mind-to-world) direction of fit. An intentional agent must be able to perceive, act, memorize, and plan. These psychological modes may be driven by desires and be informed by beliefs. We have previously proposed a neural process account of intentionality, in which intentional states are stabilized by interactions within populations of neurons that represent perceptual features and movement parameters. Instabilities in such neural dynamics activated the conditions of satisfaction of intentional states and induced sequences of intentional behavior. Here we explore the idea that the process organization of such intentional neural systems enables autonomous learning. We show how beliefs may be learned from single experiences, may be activated in new situations, and be used to guide behavior. Beliefs may also be dis-activated when their predictions do not match experience, leading to the learning of a new belief. We demonstrate the idea in a simple scenario in which a simulated agent autonomously explores an environment, directs action at objects and learns simple contingencies in this environment to form beliefs. 
The beliefs can be used to realize fixed desires of the agent.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"171 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133632963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Creative Problem Solving by Robots Using Action Primitive Discovery","authors":"E. Gizzi, Mateo Guaman Castro, J. Sinapov","doi":"10.1109/DEVLRN.2019.8850711","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850711","url":null,"abstract":"Humans and many other species have the remarkable ability to innovate and creatively problem solve on-the-fly. Inspired by these abilities, we propose a framework for action discovery in problem solving scenarios similar to puzzle-boxes used to evaluate intelligence in animal species. The proposed framework assumes that the robot starts with a knowledge base including predicates and actions, which, however, are insufficient to solve the problem faced by the robot. We describe a method for discovering new action primitives through object exploration and action segmentation, which is able to iteratively update the robot's knowledge base on-the-fly until the solution becomes feasible. We implemented and evaluated the framework using a 3D physics-based simulated object retrieval task for the Baxter bi-manual robot. Results suggest that action segmentation is one viable path towards enabling autonomous agents to adapt on-the-fly and in short amounts of time to new situations that were unforeseen by their programmers and engineers.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130300915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Within Reach? Learning to touch objects without prior models","authors":"François De La Bourdonnaye, Céline Teulière, T. Chateau, J. Triesch","doi":"10.1109/DEVLRN.2019.8850702","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850702","url":null,"abstract":"Human infants learn to manipulate objects in a largely autonomous fashion, starting without precise models of their bodies' kinematics and dynamics. Replicating such learning abilities in robots would make them more flexible and robust and is considered a grand challenge of Developmental Robotics. In this paper, we propose a developmental method that allows a robot to learn to touch an object, while also learning to predict if the object is within reach or not. Importantly, our method does not rely on any forward or inverse kinematics models. Instead it uses a stage-wise learning approach combining deep reinforcement learning and a form of self-supervised learning. In this approach, complex skills such as touching an object or predicting if it is within reach are learned on top of more basic skills such as object fixation and eye-hand-coordination.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123924668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}