2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob): Latest Publications

The Sound of Actuators: Disturbance in Human-Robot Interactions?
Mélanie Jouaiti, P. Hénaff
{"title":"The Sound of Actuators: Disturbance in Human - Robot Interactions?","authors":"Mélanie Jouaiti, P. Hénaff","doi":"10.1109/DEVLRN.2019.8850697","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850697","url":null,"abstract":"Human-Robot interactions promise to increase as robots become more pervasive. One important aspect is gestural communication which is quite popular in rehabilitation and therapeutic robotics. Indeed, synchrony is a key component of interpersonal interactions which affects the interaction on the behavioural level, as well as on the social level. When interacting physically with a robot, one perceives the robot movements but robot actuators also produce sound. In this work, we wonder whether the sound of actuators can hamper human coordination in human-robot rhythmic interactions. Indeed, the human brain processes the auditory input in priority compared to the visual input. This property can sometimes be so powerful so as to alter or even remove the visual perception. However, under given circumstances, the auditory signal and the visual perception can reinforce each other. In this paper, we propose a study where participants were asked to perform a waving-like gesture back at a robot in three different conditions: with visual perception only, auditory perception only and both perceptions. We analyze coordination performance and focus of gaze in each condition. Results show that the combination of visual and auditory perceptions perturbs the rhythmic interaction.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123987756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
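The abstract reports an analysis of coordination performance but does not specify the metric; as a hedged illustration of one common choice in rhythmic-coordination studies, the sketch below estimates the relative phase and phase-locking value between two waving signals using the Hilbert transform. All signals and parameters are invented for the example.

```python
# Hypothetical sketch: quantifying human-robot rhythmic coordination as the
# relative phase between two waving signals, via the Hilbert transform.
# This is one common metric in coordination studies, not necessarily the
# one used in the paper.
import numpy as np
from scipy.signal import hilbert

def relative_phase(human, robot):
    """Return the circular mean of the phase difference and the phase-locking value (0..1)."""
    phi_h = np.angle(hilbert(human - human.mean()))
    phi_r = np.angle(hilbert(robot - robot.mean()))
    dphi = phi_h - phi_r
    mean_vector = np.mean(np.exp(1j * dphi))
    return np.angle(mean_vector), np.abs(mean_vector)

# Example: two 1 Hz waving gestures sampled at 100 Hz, robot lagging by 0.2 rad.
t = np.linspace(0, 10, 1000)
human = np.sin(2 * np.pi * 1.0 * t)
robot = np.sin(2 * np.pi * 1.0 * t - 0.2)
mean_dphi, plv = relative_phase(human, robot)
print(f"mean relative phase = {mean_dphi:.2f} rad, PLV = {plv:.2f}")
```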
Robotic Interactive Physics Parameters Estimator (RIPPE)
Atabak Dehban, Carlos Cardoso, Pedro Vicente, A. Bernardino, J. Santos-Victor
{"title":"Robotic Interactive Physics Parameters Estimator (RIPPE)","authors":"Atabak Dehban, Carlos Cardoso, Pedro Vicente, A. Bernardino, J. Santos-Victor","doi":"10.1109/DEVLRN.2019.8850710","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850710","url":null,"abstract":"The ability to reason about natural laws of an environment directly contributes to successful performance in it. In this work, we present RIPPE, a framework that allows a robot to leverage existing physics simulators as its knowledge base for learning interactions with in-animate objects. To achieve this, the robot needs to initially interact with its surrounding environment and observe the effects of its behaviours. Relying on the simulator to efficiently solve the partial differential equations describing these physical interactions, the robot infers consistent physical parameters of its surroundings by repeating the same actions in simulation and evaluate how closely they match its real observations. The learning process is performed using Bayesian Optimisation techniques to sample efficiently the parameter space. We assess the utility of these inferred parameters by measuring how well they can explain physical interactions using previously unseen actions and tools.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128116625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
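A minimal sketch of the underlying idea, with invented names and data: a one-parameter toy "simulator" is fitted so that simulated push outcomes match the robot's real observations. The paper uses Bayesian Optimisation over a real physics simulator; the random search and the sim() function below are only simple stand-ins.

```python
# Hypothetical sketch of the idea behind RIPPE: search for simulator physics
# parameters (here a single friction coefficient) that make simulated action
# outcomes match real observations.
import numpy as np

rng = np.random.default_rng(0)

def sim(friction, push_force):
    """Toy 'simulator': sliding distance of a pushed object (not a real PDE solver)."""
    return push_force / (1.0 + 10.0 * friction)

# Real observations the robot collected by pushing an object with known forces.
forces = np.array([1.0, 2.0, 3.0])
real_distances = np.array([0.33, 0.66, 1.00])   # toy data, consistent with friction ~= 0.2

def discrepancy(friction):
    return np.mean((sim(friction, forces) - real_distances) ** 2)

# Stand-in for Bayesian Optimisation: sample candidate parameters, keep the best.
candidates = rng.uniform(0.0, 1.0, size=200)
best = min(candidates, key=discrepancy)
print(f"estimated friction ~= {best:.2f}")
```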
Musculoskeletal Bias on Infant Sensorimotor Development Driven by Predictive Learning
Kaoruko Higuchi, Hoshinori Kanazawa, Yuma Suzuki, Keiko Fujii, Y. Kuniyoshi
{"title":"Musculoskeletal Bias on Infant Sensorimotor Development Driven by Predictive Learning","authors":"Kaoruko Higuchi, Hoshinori Kanazawa, Yuma Suzuki, Keiko Fujii, Y. Kuniyoshi","doi":"10.1109/DEVLRN.2019.8850722","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850722","url":null,"abstract":"In the early developmental stages, infants learn to control complex and redundant body movements using sensory inputs. Reaching with the arm and hand position control are fundamental features of motor development. However, it remains unclear how infants aquire such kind ofmotor control. In the current study, we propose a network model that learns the relationship between motor commands and visual and proprioceptive sensory input using predictive learning to perform reaching based on infantile musculoskeletal body. Based on assumption that human motion is generated from combinations of muscle activation patterns deïňĄned as a motor primitive, we examine the contribution of motor primitive to sensorimotor development. The results of this predictive learning model revealed that acquisition of motor primitives promoted sensorimotor learning of reaching.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121131522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
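As a minimal illustration of the motor-primitive assumption stated in the abstract, the sketch below composes motor commands for a redundant set of muscles from a small number of fixed activation patterns; the dimensions, weights, and primitives are invented for illustration and are not the paper's model.

```python
# Hypothetical sketch: each motor command over N muscles is a weighted
# combination of a few fixed muscle-activation patterns (primitives).
import numpy as np

n_muscles, n_primitives, n_timesteps = 8, 3, 50
rng = np.random.default_rng(1)

# Fixed primitives: each column is one activation pattern over the muscles.
primitives = np.abs(rng.normal(size=(n_muscles, n_primitives)))

# Time-varying weights chosen by the (learned) controller.
weights = np.abs(np.sin(np.linspace(0, np.pi, n_timesteps))[:, None]
                 * rng.uniform(size=(1, n_primitives)))

# Motor command at each timestep = weighted sum of the primitives.
motor_commands = weights @ primitives.T          # shape (n_timesteps, n_muscles)
print(motor_commands.shape)                      # (50, 8)
```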
A Categorization of Reinforcement Learning Exploration Techniques which Facilitates Combination of Different Methods
Bjørn Ivar Teigen, K. Ellefsen, J. Tørresen
{"title":"A Categorization of Reinforcement Learning Exploration Techniques which Facilitates Combination of Different Methods","authors":"Bjørn Ivar Teigen, K. Ellefsen, J. Tørresen","doi":"10.1109/DEVLRN.2019.8850685","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850685","url":null,"abstract":"The exploration vs. exploitation problem in reinforcement learning has received much attention lately. It is difficult to get a complete overview of all the exploration methods, and what methods can or can not be used together in a combined algorithm. We propose a categorization of exploration techniques based on the mechanism through which they generate exploratory policies. This enables plug-and-play combination of exploration techniques from each category. We show how to combine methods from different categories and demonstrate that such a combination can result in a new exploration technique which outperforms each individual method.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130694224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
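As a minimal sketch of the kind of plug-and-play combination the abstract describes, the snippet below pairs a count-based novelty bonus (which shapes the action values) with epsilon-greedy selection (which perturbs the action choice) in a tabular agent. This particular pairing and all parameter values are illustrative assumptions, not the combination evaluated in the paper.

```python
# Hypothetical sketch: combining two exploration mechanisms from different
# "categories" — a count-based bonus added to Q-values plus epsilon-greedy
# action selection.
import numpy as np

rng = np.random.default_rng(2)
n_states, n_actions = 10, 4
Q = np.zeros((n_states, n_actions))
visit_counts = np.ones((n_states, n_actions))    # start at 1 to avoid division by zero

def select_action(state, epsilon=0.1, bonus_scale=0.5):
    if rng.random() < epsilon:                   # mechanism 1: random action choice
        return int(rng.integers(n_actions))
    bonus = bonus_scale / np.sqrt(visit_counts[state])   # mechanism 2: novelty bonus
    return int(np.argmax(Q[state] + bonus))

a = select_action(state=3)
visit_counts[3, a] += 1
```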
Sensorimotor Cross-Behavior Knowledge Transfer for Grounded Category Recognition
Gyan Tatiya, Ramtin Hosseini, M. C. Hughes, J. Sinapov
{"title":"Sensorimotor Cross-Behavior Knowledge Transfer for Grounded Category Recognition","authors":"Gyan Tatiya, Ramtin Hosseini, M. C. Hughes, J. Sinapov","doi":"10.1109/DEVLRN.2019.8850715","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850715","url":null,"abstract":"Humans use exploratory behaviors coupled with multi-modal perception to learn about the objects around them. Research in robotics has shown that robots too can use such behaviors (e.g., grasping, pushing, shaking) to infer object properties that cannot always be detected using visual input alone. However, such learned representations are specific to each individual robot and cannot be directly transferred to another robot with different actions, sensors, and morphology. To address this challenge, we propose a framework for knowledge transfer across different behaviors and modalities that enables a source robot to transfer knowledge about objects to a target robot that has never interacted with them. The intuition behind our approach is that if two robots interact with a shared set of objects, the produced sensory data can be used to learn a mapping between the two robots' feature spaces. We evaluate the framework on a category recognition task using a dataset containing 9 robot behaviors performed multiple times on a set of 100 objects. The results show that the proposed framework can enable a target robot to perform category recognition on a set of novel objects and categories without the need to physically interact with the objects to learn the categorization model.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116514510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
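A minimal sketch of the stated intuition, under illustrative assumptions: paired features from a shared set of objects are used to fit a least-squares mapping from the source robot's feature space to the target robot's, after which features of a novel object can be projected across. The paper's actual encoder, behaviors, and dimensionalities are not specified here.

```python
# Hypothetical sketch: learn a cross-robot feature-space mapping from shared objects.
import numpy as np

rng = np.random.default_rng(3)
n_shared_objects, d_source, d_target = 60, 32, 24

# Features both robots produced while interacting with the same shared objects (toy data).
source_feats = rng.normal(size=(n_shared_objects, d_source))
true_map = rng.normal(size=(d_source, d_target))
target_feats = source_feats @ true_map + 0.05 * rng.normal(size=(n_shared_objects, d_target))

# Learn the mapping by ordinary least squares.
W, *_ = np.linalg.lstsq(source_feats, target_feats, rcond=None)

# Project source-robot features of a novel object into the target robot's space,
# where a category classifier trained on target features could then be applied.
novel_source = rng.normal(size=(1, d_source))
predicted_target = novel_source @ W
print(predicted_target.shape)    # (1, 24)
```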
Babbling elicits simplified caregiver speech: Findings from natural interaction and simulation
Steven L. Elmlinger, J. Schwade, M. Goldstein
{"title":"Babbling elicits simplified caregiver speech: Findings from natural interaction and simulation","authors":"Steven L. Elmlinger, J. Schwade, M. Goldstein","doi":"10.1109/DEVLRN.2019.8850677","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850677","url":null,"abstract":"What is the function of babbling in language learning? Our recent findings suggest that infants' immature vocalizations may elicit simplified linguistic responses from their caregivers. The contributions of parental speech to infant development are well established; individual differences in the number of words in infants' ambient language environment predict communicative and cognitive development. It is unclear whether the number or the diversity of words in infants' environments is more critical for understanding infant language development. We present a new solution that observes the relation between the total number of words (tokens) and the diversity of words in infants' environments. Comparing speech corpora containing different numbers of tokens is challenging because the number of tokens strongly influences measures of corpus word diversity. However, here we offer a method for minimizing the effects of corpus size by deriving control samples of words and comparing them to test samples. We find that parents' speech in response to infants' babbling is simpler in that it contains fewer word types; our estimates based on sampling also suggest simplification of word diversity with larger numbers of tokens. Thus, infants, via their immature vocalizations, elicit speech from caregivers that is easier to learn.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128907434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
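A minimal sketch of the corpus-size control described above, with toy corpora: word-type diversity is compared across corpora by repeatedly drawing token samples of equal size, so that differences in corpus length do not drive the diversity measure.

```python
# Hypothetical sketch: control for corpus size by comparing word-type counts
# in equal-sized token samples drawn from each corpus.
import random

def mean_types_per_sample(tokens, sample_size=50, n_samples=1000, seed=0):
    rng = random.Random(seed)
    counts = []
    for _ in range(n_samples):
        sample = rng.sample(tokens, sample_size)
        counts.append(len(set(sample)))
    return sum(counts) / len(counts)

# Toy corpora, invented for illustration.
speech_to_infant = ["ball", "look", "the", "ball", "go", "ball", "look"] * 30
speech_to_adult  = ["well", "the", "meeting", "ran", "late", "again", "today"] * 30

print(mean_types_per_sample(speech_to_infant))   # fewer types per equal-size sample
print(mean_types_per_sample(speech_to_adult))
```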
The role of object motion in visuo-haptic exploration during development
A. Sciutti, G. Sandini
{"title":"The role of object motion in visuo-haptic exploration during development","authors":"A. Sciutti, G. Sandini","doi":"10.1109/DEVLRN.2019.8850687","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850687","url":null,"abstract":"Since infancy we explore novel objects to infer their shape. However, how exploration strategies are planned to combine different sensory inputs is still an open question. In this work we focus on the development of visuo-haptic exploration strategies, by analyzing how school-aged children explore iCube, a sensorized cube measuring its orientation in space and contacts location. Participants' task was to find specific cube faces while they could either only touch the static cube (tactile), move and touch it (haptic) or move, touch and look at it (visuo-haptic). Visuo-haptic performances were adult-like at 7 years of age, whereas haptic exploration was not as effective until 9 years. Moreover, the possibility to rotate the object represented a difficulty rather than an advantage for the youngest age group. These findings are discussed in relation to the development of visuo-haptic integration and in the perspective of enabling early anomalies detection in explorative behaviors.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126087812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Looking Back and Ahead: Adaptation and Planning by Gradient Descent
Shingo Murata, Hiroki Sawa, S. Sugano, T. Ogata
{"title":"Looking Back and Ahead: Adaptation and Planning by Gradient Descent","authors":"Shingo Murata, Hiroki Sawa, S. Sugano, T. Ogata","doi":"10.1109/DEVLRN.2019.8850693","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850693","url":null,"abstract":"Adaptation and planning are crucial for both biological and artificial agents. In this study, we treat these as an inference problem that we solve using a gradient-based optimization approach. We propose adaptation and planning by gradient descent (APGraDe), a gradient-based computational framework with a hierarchical recurrent neural network (RNN) for adaptation and planning. This framework computes (counterfactual) prediction errors by looking back on past situations based on actual observations and by looking ahead to future situations based on preferred observations (or goal). The internal state of the higher level of the RNN is optimized in the direction of minimizing these errors. The errors for the past contribute to the adaptation while errors for the future contribute to the planning. The proposed APGraDe framework is implemented in a humanoid robot and the robot performs a ball manipulation task with a human experimenter. Experimental results show that given a particular preference, the robot can adapt to unexpected situations while pursuing its own preference through the planning of future actions.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133742064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
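A minimal sketch of the "looking back and ahead" optimization, under heavy simplifying assumptions: a toy linear predictor stands in for the paper's hierarchical RNN, and a low-dimensional latent state is optimized by gradient descent so that replaying the past reproduces actual observations while rolling forward reaches a preferred (goal) observation.

```python
# Hypothetical sketch: optimize a latent state so that past prediction errors
# (adaptation) and future goal errors (planning) are jointly minimized.
import torch

torch.manual_seed(0)
obs_dim, horizon_past, horizon_future = 4, 5, 5
predictor = torch.nn.Linear(obs_dim + 2, obs_dim)   # toy predictor: (prev_obs, latent) -> next obs

past_obs = torch.randn(horizon_past + 1, obs_dim)   # actually observed trajectory
goal_obs = torch.randn(obs_dim)                     # preferred future observation

latent = torch.zeros(2, requires_grad=True)         # higher-level internal state
opt = torch.optim.Adam([latent], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    # Looking back: replay the past and accumulate prediction errors.
    past_error = 0.0
    for t in range(horizon_past):
        pred = predictor(torch.cat([past_obs[t], latent]))
        past_error = past_error + torch.sum((pred - past_obs[t + 1]) ** 2)
    # Looking ahead: roll the predictor forward and compare to the goal.
    x = past_obs[-1]
    for _ in range(horizon_future):
        x = predictor(torch.cat([x, latent]))
    future_error = torch.sum((x - goal_obs) ** 2)
    (past_error + future_error).backward()
    opt.step()
```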
Motor Coordination Learning for Rhythmic Movements
Mélanie Jouaiti, P. Hénaff
{"title":"Motor Coordination Learning for Rhythmic Movements","authors":"Mélanie Jouaiti, P. Hénaff","doi":"10.1109/DEVLRN.2019.8850678","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850678","url":null,"abstract":"The perspective of ubiquitous robots raises the issue of social acceptance. It is our belief that a successful robot integration relies on adequate social responses. Human social interactions heavily rely on synchrony which leads humans to connect emotionally. It is henceforth, our opinion, that motor coordination mechanisms should be fully integrated to robot controllers, allowing coordination, and thus social synchrony, when required. The aim of the work presented in this paper is to learn motor coordination with a human partner performing rhythmic movements. For that purpose, plastic Central Pattern Generators (CPG) are implemented in the joints of the Pepper robot. Hence, in this paper, we present an adaptive versatile model which can be used for any rhythmic movement and combination of joints. This is demonstrated with various arm movements.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133201245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
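The paper's plastic CPGs are not specified in this abstract; as a hedged illustration of frequency plasticity in a rhythmic controller, the sketch below integrates an adaptive Hopf oscillator whose intrinsic frequency drifts toward that of an external rhythmic input (e.g., a partner's waving gesture). Parameter values and the driving signal are invented for the example.

```python
# Hypothetical sketch: adaptive (frequency-plastic) Hopf oscillator driven by
# an external rhythm. Its intrinsic frequency omega adapts toward the input's
# frequency, a simple form of the plasticity described above.
import numpy as np

dt, gamma, mu, eps = 0.001, 8.0, 1.0, 0.9
x, y, omega = 1.0, 0.0, 2.0 * np.pi * 0.5        # start believing the rhythm is 0.5 Hz
input_freq = 2.0 * np.pi * 1.0                   # the partner actually waves at 1 Hz

for step in range(int(60.0 / dt)):               # 60 s of simulated interaction
    t = step * dt
    F = np.sin(input_freq * t)                   # external rhythmic input
    r = np.sqrt(x * x + y * y) + 1e-9
    dx = gamma * (mu - r * r) * x - omega * y + eps * F
    dy = gamma * (mu - r * r) * y + omega * x
    domega = -eps * F * y / r                    # frequency plasticity
    x, y, omega = x + dt * dx, y + dt * dy, omega + dt * domega

print(f"adapted frequency ~= {omega / (2 * np.pi):.2f} Hz")   # drifts toward the 1 Hz input
```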
A Cognitive Architecture for Socially Adaptable Robots
Ana Tanevska, F. Rea, G. Sandini, Lola Cañamero, A. Sciutti
{"title":"A Cognitive Architecture for Socially Adaptable Robots","authors":"Ana Tanevska, F. Rea, G. Sandini, Lola Cañamero, A. Sciutti","doi":"10.1109/DEVLRN.2019.8850688","DOIUrl":"https://doi.org/10.1109/DEVLRN.2019.8850688","url":null,"abstract":"A social robot that's aware of our needs and continuously adapts its behaviour to them has the potential of creating a complex, personalized, human-like interaction of the kind we are used to have with our peers in our everyday lives. However adaptability, being a result of a process of learning and making errors, brings with itself also uncertainty, and we as humans are heavily relying on the machines we use to always be predictable and consistent. To further explore this, we propose a cognitive architecture for the humanoid robot iCub supporting adaptability and we attempt to validate its functionality and establish the potential benefits it could bring with respect to the more traditional pre-scripted interaction protocols for robots.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123570818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14