2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL): Latest Publications

Building specific contexts for on-line learning of dynamical tasks through non-verbal interaction
A. D. Rengervé, Souheil Hanoune, P. Andry, M. Quoy, P. Gaussier
{"title":"Building specific contexts for on-line learning of dynamical tasks through non-verbal interaction","authors":"A. D. Rengervé, Souheil Hanoune, P. Andry, M. Quoy, P. Gaussier","doi":"10.1109/DEVLRN.2013.6652564","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652564","url":null,"abstract":"Trajectories can be encoded as attraction basin resulting from recruited associations between visually based localization and orientations to follow (low level behaviors). Navigation to different places according to some other multimodal information needs a particular learning. We propose a minimal model explaining such a behavior adaptation from non-verbal interaction with a teacher. Specific contexts can be recruited to prevent the behaviors to activate in cases the interaction showed they were inadequate. Still, the model is compatible with the recruitment of new low level behaviors. The tests done in simulation show the capabilities of the architecture, the limitations regarding the generalization and the learning speed. We also discuss the possible evolutions towards more bio-inspired models.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126673079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
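To make the context-recruitment mechanism concrete, here is a minimal Python sketch. It is an illustration under invented names (ContextGatedNavigator, recruit, inhibit are assumptions made here), not the authors' neural implementation: place-to-heading associations play the role of the low-level behaviors whose readout forms an attraction basin, and a recruited context tag suppresses an association that non-verbal teacher feedback marked as inadequate, while a replacement behavior can still be recruited.

```python
import numpy as np

class ContextGatedNavigator:
    """Toy stand-in for the paper's architecture (names invented here):
    place->heading associations whose winner-take-all readout approximates an
    attraction basin, plus context tags that inhibit inadequate behaviors."""

    def __init__(self):
        self.places = []       # visual-localization prototypes
        self.headings = []     # orientation to follow near each place
        self.inhibitors = []   # context tags that suppress each association

    def recruit(self, place, heading):
        # Recruit a new low-level behavior (place-heading association).
        self.places.append(np.asarray(place, dtype=float))
        self.headings.append(float(heading))
        self.inhibitors.append(set())
        return len(self.places) - 1

    def inhibit(self, index, context):
        # Non-verbal correction: recruit a context that blocks this behavior.
        self.inhibitors[index].add(context)

    def heading_at(self, position, active_contexts=frozenset()):
        # Pick the heading of the closest non-inhibited place prototype.
        position = np.asarray(position, dtype=float)
        best_heading, best_weight = None, 0.0
        for place, heading, inh in zip(self.places, self.headings, self.inhibitors):
            if inh & set(active_contexts):
                continue   # behavior suppressed in the current multimodal context
            weight = np.exp(-np.sum((position - place) ** 2))
            if weight > best_weight:
                best_heading, best_weight = heading, weight
        return best_heading

nav = ContextGatedNavigator()
east = nav.recruit([0.0, 0.0], heading=0.0)          # original behavior: head east
nav.recruit([0.0, 0.0], heading=np.pi)               # replacement behavior: head west
nav.inhibit(east, context="sound_cue")               # east proved inadequate when the cue is on
print(nav.heading_at([0.1, 0.0]))                               # 0.0 (east)
print(nav.heading_at([0.1, 0.0], active_contexts={"sound_cue"}))  # 3.14... (west)
```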
The significance of social input, early motion experiences, and attentional selection
Joseph M. Burling, Hanako Yoshida, Y. Nagai
{"title":"The significance of social input, early motion experiences, and attentional selection","authors":"Joseph M. Burling, Hanako Yoshida, Y. Nagai","doi":"10.1109/DEVLRN.2013.6652556","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652556","url":null,"abstract":"Before babies acquire an adult-like visual capacity, they participate in a social world as a human learning system which promotes social activities around them and in turn dramatically alters their own social participation. Visual input becomes more dynamic as they gain self-generated movement, and such movement has a potential role in learning. The present study specifically looks at the expected change in motion of the early visual input that infants are exposed to, and the corresponding attentional coordination within the specific context of parent-infant interactions. The results will be discussed in terms of the significance of social input for development.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121747729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
Frustration as a way toward autonomy and self-improvement in robotic navigation
Adrien Jauffret, Marwen Belkaid, N. Cuperlier, P. Gaussier, P. Tarroux
{"title":"Frustration as a way toward autonomy and self-improvement in robotic navigation","authors":"Adrien Jauffret, Marwen Belkaid, N. Cuperlier, P. Gaussier, P. Tarroux","doi":"10.1109/DEVLRN.2013.6652540","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652540","url":null,"abstract":"Autonomy and self-improvement capabilities are still challenging in the field of robotics. Allowing a robot to autonomously navigate in wide and unknown environments not only requires a set of robust strategies to cope with miscellaneous situations, but also needs mechanisms of self-assessment for guiding learning and for monitoring strategies. Monitoring strategies requires feedbacks on the behavior's quality, from a given fitness system in order to take correct decisions. In this work, we focus on how an emotional controller can be used to modulate robot behaviors. Following an incremental and constructivist approach, we present a generic neural architecture, based on an online novelty detection algorithm that may be able to evaluate any sensory-motor strategies. This architecture learns contingencies between sensations and actions, giving the expected sensation from the past perception. Prediction error, coming from surprising events, provides a direct measure of the quality of the underlying sensory-motor contingencies involved. We show how a simple emotional controller based on the prediction progress allows the system to regulate its behavior to solve complex navigation tasks and to communicate its disability in deadlock situations. We propose that this model could be a key structure toward self-monitoring. We made several experiments that can account for such properties with different behaviors (road following and place cells based navigation).","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133906341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
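The self-assessment loop described in the abstract can be sketched in a few lines. The snippet below is a toy illustration under invented names (StrategyMonitor, its learning rate and its frustration threshold are assumptions, not the authors' architecture): an online predictor learns sensory-motor contingencies, its prediction error scores the current strategy, and persistently high error with no prediction progress is read out as a frustration signal.

```python
import numpy as np

class StrategyMonitor:
    """Toy self-assessment sketch (names invented here): an online linear
    predictor of the next sensation from the current sensation and action.
    High, non-decreasing prediction error is read out as frustration."""

    def __init__(self, sensation_dim, action_dim, lr=0.05):
        self.W = np.zeros((sensation_dim, sensation_dim + action_dim))
        self.lr = lr
        self.fast_err = 0.0    # short-term average of prediction error
        self.slow_err = 0.0    # long-term average of prediction error

    def step(self, sensation, action, next_sensation):
        x = np.concatenate([sensation, action])
        prediction = self.W @ x
        error = next_sensation - prediction
        self.W += self.lr * np.outer(error, x)          # delta-rule update
        e = float(np.linalg.norm(error))
        self.fast_err = 0.8 * self.fast_err + 0.2 * e
        self.slow_err = 0.98 * self.slow_err + 0.02 * e
        progress = self.slow_err - self.fast_err        # error decreasing -> progress > 0
        frustrated = self.fast_err > 0.5 and progress <= 0.0
        return e, progress, frustrated

# Toy usage: a predictable sensory-motor loop lets the error shrink over time.
rng = np.random.default_rng(0)
monitor = StrategyMonitor(sensation_dim=3, action_dim=2)
s = rng.normal(size=3)
for t in range(200):
    a = rng.normal(size=2)
    s_next = 0.5 * s + 0.3 * np.array([a[0], a[1], a[0] - a[1]])  # predictable dynamics
    err, progress, frustrated = monitor.step(s, a, s_next)
    s = s_next
print(round(err, 3), frustrated)
```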
Learning semantic components from subsymbolic multimodal perception
Olivier Mangin, Pierre-Yves Oudeyer
{"title":"Learning semantic components from subsymbolic multimodal perception","authors":"Olivier Mangin, Pierre-Yves Oudeyer","doi":"10.1109/DEVLRN.2013.6652563","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652563","url":null,"abstract":"Perceptual systems often include sensors from several modalities. However, existing robots do not yet sufficiently discover patterns that are spread over the flow of multimodal data they receive. In this paper we present a framework that learns a dictionary of words from full spoken utterances, together with a set of gestures from human demonstrations and the semantic connection between words and gestures. We explain how to use a nonnegative matrix factorization algorithm to learn a dictionary of components that represent meaningful elements present in the multimodal perception, without providing the system with a symbolic representation of the semantics. We illustrate this framework by showing how a learner discovers word-like components from observation of gestures made by a human together with spoken descriptions of the gestures, and how it captures the semantic association between the two.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130033685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 29
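The core of the method, learning a shared dictionary over stacked speech and gesture representations with nonnegative matrix factorization, can be sketched on toy data. The snippet below uses scikit-learn's NMF on synthetic vectors; the dimensions, the generated data and the cross-modal readout via least squares are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from sklearn.decomposition import NMF

# Stack a nonnegative speech representation and a nonnegative gesture
# representation for each demonstration into one column, then factor the
# resulting matrix with NMF. Each learned component mixes both modalities,
# so a word-like pattern and the gesture it describes share a dictionary atom.
rng = np.random.default_rng(0)
n_demos, speech_dim, gesture_dim, n_concepts = 60, 20, 12, 3

# Toy ground truth: each concept has a speech part and a gesture part.
true_dict = rng.random((speech_dim + gesture_dim, n_concepts))
activations = rng.random((n_concepts, n_demos)) * (rng.random((n_concepts, n_demos)) > 0.5)
data = true_dict @ activations + 0.01 * rng.random((speech_dim + gesture_dim, n_demos))

nmf = NMF(n_components=n_concepts, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(data)     # learned multimodal dictionary (rows: stacked features)
H = nmf.components_             # activation of each component per demonstration

# Cross-modal use: from the speech half of a new utterance, infer the active
# components, then read out the expected gesture half.
speech_only = data[:speech_dim, 0]
h = np.linalg.lstsq(W[:speech_dim], speech_only, rcond=None)[0].clip(min=0)
expected_gesture = W[speech_dim:] @ h
print(expected_gesture.shape)   # (12,)
```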
Autonomous reuse of motor exploration trajectories
Fabien C. Y. Benureau, Pierre-Yves Oudeyer
{"title":"Autonomous reuse of motor exploration trajectories","authors":"Fabien C. Y. Benureau, Pierre-Yves Oudeyer","doi":"10.1109/DEVLRN.2013.6652567","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652567","url":null,"abstract":"We present an algorithm for transferring exploration strategies between tasks that share a common motor space in the context of lifelong autonomous learning in robotics. The algorithm does not transfer observations, or make assumptions about how the learning is conducted. Instead, only selected motor commands are transferred between tasks, chosen autonomously according to an empirical measure of learning progress. We show that on a wide variety of variations from a source task, such as changing the object the robot is interacting with or altering the morphology of the robot, this simple and flexible transfer method increases early performance significantly in the new task. We also provide examples of situations where the transfer is not helpful.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116156700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
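A minimal sketch of the reuse idea follows, with invented helper names (learning_progress, select_commands_to_reuse) and a fabricated source log: motor commands are ranked by the empirical learning progress measured around the time they were tried, and the top-ranked commands seed exploration in the new task. This illustrates the principle, not the authors' algorithm.

```python
import numpy as np

def learning_progress(errors):
    """Drop in prediction error between the first and second half of a window."""
    half = len(errors) // 2
    return float(np.mean(errors[:half]) - np.mean(errors[half:]))

def select_commands_to_reuse(source_log, n_reuse=5, window=10):
    """source_log: list of (motor_command, prediction_error) pairs in visit order."""
    scores = []
    for i, (command, _) in enumerate(source_log):
        lo, hi = max(0, i - window), min(len(source_log), i + window)
        errs = [e for _, e in source_log[lo:hi]]
        scores.append((learning_progress(errs), command))
    scores.sort(key=lambda s: s[0], reverse=True)   # highest progress first
    return [command for _, command in scores[:n_reuse]]

rng = np.random.default_rng(1)
# Fake source log: errors shrink over time, i.e. the learner was making progress.
log = [(rng.uniform(-1, 1, size=4), 1.0 / (1 + t)) for t in range(100)]
seeds = select_commands_to_reuse(log, n_reuse=5)
print(len(seeds), seeds[0].shape)   # 5 (4,)
```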
A generative probabilistic framework for learning spatial language
C. Dawson, Jeremy B. Wright, Antons Rebguns, M. Valenzuela-Escarcega, Daniel Fried, P. Cohen
{"title":"A generative probabilistic framework for learning spatial language","authors":"C. Dawson, Jeremy B. Wright, Antons Rebguns, M. Valenzuela-Escarcega, Daniel Fried, P. Cohen","doi":"10.1109/DEVLRN.2013.6652560","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652560","url":null,"abstract":"The language of space and spatial relations is a rich source of abstract semantic structure. We develop a probabilistic model that learns to understand utterances that describe spatial configurations of objects in a tabletop scene by seeking the meaning that best explains the sentence chosen. The inference problem is simplified by assuming that sentences express symbolic representations of (latent) semantic relations between referents and landmarks in space, and that given these symbolic representations, utterances and physical locations are conditionally independent. As such, the inference problem factors into a symbol-grounding component (linking propositions to physical locations) and a symbol-translation component (linking propositions to parse trees). We evaluate the model by eliciting production and comprehension data from human English speakers and find that our system recovers the referent of spatial utterances at a level of proficiency approaching human performance.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132583270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 19
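The assumed factorization can be written down directly. The toy snippet below, with an invented relation inventory and hand-coded likelihoods rather than the paper's learned distributions, shows how comprehension scores a candidate location by marginalizing over latent relations, multiplying a symbol-translation term P(utterance | relation) by a symbol-grounding term P(location | relation).

```python
import numpy as np

# Given a latent relation r, the utterance u and the referent location x are
# conditionally independent, so P(r | u, x) is proportional to
# P(u | r) * P(x | r) * P(r); comprehension sums this over relations.
RELATIONS = ["left_of", "right_of", "near"]

def p_utterance_given_relation(utterance, relation):
    # Symbol-translation component (here: a crude keyword match).
    return 0.8 if relation.split("_")[0] in utterance else 0.1

def p_location_given_relation(location, landmark, relation):
    # Symbol-grounding component: simple geometric likelihoods.
    dx = location[0] - landmark[0]
    dy = location[1] - landmark[1]
    dist = np.hypot(dx, dy)
    if relation == "left_of":
        return np.exp(-dist) * (1.0 if dx < 0 else 0.01)
    if relation == "right_of":
        return np.exp(-dist) * (1.0 if dx > 0 else 0.01)
    return np.exp(-2.0 * dist)                      # "near"

def score_location(utterance, location, landmark):
    return sum(p_utterance_given_relation(utterance, r)
               * p_location_given_relation(location, landmark, r)
               * (1.0 / len(RELATIONS))             # uniform prior over relations
               for r in RELATIONS)

landmark = (0.0, 0.0)
candidates = [(-1.0, 0.0), (1.0, 0.0)]
scores = [score_location("the block left of the box", c, landmark) for c in candidates]
print(candidates[int(np.argmax(scores))])           # (-1.0, 0.0)
```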
Extracting image features in static images for depth estimation
M. Ogino, Junji Suzuki, M. Asada
{"title":"Extracting image features in static images for depth estimation","authors":"M. Ogino, Junji Suzuki, M. Asada","doi":"10.1109/DEVLRN.2013.6652551","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652551","url":null,"abstract":"Human feels three-dimensional effect for static image with the cues of various kinds of image features such as relative sizes of objects, up and down, rules of perspective, texture gradient, and shadow. The features are called pictorial depth cues. Human is thought to learn to extract these features as important cues for depth estimation in the developmental process. In this paper, we make a hypothesis that pictorial depth cues are acquired so that disparities can be predicted well and make a model that extracts features appropriate for depth estimation from static images. Random forest network is trained to extract important ones among a large amount image features so as to estimate motion and stereo disparities. The experiments with simulation and real environments show high correlation between estimated and real disparities.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126726676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
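The training setup, regressing disparity from a large pool of image statistics with a random forest and then reading off which statistics matter, can be sketched with scikit-learn on synthetic data. The feature indices, dimensions and "depth signal" below are assumptions for illustration, not the paper's features or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# A random forest regresses stereo disparity from a large bag of per-patch
# image statistics; feature_importances_ then indicates which statistics
# could act as pictorial depth cues for static images.
rng = np.random.default_rng(0)
n_patches, n_features = 2000, 40

features = rng.normal(size=(n_patches, n_features))
# Pretend features 3 (e.g. texture-gradient energy) and 17 (e.g. vertical
# position) carry most of the depth signal; the rest are noise.
disparity = 2.0 * features[:, 3] - 1.5 * features[:, 17] + 0.1 * rng.normal(size=n_patches)

forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(features[:1500], disparity[:1500])

predicted = forest.predict(features[1500:])
correlation = np.corrcoef(predicted, disparity[1500:])[0, 1]
print(f"held-out correlation: {correlation:.2f}")
print("most informative features:", np.argsort(forest.feature_importances_)[-2:])
```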
Aquila 2.0 software architecture for cognitive robotics
M. Peniak, Anthony F. Morse, A. Cangelosi
{"title":"Aquila 2.0 software architecture for cognitive robotics","authors":"M. Peniak, Anthony F. Morse, A. Cangelosi","doi":"10.1109/DEVLRN.2013.6652565","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652565","url":null,"abstract":"The modelling of the integration of various cognitive skills and modalities requires complex and computationally intensive algorithms running in parallel while controlling high-performance systems. The distribution of processing across many computers has certainly advanced our software ecosystem and opened up research to new possibilities. While this was an essential move, we are aspiring to augment the field of cognitive robotics by providing Aquila 2.0, a novel hi-performance software architecture utilising cross-platform, heterogeneous CPU-GPU modules loosely coupled with GUIs used for module management and data visualisation.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123169777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
Emergence of flexible prediction-based discrete decision making and continuous motion generation through actor-Q-learning
K. Shibata, Kenta Goto
{"title":"Emergence of flexible prediction-based discrete decision making and continuous motion generation through actor-Q-learning","authors":"K. Shibata, Kenta Goto","doi":"10.1109/DEVLRN.2013.6652559","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652559","url":null,"abstract":"In this paper, the authors first point the importance of three factors for filling the gap between humans and robots in the flexibility in the real world. Those are (1)parallel processing, (2)emergence through learning and solving “what” problems, and (3)abstraction and generalization on the abstract space. To explore the possibility of human-like flexibility in robots, a prediction-required task in which an agent (robot) gets a reward by capturing a moving target that sometimes becomes invisible was learned by reinforcement learning using a recurrent neural network. Even though the agent did not know in advance that “prediction is required” or “what information should be predicted”, appropriate discrete decision making, in which `capture' or `move' was chosen, and also continuous motion generation in two-dimensional space, could be acquired. Furthermore, in this task, the target sometimes changed its moving direction randomly when it became visible again from invisible state. Then the agent could change its moving direction promptly and appropriately without introducing any special architecture or technique. Such emergent property is what general parallel processing systems such as Subsumption architecture do not have, and the authors believe it is a key to solve the “Frame Problem” fundamentally.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127952319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 15
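A heavily simplified sketch of the actor-Q scheme is given below: a Q head picks the discrete action ('move' or 'capture') and an actor head generates the continuous motion, both trained from the same TD error. The one-dimensional toy task, linear function approximation and all parameter values are stand-ins introduced here; the paper uses a recurrent neural network on a richer two-dimensional capture task.

```python
import numpy as np

rng = np.random.default_rng(0)
ACTIONS = ["move", "capture"]
w_q = np.zeros((2, 2))        # Q weights over the features [agent_pos, target_pos]
w_actor = np.zeros(2)         # continuous motion = w_actor . features (+ exploration noise)
alpha, gamma, sigma = 0.05, 0.9, 0.2

for episode in range(2000):
    agent, target = 0.0, rng.uniform(-1, 1)
    for t in range(20):
        x = np.array([agent, target])
        q = w_q @ x
        # Epsilon-greedy choice between the two discrete actions.
        a = int(np.argmax(q)) if rng.random() > 0.1 else int(rng.integers(2))
        if ACTIONS[a] == "capture":
            reward = 1.0 if abs(agent - target) < 0.1 else -0.1
            w_q[a] += alpha * (reward - q[a]) * x            # terminal TD update
            break
        motion = float(w_actor @ x) + sigma * rng.normal()   # exploratory continuous action
        agent = float(np.clip(agent + motion, -1.0, 1.0))
        td = gamma * np.max(w_q @ np.array([agent, target])) - q[a]
        w_q[a] += alpha * td * x
        w_actor += alpha * td * (motion - w_actor @ x) * x   # actor learns from the same TD error

print("actor weights after training:", w_actor)   # toy run; no claim about final behavior
```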
Learning to recognize objects through curiosity-driven manipulation with the iCub humanoid robot
S. Nguyen, S. Ivaldi, Natalia Lyubova, Alain Droniou, Damien Gérardeaux-Viret, David Filliat, V. Padois, Olivier Sigaud, Pierre-Yves Oudeyer
{"title":"Learning to recognize objects through curiosity-driven manipulation with the iCub humanoid robot","authors":"S. Nguyen, S. Ivaldi, Natalia Lyubova, Alain Droniou, Damien Gérardeaux-Viret, David Filliat, V. Padois, Olivier Sigaud, Pierre-Yves Oudeyer","doi":"10.1109/DEVLRN.2013.6652525","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652525","url":null,"abstract":"In this paper we address the problem of learning to recognize objects by manipulation in a developmental robotics scenario. In a life-long learning perspective, a humanoid robot should be capable of improving its knowledge of objects with active perception. Our approach stems from the cognitive development of infants, exploiting active curiosity-driven manipulation to improve perceptual learning of objects. These functionalities are implemented as perception, control and active exploration modules as part of the Cognitive Architecture of the MACSi project. In this paper we integrate these functionalities into an active perception system which learns to recognise objects through manipulation. Our work in this paper integrates a bottom-up vision system, a control system of a complex robot system and a top-down interactive exploration method, which actively chooses an exploration method to collect data and whether interacting with humans is profitable or not. Experimental results show that the humanoid robot iCub can learn to recognize 3D objects by manipulation and in interaction with teachers by choosing the adequate exploration strategy to enhance competence progress and by focusing its efforts on the most complex tasks. Thus the learner can learn interactively with humans by actively self-regulating its requests for help.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131943364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 33
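The strategy-selection principle, preferring whichever exploration strategy currently yields the fastest competence progress, can be sketched as a simple bandit-style loop. The strategy names and toy competence curves below are invented for illustration and do not reproduce the MACSi implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
STRATEGIES = ["self_exploration", "ask_teacher"]
history = {s: [] for s in STRATEGIES}   # competence achieved per strategy, over time

def competence_progress(values, window=5):
    # Recent improvement in competence; optimistic until enough data exists.
    if len(values) < 2 * window:
        return 1.0
    recent, older = values[-window:], values[-2 * window:-window]
    return float(np.mean(recent) - np.mean(older))

def toy_trial(strategy, t):
    # Stand-in for a real manipulation episode: teacher help pays off early,
    # self-exploration keeps improving later.
    if strategy == "ask_teacher":
        return 1.0 - np.exp(-t / 10.0) + 0.05 * rng.normal()
    return min(1.0, t / 60.0) + 0.05 * rng.normal()

for t in range(100):
    progress = {s: competence_progress(history[s]) for s in STRATEGIES}
    # Softmax over progress: mostly exploit the fastest-improving strategy.
    p = np.exp([10 * progress[s] for s in STRATEGIES])
    strategy = rng.choice(STRATEGIES, p=p / p.sum())
    history[strategy].append(toy_trial(strategy, t))

print({s: len(history[s]) for s in STRATEGIES})   # how often each strategy was chosen
```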