2011 IEEE International Conference on Development and Learning (ICDL) - Latest Publications

The power of words
2011 IEEE International Conference on Development and Learning (ICDL) Pub Date: 2011-10-10 DOI: 10.1109/DEVLRN.2011.6037349
Anthony F. Morse, Paul E. Baxter, Tony Belpaeme, Linda B. Smith, A. Cangelosi
{"title":"The power of words","authors":"Anthony F. Morse, Paul E. Baxter, Tony Belpaeme, Linda B. Smith, A. Cangelosi","doi":"10.1109/DEVLRN.2011.6037349","DOIUrl":"https://doi.org/10.1109/DEVLRN.2011.6037349","url":null,"abstract":"Language is special, yet its power to facilitate communication may have distracted researchers from the power of another, potential precursor ability: the ability to label things, and the effect this can have in transforming or extending cognitive abilities. In this paper we present a simple robotic model, using the iCub robot, demonstrating the effects of spatial grouping, binding, and linguistic tagging in extending our cognitive abilities.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127076276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
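The binding idea in the abstract can be made concrete with a toy example. The sketch below (ours, not the authors' iCub architecture) binds word labels to co-active feature vectors with a Hebbian outer-product rule, so that a label alone later re-activates the features it was paired with; all names and parameter values are illustrative assumptions.

```python
import numpy as np

# Toy illustration of linguistic tagging as a binding mechanism:
# a word label is bound to whatever feature vector is active when
# the label is heard, and can later re-activate that vector.
# This is a minimal sketch, not the authors' iCub architecture.

rng = np.random.default_rng(0)
n_features, n_words = 32, 5

# One random (normalized) feature vector per object the robot sees.
objects = rng.normal(size=(n_words, n_features))
objects /= np.linalg.norm(objects, axis=1, keepdims=True)

# Hebbian binding: accumulate the object features co-active with each word.
W = np.zeros((n_words, n_features))
for word, features in enumerate(objects):
    W[word] += features            # word `word` heard while seeing `features`

def recall(word):
    # Hearing a word re-activates the stored feature pattern.
    return W[word]

# The label extends cognition: the best-matching stored object is
# recovered from the word alone, with no visual input present.
for word in range(n_words):
    sims = objects @ recall(word)
    assert sims.argmax() == word
print("each word re-activates the feature vector it was bound to")
```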
People-aware navigation for goal-oriented behavior involving a human partner
2011 IEEE International Conference on Development and Learning (ICDL) Pub Date: 2011-10-10 DOI: 10.1109/DEVLRN.2011.6037331
David Feil-Seifer, M. Matarić
{"title":"People-aware navigation for goal-oriented behavior involving a human partner","authors":"David Feil-Seifer, M. Matarić","doi":"10.1109/DEVLRN.2011.6037331","DOIUrl":"https://doi.org/10.1109/DEVLRN.2011.6037331","url":null,"abstract":"In order to facilitate effective autonomous interaction behavior for human-robot interaction the robot should be able to execute goal-oriented behavior while reacting to sensor feedback related to the people with which it is interacting. Prior work has demonstrated that autonomously sensed distance-based features can be used to correctly detect user state. We wish to demonstrate that such models can also be used to weight action selection as well. This paper considers the problem of moving to a goal along with a partner, demonstrating that a learned model can be used to weight trajectories of a navigation system for autonomous movement. This paper presents a realization of a person-aware navigation system which requires no ad-hoc parameter tuning, and no input other than a small set of training examples. This system is validated using an in-lab demonstration of people-aware navigation using the described system.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"146 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132161322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 46
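A minimal sketch of how a model learned from a few training examples could weight candidate trajectories, roughly in the spirit described above. The Gaussian comfort model over robot-partner distance and all parameter values are assumptions made for illustration, not the authors' formulation.

```python
import numpy as np

# Sketch: score candidate trajectories by goal progress plus a social
# term weighted by a model learned from training examples.

rng = np.random.default_rng(1)

# "Training examples": observed comfortable robot-partner distances (m).
train_distances = rng.normal(loc=1.2, scale=0.3, size=50)
mu, sigma = train_distances.mean(), train_distances.std()

def social_weight(distance):
    # Likelihood of the distance under the learned Gaussian model.
    return np.exp(-0.5 * ((distance - mu) / sigma) ** 2)

def score(trajectory, goal, partner):
    # Progress term: negative remaining distance to the goal.
    progress = -np.linalg.norm(goal - trajectory[-1])
    # Social term: mean learned weight along the trajectory.
    social = np.mean([social_weight(np.linalg.norm(p - partner))
                      for p in trajectory])
    return progress + social

goal, partner = np.array([5.0, 0.0]), np.array([2.5, 0.8])
candidates = [np.linspace([0, 0], [5, dy], 20) for dy in (-1.0, 0.0, 1.0)]
best = max(candidates, key=lambda t: score(t, goal, partner))
print("chosen lateral offset at the goal:", best[-1][1])
```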
It's the child's body: The role of toddler and parent in selecting toddler's visual experience
2011 IEEE International Conference on Development and Learning (ICDL) Pub Date: 2011-10-10 DOI: 10.1109/DEVLRN.2011.6037330
Tian Xu, Yu Chen, Linda B. Smith
{"title":"It's the child's body: The role of toddler and parent in selecting toddler's visual experience","authors":"Tian Xu, Yu Chen, Linda B. Smith","doi":"10.1109/DEVLRN.2011.6037330","DOIUrl":"https://doi.org/10.1109/DEVLRN.2011.6037330","url":null,"abstract":"Human visual experience is tightly coupled to action - to the perceiver's eye, head, hand and body movements. Social interactions and joint attention are also tied to action, to the mutually influencing and coupled eye, head, hand and body movements of the participants. This study considers the role of the child's own sensory-motor dynamics and those of the social partner in structuring the visual experiences of the toddler. To capture the first-person visual experience, a mini head-mounted camera was placed on the participants' forehead. Two social contexts were studied: 1) parent-child play wherein children and parents jointly played with toys; and 2) child play alone wherein parents were asked to read a document while letting the child play by himself. Visual information from the child's first person view and manual actions from both participants were processed and analyzed. The main finding is that the dynamics of the toddler's visual experience did not differ significantly between the two conditions, showing in both conditions highly selective views that largely reduced noise perceived by the child. These views were strongly related to the child's own head and hand actions. Although the dynamics of children's visual experience appear dependent mainly on their own body dynamics, parents also play a complementary role in selecting the targets for the child's momentary attention.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"449 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134359925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
A generative model for developmental understanding of visuomotor experience
2011 IEEE International Conference on Development and Learning (ICDL) Pub Date: 2011-10-10 DOI: 10.1109/DEVLRN.2011.6037357
K. Noda, Kenta Kawamoto, Takashi Hasuo, K. Sabe
{"title":"A generative model for developmental understanding of visuomotor experience","authors":"K. Noda, Kenta Kawamoto, Takashi Hasuo, K. Sabe","doi":"10.1109/DEVLRN.2011.6037357","DOIUrl":"https://doi.org/10.1109/DEVLRN.2011.6037357","url":null,"abstract":"By manipulating objects in their environment, infants learn about the surrounding environment and continuously improve their internal model of their own body. Moreover, infants learn to distinguish parts of their own body from other objects in the environment. In the field of neuroscience, studies have revealed that the posterior parietal cortex of the primate brain is involved in the awareness of self-generated movements. In the field of robotics, however, little has been done to propose computationally reasonable models to explain these biological findings. In the present paper, we propose a generative model by which an agent can estimate appearance as well as motion models from its visuomotor experience through Bayesian inference. By introducing a factorial representation, we show that multiple objects can be segmented from an unsupervised sensory-motor sequence, single frames of which appear as a random patterns of dots. Moreover, we propose a novel approach by which to identify an object associated with self-generating action.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131018713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
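The core intuition - that the dots belonging to the agent's own body are the ones whose motion follows its motor commands - can be sketched with a simple correlation test. This stand-in is much weaker than the Bayesian factorial model the paper actually proposes; everything below is an illustrative assumption.

```python
import numpy as np

# Toy sketch: identify "self" dots as those whose velocity correlates
# with the issued motor commands. Not the paper's generative model.

rng = np.random.default_rng(2)
T, n_dots = 200, 10

commands = rng.normal(size=T)               # 1-D self-generated motor signal
velocities = rng.normal(scale=0.5, size=(T, n_dots))
velocities[:, :4] += commands[:, None]      # dots 0-3 move with the commands

# Correlate each dot's velocity with the command sequence.
corr = np.array([np.corrcoef(commands, velocities[:, i])[0, 1]
                 for i in range(n_dots)])
self_dots = np.flatnonzero(corr > 0.5)
print("dots identified as self-generated motion:", self_dots)  # -> [0 1 2 3]
```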
Towards using prosody to scaffold lexical meaning in robots
2011 IEEE International Conference on Development and Learning (ICDL) Pub Date: 2011-10-10 DOI: 10.1109/DEVLRN.2011.6037328
J. Saunders, H. Lehmann, Yo Sato, Chrystopher L. Nehaniv
{"title":"Towards using prosody to scaffold lexical meaning in robots","authors":"J. Saunders, H. Lehmann, Yo Sato, Chrystopher L. Nehaniv","doi":"10.1109/DEVLRN.2011.6037328","DOIUrl":"https://doi.org/10.1109/DEVLRN.2011.6037328","url":null,"abstract":"We present a case-study analysing the prosodic contours and salient word markers of a small corpus of robot-directed speech where the human participants had been asked to talk to a socially interactive robot as if it were a child. We assess whether such contours and salience characteristics could be used to extract relevant information for the subsequent learning and scaffolding of meaning in robots. The study uses measures of pitch, energy and word duration from the participants speech and exploits Pierrehumbert and Hirschberg's theory of the meaning of intonational contours which may provide information on shared belief between speaker and listener. The results indicate that 1) participants use a high number of contours which provide new information markers to the robot, 2) that prosodic question contours reduce as the interactions proceed and 3) that pitch, energy and duration features can provide strong markers for relevant words and 4) there was little evidence that participants altered their prosodic contours in recognition of shared belief. A description and verification of our software which allows the semi-automatic marking of prosodic phrases is also described.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116095428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
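A minimal sketch of combining pitch, energy and duration into a per-word salience marker, as the reported features suggest. The per-word feature values and the z-score threshold are invented for illustration; a real system would extract them from audio with a pitch tracker and an energy estimator.

```python
import numpy as np

# Sketch: flag salient words by z-scoring pitch, energy and duration
# across the utterance and summing. Feature values are made up.

words = ["look", "at", "the", "red", "ball"]
features = np.array([
    # pitch(Hz) energy  duration(s)
    [220.0, 0.60, 0.30],
    [180.0, 0.20, 0.10],
    [175.0, 0.15, 0.08],
    [260.0, 0.80, 0.35],
    [280.0, 0.90, 0.40],
])

# z-score each feature column, then sum into a combined salience score.
z = (features - features.mean(0)) / features.std(0)
salience = z.sum(axis=1)
salient = [w for w, s in zip(words, salience) if s > 1.0]
print("salient word candidates:", salient)   # -> ['red', 'ball']
```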
Uncertain semantics, representation nuisances, and necessary invariance properties of bootstrapping agents
2011 IEEE International Conference on Development and Learning (ICDL) Pub Date: 2011-10-10 DOI: 10.1109/DEVLRN.2011.6037313
A. Censi, R. Murray
{"title":"Uncertain semantics, representation nuisances, and necessary invariance properties of bootstrapping agents","authors":"A. Censi, R. Murray","doi":"10.1109/DEVLRN.2011.6037313","DOIUrl":"https://doi.org/10.1109/DEVLRN.2011.6037313","url":null,"abstract":"In the problem of bootstrapping, an agent must learn to use an unknown body, in an unknown world, starting from zero information about the world, its sensors, and its actuators. So far, this fascinating problem has not; been given a proper normalization. In this paper, we provide a possible rigorous definition of one of the key aspects of bootstrapping, namely the fact that an agent must be able to use “uninterpreted” observations and commands. We show that this can be formalized by positing the existence of representation nuisances that act on the data, and which must be tolerated by an agent. The classes of nuisances tolerate d in directly encode the assumptions needed about the world, and therefore the agent's ability to solve smaller or larger classes of bootstrapping problem instances. Moreover, we argue that the behavior of an agent that claims optimality must actually be invariant to the representation nuisances, and we discuss several design principles to obtain such invariance.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125139900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
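One way to make "representation nuisance" and "invariance" concrete: in the sketch below the nuisance is an unknown permutation of sensor channels, and the agent's internal summary (the sorted covariance spectrum) is invariant to it. This particular nuisance/invariant pair is our illustrative assumption, not the paper's general formalism.

```python
import numpy as np

# Sketch of a representation nuisance and an invariant statistic.
# The nuisance relabels sensor channels; the sorted eigenvalues of the
# sensor covariance matrix do not change under such relabeling.

rng = np.random.default_rng(3)
obs = rng.normal(size=(500, 8))                       # raw observation stream
obs[:, 1] = obs[:, 0] + 0.1 * rng.normal(size=500)   # two correlated sensors

def agent_summary(y):
    # Sorted covariance spectrum: invariant to channel permutations.
    return np.sort(np.linalg.eigvalsh(np.cov(y.T)))

perm = rng.permutation(8)                             # the nuisance
assert np.allclose(agent_summary(obs), agent_summary(obs[:, perm]))
print("summary is invariant to the channel-permutation nuisance")
```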
Emergence of higher-order transitivity across development: The importance of local task difficulty
2011 IEEE International Conference on Development and Learning (ICDL) Pub Date: 2011-10-10 DOI: 10.1109/DEVLRN.2011.6037374
H. Kloos
{"title":"Emergence of higher-order transitivity across development: The importance of local task difficulty","authors":"H. Kloos","doi":"10.1109/DEVLRN.2011.6037374","DOIUrl":"https://doi.org/10.1109/DEVLRN.2011.6037374","url":null,"abstract":"This study investigates the effect of local task difficulty on children's tendency to combine pieces of information into larger wholes. The particular hypothesis is that the emergence of higher-order Gestalts is guided neither by innate capabilities nor by laborious thought processes. Instead, it is - at least partly - tied to the difficult of the local task, adaptively allowing the mind to reduce cognitive demand. The higher-order Gestalt used here was the transitive congruence among three feature relations. And the local task was to remember two of the feature relations from brief exposures, after having learned the third relation to criterion. The two to-be-learned relations violated transitivity with the third relation, such that a bias toward higher-order transitivity could be determined on the basis of children's performance mistakes. Importantly, the two to-be-learned relations either matched in direction (posing a low cognitive demand), or they had opposite directions (posting higher cognitive demand). Results show that 5- to 9-year-olds were affected by higher-order transitivity only in the locally difficult task, not when the local task was easy.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127260683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
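For readers unfamiliar with "transitive congruence", the toy check below tests whether three pairwise feature relations can be embedded in a single linear order (congruent) or instead form a cycle (a transitivity violation). The encoding of a relation as an ordered pair is an illustrative assumption.

```python
import itertools

# Three relations are transitively congruent if some linear order of
# the objects contains all of them; a cycle cannot be embedded.
# ("A", "B") encodes "A exceeds B" (illustrative convention).

def congruent(*relations):
    objects = {x for pair in relations for x in pair}
    for order in itertools.permutations(objects):
        rank = {x: i for i, x in enumerate(order)}
        if all(rank[a] < rank[b] for a, b in relations):
            return True
    return False

print(congruent(("A", "B"), ("B", "C"), ("A", "C")))  # True: chain A > B > C
print(congruent(("A", "B"), ("B", "C"), ("C", "A")))  # False: cycle violates transitivity
```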
Development of joint attention and social referencing
2011 IEEE International Conference on Development and Learning (ICDL) Pub Date: 2011-10-10 DOI: 10.1109/DEVLRN.2011.6037317
S. Boucenna, P. Gaussier, L. Hafemeister
{"title":"Development of joint attention and social referencing","authors":"S. Boucenna, P. Gaussier, L. Hafemeister","doi":"10.1109/DEVLRN.2011.6037317","DOIUrl":"https://doi.org/10.1109/DEVLRN.2011.6037317","url":null,"abstract":"In this work, we are interested in understanding how emotional interactions with a social partner can bootstrap increasingly complex behaviors such as social referencing. Our idea is that social referencing, facial expression recognition and the joint attention can emerge from a simple sensori-motor architecture. Without knowing that the other is an agent, we show our robot is able to learn some complex tasks if the human partner has a low level emotional resonance with the robot head. Hence we advocate the idea that social referencing can be bootstrapped from a simple sensori-motor system not dedicated to social interactions.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114169290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
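A minimal sketch of the social-referencing idea: objects inherit the value of the facial expression they co-occur with. The delta-rule learner and the assumption that expression valence is already grounded (by emotional resonance) are ours, not the paper's neural architecture.

```python
import numpy as np

# Sketch: objects seen together with a partner's facial expression
# acquire that expression's valence via a simple delta rule.

n_objects = 3
object_value = np.zeros(n_objects)
lr = 0.5

# Expression valence assumed already grounded by emotional resonance:
# smile -> +1, frown -> -1. Episodes pair an object with an expression.
episodes = [(0, +1), (0, +1), (1, -1), (2, +1), (1, -1)]

for obj, expression_value in episodes:
    # The object's value moves toward the co-occurring expression value.
    object_value[obj] += lr * (expression_value - object_value[obj])

print("learned object valences:", np.round(object_value, 2))
# Objects 0 and 2 become positive (approach); object 1 negative (avoid).
```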
On-line learning and planning in a pick-and-place task demonstrated through body manipulation
2011 IEEE International Conference on Development and Learning (ICDL) Pub Date: 2011-10-10 DOI: 10.1109/DEVLRN.2011.6037336
A. D. Rengervé, Julien Hirel, P. Andry, M. Quoy, P. Gaussier
{"title":"On-line learning and planning in a pick-and-place task demonstrated through body manipulation","authors":"A. D. Rengervé, Julien Hirel, P. Andry, M. Quoy, P. Gaussier","doi":"10.1109/DEVLRN.2011.6037336","DOIUrl":"https://doi.org/10.1109/DEVLRN.2011.6037336","url":null,"abstract":"When a robot is brought into a new environment, it has a very limited knowledge of what surrounds it and what it can do. One way to build up that knowledge is through exploration but it is a slow process. Programming by demonstration is an efficient way to learn new things from interaction. A robot can imitate gestures it was shown through passive manipulation. Depending on the representation of the task, the robot may also be able to plan its actions and even adapt its representation when further interactions change its knowledge about the task to be done. In this paper we present a bio-inspired neural network used in a robot to learn arm gestures demonstrated through passive manipulation. It also allows the robot to plan arm movements according to activated goals. The model is applied to learning a pick-and-place task. The robot learns how to pick up objects at a specific location and drop them in two different boxes depending on their color. As our system is continuously learning, the behavior of the robot can always be adapted by the human interacting with it. This ability is demonstrated by teaching the robot to switch the goals for both types of objects.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127885804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
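A sketch of planning over transitions learned from demonstration, consistent with the behavior described above: passive manipulation records (state, action, next state) triples, and activating a goal triggers a search from the current state. The discrete state graph and the breadth-first planner are simplifying assumptions, not the paper's neural implementation.

```python
from collections import deque

# Sketch: learn a transition graph from demonstrations, then plan to
# an activated goal by breadth-first search over that graph.

transitions = set()

def demonstrate(state, action, next_state):
    transitions.add((state, action, next_state))

# A human guides the arm once through each branch of the task.
demonstrate("home", "reach", "over_object")
demonstrate("over_object", "grasp", "holding")
demonstrate("holding", "move_red", "over_red_box")
demonstrate("holding", "move_green", "over_green_box")
demonstrate("over_red_box", "release", "home")
demonstrate("over_green_box", "release", "home")

def plan(start, goal):
    # Breadth-first search over the learned transition graph.
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, actions = queue.popleft()
        if state == goal:
            return actions
        for s, a, s2 in transitions:
            if s == state and s2 not in seen:
                seen.add(s2)
                queue.append((s2, actions + [a]))
    return None

print(plan("home", "over_red_box"))   # -> ['reach', 'grasp', 'move_red']
```

Switching the goal for an object type, as in the paper's final demonstration, amounts here to re-recording the relevant transitions, after which the planner immediately produces the new route.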
Learning of audiovisual integration
2011 IEEE International Conference on Development and Learning (ICDL) Pub Date: 2011-10-10 DOI: 10.1109/DEVLRN.2011.6037323
Rujiao Yan, Tobias Rodemann, B. Wrede
{"title":"Learning of audiovisual integration","authors":"Rujiao Yan, Tobias Rodemann, B. Wrede","doi":"10.1109/DEVLRN.2011.6037323","DOIUrl":"https://doi.org/10.1109/DEVLRN.2011.6037323","url":null,"abstract":"We present a system for learning audiovisual integration based on temporal and spatial coincidence. The current sound is sometimes related to a visual signal that has not yet been seen, we consider this situation as well. Our learning algorithm is tested in online adaptation of audio-motor maps. Since audio-motor maps are not reliable at the beginning of the experiment, learning is bootstrapped using temporal coincidence when there is only one auditory and one visual stimulus. In the course of time, the system can automatically decide to use both spatial and temporal coincidence depending on the quality of maps and the number of visual sources. We can show that this audio-visual integration can work when more than one visual source appears. The integration performance does not decrease when the related visual source has not yet been spotted. The experiment is executed on a humanoid robot head.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129466195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
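A sketch of bootstrapping an audio-motor map from coincidence: when a sound and exactly one visual source coincide in time, the visual direction serves as the teaching signal for the map from the auditory cue, mirroring the bootstrapping condition described above. The linear map and the synthetic cue model are illustrative assumptions.

```python
import numpy as np

# Sketch: adapt a linear audio-motor map online, learning only when
# temporal coincidence is unambiguous (exactly one visual source).

rng = np.random.default_rng(4)
w = 0.0                                  # map: angle_estimate = w * cue
lr = 0.1

def auditory_cue(angle):
    # Synthetic stand-in for a binaural cue, roughly linear in angle.
    return 0.02 * angle + 0.002 * rng.normal()

for _ in range(300):
    angle = rng.uniform(-60, 60)         # true direction of the event (deg)
    cue = auditory_cue(angle)
    visual_sources = [angle]             # one co-occurring visual source
    if len(visual_sources) == 1:         # coincidence is unambiguous: learn
        target = visual_sources[0]
        w += lr * (target - w * cue) * cue

test_angle = 30.0
print("estimate for a 30 deg source:", round(w * auditory_cue(test_angle), 1))
# ~30.0 once the map has converged
```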