Latest articles from the 2008 7th IEEE International Conference on Development and Learning

Sensorimotor abstraction selection for efficient, autonomous robot skill acquisition
G. Konidaris, A. Barto
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640821
Abstract: To achieve truly autonomous robot skill acquisition, a robot can use neither a single large general state space (because learning is not feasible) nor a small problem-specific state space (because it is not general). We propose that a robot should instead have a set of sensorimotor abstractions that can be considered small candidate state spaces, and select one that is appropriate for learning a skill when it decides to do so. We introduce an incremental algorithm that selects a state space in which to learn a skill from among a set of potential spaces, given a successful sample trajectory. The algorithm returns a policy fitting that trajectory in the new state space, so that learning does not have to begin from scratch. We demonstrate that the algorithm selects an appropriate space for a sequence of demonstration skills on a physically realistic simulated mobile robot, and that the resulting initial policies closely match the sample trajectory.
Citations: 16
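The selection idea in this abstract can be illustrated with a minimal, hypothetical example: score each candidate abstraction by how well a simple policy fit to the demonstrated trajectory in that space reproduces the demonstrated actions, and keep the best-scoring space. This is not the authors' incremental algorithm; the sensor layout, candidate subsets, and the use of a k-nearest-neighbor policy are illustrative assumptions.

```python
# Illustrative sketch (not the paper's algorithm): rank candidate sensorimotor
# abstractions by how well a simple policy fit in each abstracted space
# reproduces a successful sample trajectory, then keep the best one.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# A demonstrated trajectory: full sensor readings and the actions taken (toy data).
full_states = rng.normal(size=(200, 12))                            # 12 raw sensor channels
actions = (full_states[:, 3] + full_states[:, 7] > 0).astype(int)   # toy demonstrated policy

# Candidate abstractions = small subsets of the sensor channels (hypothetical names).
candidate_abstractions = {
    "range_sensors": [0, 1, 2],
    "goal_sensors": [3, 7],
    "all_sensors": list(range(12)),
}

def abstraction_score(channels):
    """Cross-validated accuracy of a simple policy fit in the abstracted space."""
    X = full_states[:, channels]
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X, actions, cv=5).mean()

scores = {name: abstraction_score(ch) for name, ch in candidate_abstractions.items()}
best = max(scores, key=scores.get)
print(scores, "-> selected abstraction:", best)

# Refitting the policy in the selected space gives an initial policy,
# so skill learning need not start from scratch.
best_policy = KNeighborsClassifier(n_neighbors=5).fit(
    full_states[:, candidate_abstractions[best]], actions)
```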
The role of observational learning in perceiving object properties in infants (March 2008)
J. Fagard, R. Esseily, J. Nadel
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640829
Abstract: Infants become skilful at manipulating objects around the end of the first year of life. The questions asked here are how infants learn about object properties and what role observation plays in learning to manipulate objects. To answer these questions we designed an experiment comparing the effect of practice versus observation on learning new motor skills. We tested 84 infants aged 8, 10, 12, 15 and 18 months on two different tasks: a simple grasping task and a more complex retrieval task. We compared two groups of infants: an observation group, in which the experimenter first presented the infants with a demonstration of the targeted action and then gave the infant the object to manipulate; and a self-exploratory group, in which infants were given a spontaneous trial before the demonstration. The results show that for the simple grasping task, only the youngest infants benefit from both practice and observation, because of their poor performance on the very first, spontaneous trial. For the retrieval task, infants learned only by observation, and not before 15 months of age.
Citations: 2
Detection and categorization of facial image through the interaction with caregiver
M. Ogino, A. Watanabe, M. Asada
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640837
Abstract: This paper models the process of applied behavior analysis (ABA) therapy for eye contact in autistic children as the learning of categorization and preference through interaction with a caregiver. The proposed model consists of a learning module and a visual attention module. The learning module learns the higher-order local autocorrelation (HLAC) visual features that are important for discriminating the visual image before and after a reward is given. The visual attention module determines the attention point through a bottom-up process based on a saliency map and a top-down process based on the learned visual features. An experiment with a virtual robot shows that, through interaction with a caregiver, the robot successfully learns visual features corresponding first to the face and then to the eyes. After learning, the robot attends to the caregiver's face and eyes, as autistic children do in actual ABA therapy.
Citations: 10
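As a rough illustration of the two attention pathways the abstract describes, the following sketch combines a bottom-up center-surround saliency map with a top-down map biased toward a previously rewarded region, and attends to the maximum of the combination. It is an assumed simplification, not the paper's model; HLAC feature learning is omitted and the reward template is a stand-in.

```python
# Illustrative sketch (assumed simplification, not the paper's model): choose an
# attention point by combining a bottom-up center-surround saliency map with a
# top-down map that highlights previously rewarded visual regions.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
image = rng.random((64, 64))            # stand-in for a grayscale camera frame
reward_template = np.zeros((64, 64))
reward_template[20:30, 40:50] = 1.0     # region whose appearance preceded reward

# Bottom-up saliency: center-surround difference at two Gaussian scales.
center = gaussian_filter(image, sigma=1)
surround = gaussian_filter(image, sigma=8)
bottom_up = np.abs(center - surround)
bottom_up /= bottom_up.max()

# Top-down bias: weight locations resembling the learned (rewarded) region.
top_down = gaussian_filter(reward_template, sigma=3)

combined = 0.5 * bottom_up + 0.5 * top_down
attention_point = np.unravel_index(np.argmax(combined), combined.shape)
print("attend to pixel", attention_point)
```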
Building a more effective teaching robot using apprenticeship learning
P. Ruvolo, Jacob Whitehill, Marjo Vimes, J. Movellan
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640831
Abstract: What defines good teaching? While attributes such as timing, responsiveness to social cues, and pacing of material clearly play a role, it is difficult to create a comprehensive specification of what it means to be a good teacher. On the other hand, it is relatively easy to obtain examples of expert teaching behavior by observing a real teacher. With this inspiration as our guide, we investigated apprenticeship learning methods [1] that use data recorded from expert teachers as a means of improving the teaching abilities of RUBI, a social robot immersed in a classroom of 18- to 24-month-old children. While this approach has achieved considerable success in mechanical control, such as automated helicopter flight [2], until now there has been little work on applying it to the field of social robotics. This paper explores two particular approaches to apprenticeship learning and analyzes the models of teaching that each approach learns from the data of the human teacher. Empirical results indicate that the apprenticeship learning paradigm, though still nascent in its use in the social robotics field, holds promise, and that our proposed methods can already extract meaningful teaching models from demonstrations of a human expert.
Citations: 13
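One common apprenticeship-learning baseline is behavioral cloning: fit a supervised policy to state-action pairs recorded from the expert. The sketch below illustrates that idea only; the state features and action set are hypothetical and are not RUBI's actual interface or either of the paper's two methods.

```python
# Illustrative sketch: a behavioral-cloning baseline for apprenticeship learning,
# fitting a policy to state-action pairs recorded from an expert teacher.
# The feature names and action set are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Each recorded state: [child_gazing, seconds_since_prompt, correct_responses]
states = np.column_stack([
    rng.integers(0, 2, 500),
    rng.uniform(0, 10, 500),
    rng.integers(0, 5, 500),
]).astype(float)

# Expert teacher's action: 0 = wait, 1 = prompt again, 2 = advance material (toy rule).
actions = np.where(states[:, 1] > 6, 1, np.where(states[:, 2] >= 3, 2, 0))

# Clone the expert by supervised learning on the recorded pairs.
policy = LogisticRegression(max_iter=1000).fit(states, actions)
print("cloned policy's action for a new situation:",
      policy.predict([[1.0, 7.5, 1.0]])[0])
```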
Acquiring linguistic argument structure from multimodal input using attentive focus
G. Satish, A. Mukerjee
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640803
Abstract: This work is premised on three assumptions: that the semantics of certain actions may be learned prior to language, that objects in attentive focus are likely to indicate the arguments participating in that action, and that knowing such arguments helps align linguistic attention on the relevant predicate (verb). Using a computational model of dynamic attention, we present an algorithm that clusters visual events into action classes in an unsupervised manner using the Merge Neural Gas algorithm. With few clusters, the model correlates with coarse concepts such as come-closer, but at a finer granularity it reveals hierarchical substructure such as come-closer-one-object-static and come-closer-both-moving. That the argument ordering is non-commutative is discovered for actions such as chase or come-closer-one-object-static. Knowing the arguments, and given that noun-referent mappings are easily learned, language learning can now be constrained by considering only linguistic expressions and actions that refer to the objects in perceptual focus. We learn action schemas for linguistic units like "moving towards" or "chase", and validate our results by producing output commentaries for 3D video.
Citations: 12
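The clustering step can be illustrated with a plain Neural Gas, a simplification of the Merge Neural Gas the paper uses (Merge Neural Gas additionally maintains a temporal context vector per prototype, omitted here). The toy event features and class structure below are assumptions for demonstration only.

```python
# Illustrative sketch: plain Neural Gas clustering of visual-event feature vectors.
# The paper uses Merge Neural Gas, which adds a temporal context vector per
# prototype; that extension is omitted in this simplification.
import numpy as np

rng = np.random.default_rng(3)

# Toy event features, e.g. [relative-distance change, speed of object A, speed of B]
events = np.vstack([
    rng.normal([-1.0, 0.8, 0.0], 0.1, size=(100, 3)),   # come-closer, one object static
    rng.normal([-1.0, 0.8, 0.8], 0.1, size=(100, 3)),   # come-closer, both moving
    rng.normal([+1.0, 0.8, 0.8], 0.1, size=(100, 3)),   # move-apart
])
rng.shuffle(events)

n_prototypes, epochs = 4, 40
prototypes = rng.normal(size=(n_prototypes, 3))

for epoch in range(epochs):
    eps = 0.5 * (0.01 / 0.5) ** (epoch / epochs)        # learning-rate decay
    lam = 2.0 * (0.1 / 2.0) ** (epoch / epochs)         # neighborhood-range decay
    for x in events:
        # Rank prototypes by distance to the sample; closer ranks adapt more.
        ranks = np.argsort(np.argsort(np.linalg.norm(prototypes - x, axis=1)))
        prototypes += (eps * np.exp(-ranks / lam))[:, None] * (x - prototypes)

print("learned action prototypes:\n", np.round(prototypes, 2))
```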
Modeling the development of overselectivity in autism
T. Kriete, D. Noelle
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640809
Abstract: People with autism consistently demonstrate a lack of sensitivity to the full range of important aspects of everyday situations. Often, an overly restricted subset of the information available in a given situation gains control over their behavior, which can result in problems generalizing learned behaviors to novel situations. This phenomenon has been called overselectivity, and many behavioral intervention techniques seek to mitigate overselectivity effects in this population. In this paper, we offer an account of overselectivity as arising from an inability to flexibly adjust the attentional influences of the prefrontal cortex on behavior. We posit that dysfunctional dopamine interactions with the prefrontal cortex result in overly perseverative attention in people with autism. Limiting attention to only a few of the features of a situation hinders the learning of associations between the full range of relevant environmental properties and appropriate behavior; thus, a restricted subset of features gains control over responding. A simple neurocomputational model of the attentional effects of prefrontal cortex on learning is presented, demonstrating how weak dopamine modulation of frontal areas can lead to overselectivity.
Citations: 4
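A heavily simplified stand-in for the mechanism described here: if attention perseverates on one stimulus feature, delta-rule learning never develops weights for the other, equally predictive feature, so responding collapses when the attended feature is absent. This is not the authors' prefrontal/dopamine model, only the overselectivity pattern in miniature.

```python
# Illustrative sketch (much simpler than the paper's neural model): perseverative
# attention gates which features enter associative learning, so weights for the
# unattended, equally predictive feature never develop -- overselectivity.
import numpy as np

def train(attention, trials=200, lr=0.1):
    w = np.zeros(2)
    for _ in range(trials):
        stimulus = np.array([1.0, 1.0])   # both features present, reward = 1
        x = stimulus * attention           # attention gates the input
        w += lr * (1.0 - w @ x) * x        # delta-rule update toward the reward
    return w

flexible = train(attention=np.array([1.0, 1.0]))   # attends to both features
persever = train(attention=np.array([1.0, 0.0]))   # stuck on feature 0

probe = np.array([0.0, 1.0])                        # only feature 1 present at test
print("flexible attention, probe response:", round(flexible @ probe, 2))
print("perseverative attention, probe response:", round(persever @ probe, 2))
```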
Visual attention by saliency leads cross-modal body representation
M. Hikita, S. Fuke, M. Ogino, T. Minato, M. Asada
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640822
Abstract: One of the most fundamental issues for physical agents (humans, primates, and robots) performing various kinds of tasks is body representation. Neurophysiological evidence from tool use by monkeys shows that this representation can be dynamically reconstructed by spatio-temporal integration of different sensor modalities, making it adaptive to environmental changes. To construct such a representation, however, the agent must resolve which pieces of information among its various sensory data should be associated with each other. This paper presents a method that constructs a cross-modal body representation from vision, touch, and proprioception. Tactile sensation, when the robot touches something, triggers the construction of the visual receptive field for the body part that is found by visual attention based on a saliency map and consequently regarded as the end effector. Simultaneously, proprioceptive information is associated with this visual receptive field to achieve the cross-modal body representation. The proposed model is applied to a real robot, and results comparable to the activities of parietal neurons observed in monkeys are shown.
Citations: 32
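A minimal sketch of the triggering idea, under strong assumptions: whenever a touch event fires, the current joint configuration is associated with the image location that saliency-based attention marks as the end effector, yielding a look-up table from proprioception to a visual receptive-field center. The toy two-link kinematics and the discretization are illustrative, not the paper's implementation.

```python
# Illustrative sketch (assumed simplification): touch events trigger associating
# the current proprioceptive state (joint angles) with the salient image location
# of the end effector, building a table that stands in for a body representation.
import numpy as np

rng = np.random.default_rng(4)
body_map = {}   # discretized joint configuration -> visual receptive-field center

def end_effector_pixel(joints):
    """Toy 2-link arm: where the hand appears in image coordinates."""
    x = 32 + 10 * np.cos(joints[0]) + 8 * np.cos(joints[0] + joints[1])
    y = 32 + 10 * np.sin(joints[0]) + 8 * np.sin(joints[0] + joints[1])
    return int(round(x)), int(round(y))

for _ in range(500):
    joints = rng.uniform(-np.pi / 2, np.pi / 2, size=2)
    touched = rng.random() < 0.2                      # tactile sensation fires occasionally
    if touched:
        salient_point = end_effector_pixel(joints)    # saliency picks out the moving hand
        key = tuple(np.round(joints, 1))              # discretized proprioception
        body_map[key] = salient_point

print("learned associations:", len(body_map))
print("sample entry:", next(iter(body_map.items())))
```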
Input affects uptake: How early language experience influences processing efficiency and vocabulary learning
A. Fernald, V. Marchman, Nereyda Hurtado
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640802
Abstract: Two studies explore how early vocabulary learning is influenced both by maternal speech to the child and by the child's developing skill in real-time comprehension. Study 1 shows that the amount and quality of mothers' speech predict language growth in Spanish-learning children, providing the first evidence that language input shapes speech processing efficiency as well as lexical development. Study 2 demonstrates that early efficiency in speech processing is beneficial for vocabulary growth, showing how fluency in online comprehension facilitates learning.
Citations: 22
Acquisition of lexical semantics through unsupervised discovery of associations between perceptual symbols
T. Oezer
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640799
Abstract: This paper introduces an unsupervised method for acquiring the lexical semantics of action verbs. The eventual goal of the presented method is to allow a robot to acquire language under realistic conditions. The method acquires lexical semantics by forming association sets that contain general perceptual symbols associated with a certain concept as well as perceptual symbols of utterances of the concept's name. The lexical semantics is learned with the help of a narrator who comments on what the robot sees, and the technique works even if the narrator only occasionally comments on what the robot sees. The paper presents experimental results showing that the method can acquire the lexical semantics of action verbs while the robot watches a human performing actions and hears a narration that only occasionally describes what the robot is currently seeing. A comparison with supervised learning algorithms shows that the method discussed in this paper outperforms other techniques.
Citations: 2
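The association-set idea can be caricatured as cross-situational co-occurrence counting between perceptual action symbols and words in the narration; with enough episodes, stable pairings survive occasional irrelevant commentary. The stop-word filter and tiny vocabulary below are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch: unsupervised co-occurrence counting between perceptual
# action symbols and words from an only occasionally relevant narration.
from collections import defaultdict

STOP = {"the", "he", "she", "it", "now", "a", "how", "watch", "again", "today"}

episodes = [
    ({"PUSH"}, "now he pushes the box"),
    ({"PUSH"}, "lovely weather today"),       # narration unrelated to the scene
    ({"LIFT"}, "she lifts the cup"),
    ({"PUSH"}, "he pushes it again"),
    ({"LIFT"}, "watch how she lifts it"),
]

counts = defaultdict(lambda: defaultdict(int))
for percepts, utterance in episodes:
    for symbol in percepts:
        for word in utterance.split():
            if word not in STOP:
                counts[symbol][word] += 1

for symbol, words in counts.items():
    best = max(words, key=words.get)
    print(symbol, "->", best, f"(co-occurred {words[best]} times)")
```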
Ockham's razor as inductive bias in preschoolers' causal explanations
E. B. Bonawitz, Isabel Y. Chang, Catherine Clark, Tania Lombrozo
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640797
Abstract: A growing literature suggests that generating and evaluating explanations is a key mechanism for learning and development, but little is known about how children evaluate explanations, especially in the absence of probability information or robust prior beliefs. Previous findings demonstrate that adults balance several explanatory virtues in evaluating competing explanations, including simplicity and probability; specifically, adults treat simplicity as a probabilistic cue that trades off with frequency information. However, no work has investigated whether children are similarly sensitive to simplicity and probability. We report an experiment investigating how preschoolers evaluate causal explanations, and in particular whether they employ a principle of parsimony like Ockham's razor as an inductive constraint. Results suggest that even preschoolers are sensitive to the simplicity of explanations, and require disproportionate probabilistic evidence before a complex explanation will be favored over a simpler alternative.
Citations: 5
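A hypothetical numerical illustration of the trade-off being probed (not the authors' analysis): treating simplicity as a probabilistic cue, a conjunction of two causes is less probable than a single cause at equal base rates, and only disproportionate frequency evidence for the complex explanation overturns the simpler one.

```python
# Illustrative arithmetic (hypothetical numbers): posterior probability of the
# simpler, one-cause explanation versus a two-cause explanation of the same effect.
def posterior_simple(prior_simple, p_single_cause, p_each_of_two_causes):
    like_simple = p_single_cause                    # one cause suffices
    like_complex = p_each_of_two_causes ** 2        # both rarer causes must co-occur
    joint_simple = prior_simple * like_simple
    joint_complex = (1 - prior_simple) * like_complex
    return joint_simple / (joint_simple + joint_complex)

# With equal base rates, the simpler explanation dominates (~0.77).
print(round(posterior_simple(0.5, p_single_cause=0.3, p_each_of_two_causes=0.3), 2))
# Only disproportionate frequency evidence favoring the two causes overturns it (~0.12).
print(round(posterior_simple(0.5, p_single_cause=0.05, p_each_of_two_causes=0.6), 2))
```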