IEEE Transactions on Autonomous Mental Development: Latest Publications

Object Learning Through Active Exploration
IEEE Transactions on Autonomous Mental Development Pub Date: 2014-03-01 DOI: 10.1109/TAMD.2013.2280614
S. Ivaldi, S. Nguyen, Natalia Lyubova, Alain Droniou, V. Padois, David Filliat, Pierre-Yves Oudeyer, Olivier Sigaud
{"title":"Object Learning Through Active Exploration","authors":"S. Ivaldi, S. Nguyen, Natalia Lyubova, Alain Droniou, V. Padois, David Filliat, Pierre-Yves Oudeyer, Olivier Sigaud","doi":"10.1109/TAMD.2013.2280614","DOIUrl":"https://doi.org/10.1109/TAMD.2013.2280614","url":null,"abstract":"This paper addresses the problem of active object learning by a humanoid child-like robot, using a developmental approach. We propose a cognitive architecture where the visual representation of the objects is built incrementally through active exploration. We present the design guidelines of the cognitive architecture, its main functionalities, and we outline the cognitive process of the robot by showing how it learns to recognize objects in a human-robot interaction scenario inspired by social parenting. The robot actively explores the objects through manipulation, driven by a combination of social guidance and intrinsic motivation. Besides the robotics and engineering achievements, our experiments replicate some observations about the coupling of vision and manipulation in infants, particularly how they focus on the most informative objects. We discuss the further benefits of our architecture, particularly how it can be improved and used to ground concepts.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"26 1","pages":"56-72"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2013.2280614","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62762391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 69
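A minimal sketch of the kind of exploration choice described in the abstract above: picking the next object to manipulate by combining an intrinsic-motivation signal (here, learning progress on a per-object recognition error) with a bonus for socially cued objects. This is not the authors' architecture; the class, the window-based progress measure, and the social bonus are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): choose which object to explore next
# by combining an intrinsic-motivation score (learning progress of a per-object
# recognition model) with a social-guidance bonus for objects the caregiver has
# recently cued. All names and constants are hypothetical.

from collections import defaultdict, deque


class ExplorationChooser:
    def __init__(self, window=5, social_bonus=0.5):
        # Recent recognition errors per object, kept in a sliding window.
        self.errors = defaultdict(lambda: deque(maxlen=2 * window))
        self.social_bonus = social_bonus

    def record_error(self, obj_id, error):
        """Store the recognition error observed after manipulating obj_id."""
        self.errors[obj_id].append(error)

    def learning_progress(self, obj_id):
        """Decrease in error between the older and newer half of the window."""
        errs = list(self.errors[obj_id])
        if len(errs) < 2:
            return 1.0  # unexplored objects get an optimistic score
        half = len(errs) // 2
        older = sum(errs[:half]) / max(half, 1)
        newer = sum(errs[half:]) / max(len(errs) - half, 1)
        return max(older - newer, 0.0)

    def choose(self, candidate_objects, socially_cued=()):
        """Pick the object with the highest combined score."""
        def score(obj_id):
            bonus = self.social_bonus if obj_id in socially_cued else 0.0
            return self.learning_progress(obj_id) + bonus
        return max(candidate_objects, key=score)


chooser = ExplorationChooser()
chooser.record_error("ball", 0.9)
chooser.record_error("ball", 0.4)  # recognition of the ball is improving
print(chooser.choose(["ball", "cup"], socially_cued=["cup"]))
```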
Development of First Social Referencing Skills: Emotional Interaction as a Way to Regulate Robot Behavior
IEEE Transactions on Autonomous Mental Development Pub Date: 2014-03-01 DOI: 10.1109/TAMD.2013.2284065
S. Boucenna, P. Gaussier, L. Hafemeister
{"title":"Development of First Social Referencing Skills: Emotional Interaction as a Way to Regulate Robot Behavior","authors":"S. Boucenna, P. Gaussier, L. Hafemeister","doi":"10.1109/TAMD.2013.2284065","DOIUrl":"https://doi.org/10.1109/TAMD.2013.2284065","url":null,"abstract":"In this paper, we study how emotional interactions with a social partner can bootstrap increasingly complex behaviors such as social referencing. Our idea is that social referencing as well as facial expression recognition can emerge from a simple sensory-motor system involving emotional stimuli. Without knowing that the other is an agent, the robot is able to learn some complex tasks if the human partner has some “empathy” or at least “resonate” with the robot head (low level emotional resonance). Hence, we advocate the idea that social referencing can be bootstrapped from a simple sensory-motor system not dedicated to social interactions.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"6 1","pages":"42-55"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2013.2284065","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62762202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
Erratum to "Modeling cross-modal interactions in early word learning" [Dec 13 288-297] 对“早期单词学习中跨模态交互建模”的勘误[Dec 13 288-297]
IEEE Transactions on Autonomous Mental Development Pub Date : 2014-03-01 DOI: 10.1109/TAMD.2014.2310061
Nadja Althaus, D. Mareschal
{"title":"Erratum to \"Modeling cross-modal interactions in early word learning\" [Dec 13 288-297]","authors":"Nadja Althaus, D. Mareschal","doi":"10.1109/TAMD.2014.2310061","DOIUrl":"https://doi.org/10.1109/TAMD.2014.2310061","url":null,"abstract":"In the above paper (ibid., vol. 5, no. 4, pp. 288-297, Dec. 2013), Fig. 4 was mistakenly misrepresented. The current correct Fig. 4 is presented here.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"6 1","pages":"73-73"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2014.2310061","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62762448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LIDA: A Systems-level Architecture for Cognition, Emotion, and Learning
IEEE Transactions on Autonomous Mental Development Pub Date: 2014-03-01 DOI: 10.1109/TAMD.2013.2277589
S. Franklin, Tamas Madl, S. D’Mello, Javier Snaider
{"title":"LIDA: A Systems-level Architecture for Cognition, Emotion, and Learning","authors":"S. Franklin, Tamas Madl, S. D’Mello, Javier Snaider","doi":"10.1109/TAMD.2013.2277589","DOIUrl":"https://doi.org/10.1109/TAMD.2013.2277589","url":null,"abstract":"We describe a cognitive architecture learning intelligent distribution agent (LIDA) that affords attention, action selection and human-like learning intended for use in controlling cognitive agents that replicate human experiments as well as performing real-world tasks. LIDA combines sophisticated action selection, motivation via emotions, a centrally important attention mechanism, and multimodal instructionalist and selectionist learning. Empirically grounded in cognitive science and cognitive neuroscience, the LIDA architecture employs a variety of modules and processes, each with its own effective representations and algorithms. LIDA has much to say about motivation, emotion, attention, and autonomous learning in cognitive agents. In this paper, we summarize the LIDA model together with its resulting agent architecture, describe its computational implementation, and discuss results of simulations that replicate known experimental data. We also discuss some of LIDA's conceptual modules, propose nonlinear dynamics as a bridge between LIDA's modules and processes and the underlying neuroscience, and point out some of the differences between LIDA and other cognitive architectures. Finally, we discuss how LIDA addresses some of the open issues in cognitive architecture research.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"6 1","pages":"19-41"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2013.2277589","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62762073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 184
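A schematic sketch, not LIDA's implementation or API, of the perceive/attend/act cycle that systems-level architectures of this kind are organized around: percepts compete for attention, the winning content is broadcast, and procedural memory maps the broadcast onto an action. All module internals, names, and data structures below are hypothetical placeholders.

```python
# Schematic perceive -> attend -> act cycle, loosely in the spirit of
# systems-level cognitive architectures. Module internals are stubs.

import random


def perceive(sensors):
    # Build simple "percepts" with a salience value from raw sensor readings.
    return [{"content": k, "salience": abs(v)} for k, v in sensors.items()]


def attend(percepts):
    # Attention: broadcast only the most salient percept.
    return max(percepts, key=lambda p: p["salience"]) if percepts else None


def select_action(broadcast, procedural_memory):
    # Action selection: pick the scheme whose trigger matches the broadcast.
    if broadcast is None:
        return "idle"
    return procedural_memory.get(broadcast["content"], "explore")


procedural_memory = {"obstacle": "turn_away", "face": "greet"}

for _ in range(3):  # a few cognitive cycles
    sensors = {"obstacle": random.uniform(0, 1), "face": random.uniform(0, 1)}
    broadcast = attend(perceive(sensors))
    action = select_action(broadcast, procedural_memory)
    print(broadcast["content"], "->", action)
```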
Editorial TAMD Update
IEEE Transactions on Autonomous Mental Development Pub Date: 2014-01-01 DOI: 10.1109/TAMD.2014.2309431
Zhengyou Zhang
{"title":"Editorial TAMD Update","authors":"Zhengyou Zhang","doi":"10.1109/TAMD.2014.2309431","DOIUrl":"https://doi.org/10.1109/TAMD.2014.2309431","url":null,"abstract":"","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"1 1","pages":"1-2"},"PeriodicalIF":0.0,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75562194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Introduction of New Associate Editors
IEEE Transactions on Autonomous Mental Development Pub Date: 2014-01-01 DOI: 10.1109/TAMD.2014.2309443
Zhengyou Zhang
{"title":"Introduction of New Associate Editors","authors":"Zhengyou Zhang","doi":"10.1109/TAMD.2014.2309443","DOIUrl":"https://doi.org/10.1109/TAMD.2014.2309443","url":null,"abstract":"","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"21 1","pages":"3-4"},"PeriodicalIF":0.0,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86425519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Modeling Cross-Modal Interactions in Early Word Learning
IEEE Transactions on Autonomous Mental Development Pub Date: 2013-12-01 DOI: 10.1109/TAMD.2013.2264858
Nadja Althaus, D. Mareschal
{"title":"Modeling Cross-Modal Interactions in Early Word Learning","authors":"Nadja Althaus, D. Mareschal","doi":"10.1109/TAMD.2013.2264858","DOIUrl":"https://doi.org/10.1109/TAMD.2013.2264858","url":null,"abstract":"Infancy research demonstrating a facilitation of visual category formation in the presence of verbal labels suggests that infants' object categories and words develop interactively. This contrasts with the notion that words are simply mapped “onto” previously existing categories. To investigate the computational foundations of a system in which word and object categories develop simultaneously and in an interactive fashion, we present a model of word learning based on interacting self-organizing maps that represent the auditory and visual modalities, respectively. While other models of lexical development have employed similar dual-map architectures, our model uses active Hebbian connections to propagate activation between the visual and auditory maps during learning. Our results show that categorical perception emerges from these early audio-visual interactions in both domains. We argue that the learning mechanism introduced in our model could play a role in the facilitation of infants' categorization through verbal labeling.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"5 1","pages":"288-297"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2013.2264858","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62761686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
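A minimal sketch, under simplifying assumptions, of the dual-map idea described above: one self-organizing map per modality, with a Hebbian connection matrix updated from co-activation and used to propagate auditory activity into the visual map during learning. Map sizes, learning rates, the Gaussian neighborhood, and the normalization are illustrative choices, not the authors' parameters.

```python
# Two small self-organizing maps (auditory and visual) coupled by Hebbian
# cross-connections. Not the authors' implementation; a structural sketch only.

import numpy as np

rng = np.random.default_rng(0)
N_AUD, N_VIS, D_AUD, D_VIS = 16, 16, 8, 12    # units per map, input dimensions

W_aud = rng.random((N_AUD, D_AUD))            # auditory SOM prototypes
W_vis = rng.random((N_VIS, D_VIS))            # visual SOM prototypes
H = np.zeros((N_AUD, N_VIS))                  # Hebbian cross-modal connections


def som_activation(W, x, sigma=2.0):
    """Gaussian activation of each unit as a function of prototype distance."""
    d = np.linalg.norm(W - x, axis=1)
    return np.exp(-d ** 2 / (2 * sigma ** 2))


def som_update(W, x, act, lr=0.1):
    """Move every prototype toward the input, weighted by its activation."""
    return W + lr * act[:, None] * (x - W)


def train_pair(audio, visual, lr=0.1, coupling=0.5):
    """One joint presentation of a label (audio) and an object view (visual)."""
    global W_aud, W_vis, H
    a_act = som_activation(W_aud, audio)
    v_act = som_activation(W_vis, visual)
    # Cross-modal propagation: auditory activity biases the visual map.
    v_act = v_act + coupling * (H.T @ a_act)
    W_aud = som_update(W_aud, audio, a_act, lr)
    W_vis = som_update(W_vis, visual, v_act, lr)
    H += lr * np.outer(a_act, v_act)          # Hebbian co-activation update
    H /= max(np.abs(H).max(), 1.0)            # keep connection weights bounded


# Synthetic co-occurring label/view pairs, only to exercise the update rules.
for _ in range(100):
    train_pair(rng.random(D_AUD), rng.random(D_VIS))
print("strongest cross-modal link:", np.unravel_index(H.argmax(), H.shape))
```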
Computational Audiovisual Scene Analysis in Online Adaptation of Audio-Motor Maps
IEEE Transactions on Autonomous Mental Development Pub Date: 2013-12-01 DOI: 10.1109/TAMD.2013.2257766
Rujiao Yan, Tobias Rodemann, B. Wrede
{"title":"Computational Audiovisual Scene Analysis in Online Adaptation of Audio-Motor Maps","authors":"Rujiao Yan, Tobias Rodemann, B. Wrede","doi":"10.1109/TAMD.2013.2257766","DOIUrl":"https://doi.org/10.1109/TAMD.2013.2257766","url":null,"abstract":"For sound localization, the binaural auditory system of a robot needs audio-motor maps, which represent the relationship between certain audio features and the position of the sound source. This mapping is normally learned during an offline calibration in controlled environments, but we show that using computational audiovisual scene analysis (CAVSA), it can be adapted online in free interaction with a number of a priori unknown speakers. CAVSA enables a robot to understand dynamic dialog scenarios, such as the number and position of speakers, as well as who is the current speaker. Our system does not require specific robot motions and thus can work during other tasks. The performance of online-adapted maps is continuously monitored by computing the difference between online-adapted and offline-calibrated maps and also comparing sound localization results with ground truth data (if available). We show that our approach is more robust in multiperson scenarios than the state of the art in terms of learning progress. We also show that our system is able to bootstrap with a randomized audio-motor map and adapt to hardware modifications that induce a change in audio-motor maps.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"5 1","pages":"273-287"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2013.2257766","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62761367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
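A hedged sketch of the general idea of online audio-motor map adaptation and its monitoring, not the CAVSA system itself: the map is a lookup from a discretized audio cue to a head angle, pulled toward the visually confirmed speaker position whenever one is available, and compared against an offline-calibrated reference map as a monitoring signal. The bin count, learning rate, and simulated interaction are assumptions.

```python
# Audio-motor map as a lookup table adapted online from audiovisual evidence,
# with monitoring against an offline-calibrated reference. Illustrative only.

import numpy as np

N_BINS = 36
offline_map = np.linspace(-90.0, 90.0, N_BINS)                   # calibrated reference (degrees)
online_map = np.random.default_rng(1).uniform(-90, 90, N_BINS)   # bootstrapped at random


def adapt(audio_bin, visual_angle, lr=0.2):
    """Pull the map entry for this audio cue toward the visually confirmed angle."""
    online_map[audio_bin] += lr * (visual_angle - online_map[audio_bin])


def map_difference():
    """Monitoring signal: mean absolute difference to the offline-calibrated map."""
    return float(np.mean(np.abs(online_map - offline_map)))


# Simulated interaction: speakers appear at random angles; for simplicity we
# assume the audio cue falls into the bin the offline calibration would assign.
rng = np.random.default_rng(2)
for step in range(500):
    true_angle = rng.uniform(-90, 90)
    audio_bin = int(np.argmin(np.abs(offline_map - true_angle)))
    adapt(audio_bin, true_angle)
    if step % 100 == 0:
        print(f"step {step}: mean |online - offline| = {map_difference():.1f} deg")
```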
A Robotic Model of Reaching and Grasping Development
IEEE Transactions on Autonomous Mental Development Pub Date: 2013-12-01 DOI: 10.1109/TAMD.2013.2264321
Piero Savastano, S. Nolfi
{"title":"A Robotic Model of Reaching and Grasping Development","authors":"Piero Savastano, S. Nolfi","doi":"10.1109/TAMD.2013.2264321","DOIUrl":"https://doi.org/10.1109/TAMD.2013.2264321","url":null,"abstract":"We present a neurorobotic model that develops reaching and grasping skills analogous to those displayed by infants during their early developmental stages. The learning process is realized in an incremental manner, taking into account the reflex behaviors initially possessed by infants and the neurophysiological and cognitive maturation occurring during the relevant developmental period. The behavioral skills acquired by the robots closely match those displayed by children. The comparison between incremental and nonincremental experiments demonstrates how some of the limitations characterizing the initial developmental phase channel the learning process toward better solutions.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"5 1","pages":"326-336"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2013.2264321","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62761537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 21
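An illustrative sketch of an incremental (staged) training schedule of the general kind used in such developmental experiments, in which later stages start from the weights of earlier ones while capabilities are progressively unlocked. The stages, flags, and the stubbed train_stage function are hypothetical, not the authors' setup.

```python
# Staged ("incremental") training schedule: each stage resumes from the previous
# policy while more joints and finer vision become available. Illustrative only.

def train_stage(policy, unlocked_joints, visual_acuity, episodes):
    # Placeholder for an actual training loop (e.g., neuroevolution or RL).
    print(f"training {episodes} episodes, joints={unlocked_joints}, acuity={visual_acuity}")
    return policy  # would return updated weights


stages = [
    {"unlocked_joints": ["shoulder", "elbow"], "visual_acuity": 0.3, "episodes": 200},            # gross reaching
    {"unlocked_joints": ["shoulder", "elbow", "wrist"], "visual_acuity": 0.6, "episodes": 200},   # refined reaching
    {"unlocked_joints": ["shoulder", "elbow", "wrist", "fingers"], "visual_acuity": 1.0, "episodes": 400},  # grasping
]

policy = {}  # initial weights; a reflex-like prior could be encoded here
for stage in stages:
    policy = train_stage(policy, **stage)
```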
Learning to Reproduce Fluctuating Time Series by Inferring Their Time-Dependent Stochastic Properties: Application in Robot Learning Via Tutoring
IEEE Transactions on Autonomous Mental Development Pub Date: 2013-12-01 DOI: 10.1109/TAMD.2013.2258019
Shingo Murata, Jun Namikawa, H. Arie, S. Sugano, J. Tani
{"title":"Learning to Reproduce Fluctuating Time Series by Inferring Their Time-Dependent Stochastic Properties: Application in Robot Learning Via Tutoring","authors":"Shingo Murata, Jun Namikawa, H. Arie, S. Sugano, J. Tani","doi":"10.1109/TAMD.2013.2258019","DOIUrl":"https://doi.org/10.1109/TAMD.2013.2258019","url":null,"abstract":"This study proposes a novel type of dynamic neural network model that can learn to extract stochastic or fluctuating structures hidden in time series data. The network learns to predict not only the mean of the next input state, but also its time-dependent variance. The training method is based on maximum likelihood estimation by using the gradient descent method and the likelihood function is expressed as a function of the estimated variance. Regarding the model evaluation, we present numerical experiments in which training data were generated in different ways utilizing Gaussian noise. Our analysis showed that the network can predict the time-dependent variance and the mean and it can also reproduce the target stochastic sequence data by utilizing the estimated variance. Furthermore, it was shown that a humanoid robot using the proposed network can learn to reproduce latent stochastic structures hidden in fluctuating tutoring trajectories. This learning scheme is essential for the acquisition of sensory-guided skilled behavior.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"10 1","pages":"298-310"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2013.2258019","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62761375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 62
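A minimal sketch of the objective implied by the abstract above: when a network outputs both a mean and a time-dependent variance for the next state, maximum-likelihood training amounts to minimizing the Gaussian negative log-likelihood, and closed-loop reproduction can sample with the learned variance. The network itself is left abstract here; shapes and names are illustrative assumptions.

```python
# Gaussian negative log-likelihood for a predictor that outputs a mean and a
# time-dependent log-variance, plus a sampling step for closed-loop reproduction.

import numpy as np


def gaussian_nll(x_next, mean, log_var):
    """Negative log-likelihood of x_next under N(mean, exp(log_var)).

    x_next, mean, log_var: arrays of shape (T, dim). Predicting the log-variance
    keeps the variance positive without an explicit constraint.
    """
    var = np.exp(log_var)
    nll = 0.5 * (np.log(2 * np.pi) + log_var + (x_next - mean) ** 2 / var)
    return nll.sum()


def reproduce_step(mean_t, log_var_t, rng):
    """Closed-loop reproduction: sample the next state using the learned variance."""
    return mean_t + np.exp(0.5 * log_var_t) * rng.standard_normal(mean_t.shape)


# Toy check: the loss is lowest when the predicted std matches the true one (0.5).
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.5, size=(100, 1))                  # "observed" fluctuations
for guess in (0.1, 0.5, 2.0):
    print(guess, gaussian_nll(x, np.zeros_like(x), np.full_like(x, 2 * np.log(guess))))
```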