IEEE Transactions on Autonomous Mental Development: Latest Articles

The Fourth IEEE International Conference on Development and Learning and on Epigenetic Robotics (ICDL-EpiRob) 2014: Conference Summary and Report
IEEE Transactions on Autonomous Mental Development. Pub Date: 2014-12-12. DOI: 10.1109/TAMD.2014.2377335
G. Metta, L. Natale
Page: 243
Citations: 0
Editorial Renewal for the IEEE Transactions on Autonomous Mental Development
IEEE Transactions on Autonomous Mental Development. Pub Date: 2014-12-01. DOI: 10.1109/TAMD.2014.2377274
Zhengyou Zhang
Pages: 241-242
Citations: 9
What Strikes the Strings of Your Heart?–Multi-Label Dimensionality Reduction for Music Emotion Analysis via Brain Imaging
IEEE Transactions on Autonomous Mental Development. Pub Date: 2014-11-03. DOI: 10.1145/2647868.2655068
Yang Liu, Yan Liu, Yu Zhao, K. Hua
Abstract: After 20 years of extensive study in psychology, some musical factors have been identified that can evoke certain kinds of emotions. However, the underlying mechanism of the relationship between music and emotion remains unanswered. This paper intends to find the genuine correlates of music emotion by exploring a systematic and quantitative framework. The task is formulated as a dimensionality reduction problem, which seeks the complete and compact feature set with intrinsic correlates for the given objectives. Since a song generally elicits more than one emotion, we explore dimensionality reduction techniques for multi-label classification. One challenging problem is that hard labels cannot represent the extent of an emotion, and it is also difficult to ask subjects to quantize their feelings. This work utilizes the electroencephalography (EEG) signal to address this challenge. A learning scheme called EEG-based emotion smoothing (E2S) and a bilinear multi-emotion similarity preserving embedding (BME-SPE) algorithm are proposed. We validate the effectiveness of the proposed framework on the standard dataset CAL-500. Several influential correlates have been identified, and classification via those correlates achieves good performance. We build a Chinese music dataset according to the identified correlates and find that music from different cultures may share similar emotions.
Pages: 176-188
Citations: 14
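The E2S idea above, turning hard 0/1 emotion tags into graded labels with the help of an auxiliary similarity signal, can be illustrated with a small sketch. In the paper the similarity comes from EEG responses; this stand-in uses a plain feature-space distance instead, and the function name, `k`, and `alpha` are all hypothetical choices, not the paper's formulation.

```python
import numpy as np

def smooth_labels(features, hard_labels, k=2, alpha=0.5):
    """Soften hard multi-labels by blending each item's tags with the
    mean tags of its k most similar items (a stand-in for E2S, where
    similarity would be derived from EEG signals)."""
    X = np.asarray(features, dtype=float)
    Y = np.asarray(hard_labels, dtype=float)
    smoothed = np.empty_like(Y)
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)  # distance to every item
        d[i] = np.inf                          # exclude the item itself
        nn = np.argsort(d)[:k]                 # k nearest neighbours
        smoothed[i] = (1 - alpha) * Y[i] + alpha * Y[nn].mean(axis=0)
    return smoothed
```

The result is a soft label in [0, 1] per emotion, which a multi-label embedding such as BME-SPE could then consume in place of the original binary tags.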
Optimal Rewards for Cooperative Agents
IEEE Transactions on Autonomous Mental Development. Pub Date: 2014-10-13. DOI: 10.1109/TAMD.2014.2362682
B. Liu, Satinder Singh, Richard L. Lewis, S. Qin
Abstract: Following work on designing optimal rewards for single agents, we define a multiagent optimal rewards problem (ORP) in cooperative (specifically, common-payoff or team) settings. This new problem solves for individual agent reward functions that guide agents to better overall team performance relative to teams in which all agents guide their behavior with the same given team-reward function. We present a multiagent architecture in which each agent learns good reward functions from experience using a gradient-based algorithm, in addition to performing the usual task of planning good policies (in this case with respect to the learned rather than the given reward function). Multiagency introduces the challenge of nonstationarity: because the agents learn simultaneously, each agent's reward-learning problem is nonstationary and interdependent with the other agents' evolving reward functions. We demonstrate on two simple domains that the proposed architecture outperforms the conventional approach in which all agents use the same given team-reward function (even when accounting for the resource overhead of reward learning); that the learning algorithm performs stably despite the nonstationarity; and that learning individual reward functions can lead to better specialization of roles than is possible with a shared reward, whether learned or given.
Pages: 286-297
Citations: 14
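The optimal rewards abstract above describes searching for per-agent internal reward functions that improve a common team payoff. A minimal sketch of that idea, substituting naive random search for the paper's gradient-based algorithm and using a toy two-agent game invented for illustration (all names and the domain are hypothetical):

```python
import random

def team_payoff(actions):
    # Toy common-payoff game: the team scores only when the two agents
    # specialize into different roles (one takes action 0, the other 1).
    return 1.0 if set(actions) == {0, 1} else 0.0

def agent_policy(theta):
    # Each agent acts greedily w.r.t. its own internal reward parameters:
    # theta[a] is the internal reward for taking action a.
    return max(range(len(theta)), key=lambda a: theta[a])

def search_internal_rewards(n_agents=2, n_actions=2, iters=200, seed=0):
    # Outer loop of the ORP: sample candidate internal reward functions,
    # let each agent optimize against its own, and keep the candidate
    # set that maximizes the shared team payoff.
    rng = random.Random(seed)
    best_thetas, best_score = None, -1.0
    for _ in range(iters):
        thetas = [[rng.random() for _ in range(n_actions)]
                  for _ in range(n_agents)]
        actions = [agent_policy(th) for th in thetas]
        score = team_payoff(actions)
        if score > best_score:
            best_thetas, best_score = thetas, score
    return best_thetas, best_score
```

In this toy game the payoff is maximized only when the agents act differently, so the internal rewards found differ per agent even though the team reward is shared: a crude analogue of the role specialization the paper reports.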
Learning from Demonstration in Robots using the Shared Circuits Model
IEEE Transactions on Autonomous Mental Development. Pub Date: 2014-10-01. DOI: 10.1109/TAMD.2014.2359912
Khawaja M. U. Suleman, M. Awais
Abstract: Learning from demonstration offers an alternative method for programming robots with different nontrivial behaviors. Various techniques that address learning from demonstration in robots have been proposed, but they do not scale up well, so novel solutions to this problem are needed. Given that the basic idea for such learning comes from nature, in the form of imitation in a few animal species, it makes sense to take advantage of the rigorous study of imitative learning available in the relevant natural sciences. In this work, a solution for robot learning is sought in a relatively recent theory from the natural sciences, the Shared Circuits Model: a comprehensive, multidisciplinary synthesis that brings together theories explaining imitation and other related social functions from various sciences. This paper imports the Shared Circuits Model into robotics for learning from demonstration. Specifically it: (1) expresses the Shared Circuits Model in a software design nomenclature; (2) heuristically extends the basic specification of the Shared Circuits Model to implement a working imitative learning system; (3) applies the extended model to mobile robot navigation in a simulated indoor environment; and (4) attempts to validate the Shared Circuits Model theory in the context of imitative learning. Results show that an extremely simple implementation of this theoretically sound theory offers a realistic solution for robot learning from demonstration of nontrivial tasks.
Pages: 244-258
Citations: 2
A Hierarchical System for a Distributed Representation of the Peripersonal Space of a Humanoid Robot
IEEE Transactions on Autonomous Mental Development. Pub Date: 2014-06-26. DOI: 10.1109/TAMD.2014.2332875
Marco Antonelli, A. Gibaldi, Frederik Beuth, A. J. Duran, A. Canessa, Manuela Chessa, F. Solari, A. P. Pobil, F. Hamker, E. Chinellato, S. Sabatini
Abstract: Reaching a target object in an unknown and unstructured environment is easily performed by human beings, but designing a humanoid robot that executes the same task requires the implementation of complex abilities, such as identifying the target in the visual field, estimating its spatial location, and precisely driving the motors of the arm to reach it. While research usually tackles the development of such abilities individually, in this work we integrate a number of computational models into a unified framework and demonstrate, in a humanoid torso, the feasibility of an integrated working representation of its peripersonal space. To achieve this goal, we propose a cognitive architecture that connects several models inspired by neural circuits of the visual, frontal, and posterior parietal cortices of the brain. The outcome of the integration process is a system that allows the robot to create its internal model and its representation of the surrounding space by interacting with the environment directly, through a mutual adaptation of perception and action. The robot is eventually capable of executing a set of tasks, such as recognizing, gazing at, and reaching target objects, which can work separately or cooperate to support more structured and effective behaviors.
Pages: 259-273
Citations: 26
A Wearable Camera Detects Gaze Peculiarities during Social Interactions in Young Children with Pervasive Developmental Disorders
IEEE Transactions on Autonomous Mental Development. Pub Date: 2014-06-03. DOI: 10.1109/TAMD.2014.2327812
Silvia Magrelli, Basilio Noris, Patrick Jermann, F. Ansermet, F. Hentsch, J. Nadel, A. Billard
Abstract: We report on a study of gaze, conducted on children with pervasive developmental disorders (PDD), using a novel head-mounted eye-tracking device called the WearCam. Due to the portable nature of the WearCam, we are able to monitor naturalistic interactions between the children and adults. The study involved a group of 3- to 11-year-old children (n=13) with PDD compared to a group of typically developing (TD) children (n=13) between 2 and 6 years old. We found significant differences between the two groups in the proportion and frequency of episodes of looking directly at faces across the whole set of experiments. We also conducted a differentiated analysis, in two social conditions, of the gaze patterns directed at an adult's face when the adult addressed the child either verbally or through facial expressions of emotion. We observe that children with PDD show a marked tendency to look more at the adult's face when she makes facial expressions than when she speaks.
Pages: 274-285
Citations: 6
The MEI Robot: Towards Using Motherese to Develop Multimodal Emotional Intelligence
IEEE Transactions on Autonomous Mental Development. Pub Date: 2014-06-01. DOI: 10.1109/TAMD.2014.2317513
Angelica Lim, Hiroshi G. Okuno
Abstract: We introduce the first steps in a developmental robot called MEI (Multimodal Emotional Intelligence), a robot that can understand and express emotions in voice, gesture, and gait using a controller trained only on voice. Whereas it is known that humans can perceive affect in voice, movement, music, and even in as little as point-light displays, it is not clear how humans develop this skill. Is it innate? If not, how does this emotional intelligence develop in infants? The MEI robot develops these skills through vocal input and perceptual mapping of vocal features to other modalities. We base MEI's development on the idea that motherese is used to associate dynamic vocal contours with facial emotion from an early age. MEI uses these dynamic contours to both understand and express multimodal emotions using a unified model called SIRE (Speed, Intensity, irRegularity, and Extent). Offline experiments with MEI support its cross-modal generalization ability: a model trained with voice data can recognize happiness, sadness, and fear in a completely different modality, human gait. User evaluations of the MEI robot speaking, gesturing, and walking show that it can reliably express multimodal happiness and sadness using only the voice-trained model as a basis.
Pages: 126-138
Citations: 45
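The SIRE model described above represents emotion as a modality-independent tuple (Speed, Intensity, irRegularity, Extent) that is extracted from one modality and rendered in another. A sketch of that cross-modal idea follows; the feature scalings and the gait parameters are invented stand-ins for illustration, not the paper's actual mappings.

```python
from dataclasses import dataclass

@dataclass
class Sire:
    # Modality-independent emotion representation, each dimension in 0..1.
    speed: float         # e.g. speech rate or stride frequency
    intensity: float     # e.g. loudness or movement energy
    irregularity: float  # e.g. jitter or timing variability
    extent: float        # e.g. pitch range or step length

def voice_to_sire(rate, loudness, jitter, pitch_range):
    # Map (pre-normalized) voice features into the shared SIRE space.
    clip = lambda x: max(0.0, min(1.0, x))
    return Sire(clip(rate), clip(loudness), clip(jitter), clip(pitch_range))

def sire_to_gait(s):
    # Render the same SIRE tuple as gait parameters (stand-in mapping):
    # a fast, energetic, wide voice profile yields a brisk, long stride.
    return {
        "stride_hz": 0.5 + 1.5 * s.speed,
        "step_length_m": 0.3 + 0.4 * s.extent,
        "ground_force": 0.5 + 0.5 * s.intensity,
        "timing_jitter": 0.2 * s.irregularity,
    }
```

Because both modalities read from the same four-dimensional tuple, a model trained only on voice can, in principle, drive expression in gait — the generalization the abstract reports.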
Guest Editorial: Behavior Understanding and Developmental Robotics
IEEE Transactions on Autonomous Mental Development. Pub Date: 2014-06-01. DOI: 10.1109/TAMD.2014.2328731
A. A. Salah, Pierre-Yves Oudeyer, Çetin Meriçli, Javier Ruiz-del-Solar
Abstract: The scientific, technological, and application challenges that arise from the mutual interaction of developmental robotics and computational human behavior understanding give rise to two different perspectives. On one hand, robots need to learn dynamically and incrementally how to interpret, and thus understand, multimodal human behavior; that is, behavior analysis performed for developmental robotics. On the other hand, behavior analysis can also be performed through developmental robotics, since developmental social robots offer stimulating opportunities for improving the scientific understanding of human behavior, and especially for a deeper analysis of the semantics and structure of human behavior. The contributions to this Special Issue explore these two perspectives.
Pages: 77-79
Citations: 3
Corrections to “An Approach to Subjective Computing: A Robot That Learns From Interaction With Humans”
IEEE Transactions on Autonomous Mental Development. Pub Date: 2014-06-01. DOI: 10.1109/TAMD.2014.2328774
P. Gruneberg, Kenji Suzuki
Page: 168
Citations: 0