Predicting Human Interpretations of Affect and Valence in a Social Robot

D. McNeill, C. Kennington
{"title":"Predicting Human Interpretations of Affect and Valence in a Social Robot","authors":"D. McNeill, C. Kennington","doi":"10.15607/RSS.2019.XV.041","DOIUrl":null,"url":null,"abstract":"In this paper we seek to understand how people interpret a social robot’s performance of an emotion, what we term ‘affective display,’ and the positive or negative valence of that affect. To this end, we tasked annotators with observing the Anki Cozmo robot perform its over 900 pre-scripted behaviors and labeling those behaviors with 16 possible affective display labels (e.g., interest, boredom, disgust, etc.). In our first experiment, we trained a neural network to predict annotated labels given multimodal information about the robot’s movement, face, and audio. The results suggest that pairing affects to predict the valence between them is more informative, which we confirmed in a second experiment. Both experiments show that certain modalities are more useful for predicting displays of affect and valence. For our final experiment, we generated novel robot behaviors and tasked human raters with assigning scores to valence pairs instead of applying labels, then compared our model’s predictions of valence between the affective pairs and compared the results to the human ratings. We conclude that some modalities have information that can be contributory or inhibitive when considered in conjunction with other modalities, depending on the emotional valence pair being considered.","PeriodicalId":307591,"journal":{"name":"Robotics: Science and Systems XV","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Robotics: Science and Systems XV","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.15607/RSS.2019.XV.041","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 11

Abstract

In this paper we seek to understand how people interpret a social robot’s performance of an emotion, what we term ‘affective display,’ and the positive or negative valence of that affect. To this end, we tasked annotators with observing the Anki Cozmo robot perform its over 900 pre-scripted behaviors and labeling those behaviors with 16 possible affective display labels (e.g., interest, boredom, disgust, etc.). In our first experiment, we trained a neural network to predict annotated labels given multimodal information about the robot’s movement, face, and audio. The results suggest that pairing affects to predict the valence between them is more informative, which we confirmed in a second experiment. Both experiments show that certain modalities are more useful for predicting displays of affect and valence. For our final experiment, we generated novel robot behaviors and tasked human raters with assigning scores to valence pairs instead of applying labels, then compared our model’s predictions of valence between the affective pairs and compared the results to the human ratings. We conclude that some modalities have information that can be contributory or inhibitive when considered in conjunction with other modalities, depending on the emotional valence pair being considered.
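The first experiment described above trains a neural network that maps multimodal features of each behavior (movement, face, audio) to one of the 16 affective-display labels. The sketch below illustrates one plausible late-fusion setup for such a classifier; the feature dimensions, encoder sizes, and fusion strategy are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch of a multimodal affect classifier (assumed architecture and
# feature sizes; the paper does not specify its exact network).
import torch
import torch.nn as nn

class MultimodalAffectClassifier(nn.Module):
    def __init__(self, movement_dim=32, face_dim=64, audio_dim=128,
                 hidden_dim=64, num_labels=16):
        super().__init__()
        # One small encoder per modality.
        self.movement_enc = nn.Sequential(nn.Linear(movement_dim, hidden_dim), nn.ReLU())
        self.face_enc = nn.Sequential(nn.Linear(face_dim, hidden_dim), nn.ReLU())
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden_dim), nn.ReLU())
        # Late fusion: concatenate encoded modalities, then classify.
        self.classifier = nn.Linear(3 * hidden_dim, num_labels)

    def forward(self, movement, face, audio):
        fused = torch.cat([self.movement_enc(movement),
                           self.face_enc(face),
                           self.audio_enc(audio)], dim=-1)
        return self.classifier(fused)  # logits over the 16 affect labels

# Toy usage: random features for a batch of 4 robot behaviors.
model = MultimodalAffectClassifier()
logits = model(torch.randn(4, 32), torch.randn(4, 64), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 16])
```

The second and third experiments reframe the output as valence predictions between pairs of affects rather than single labels, but the same per-modality encoding and fusion pattern applies.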