The Role of Social Dialogue and Errors in Robots

Gale M. Lucas, Jill Boberg, D. Traum, Ron Artstein, J. Gratch, Alesia Gainer, Emmanuel Johnson, A. Leuski, Mikio Nakano
Proceedings of the 5th International Conference on Human Agent Interaction (HAI '17), published 2017-10-17. DOI: 10.1145/3125739.3132617
Citations: 12

Abstract

Social robots establish rapport with human users. This work explores the extent to which rapport-building can benefit (or harm) conversations with robots, and under what circumstances this occurs. For example, previous work has shown that agents that make conversational errors are less capable of influencing people than agents that do not make errors [1]. Some work has shown this effect with robots, but prior research has not considered additional factors such as the level of rapport between the person and the robot. We predicted that building rapport through a social dialogue (such as an ice-breaker) could mitigate the detrimental effect of a robot's errors on influence. Our study used a Nao robot programmed to persuade users to agree with its rankings on two "survival tasks" (e.g., lunar survival task). We manipulated both errors and social dialogue: the robot either exhibited errors in the second survival task or not, and users either engaged in an ice-breaker with the robot between the two survival tasks or completed a control task. Replicating previous research, errors tended to reduce the robot's influence in the second survival task. Contrary to our prediction, results revealed that the ice-breaker did not mitigate the effect of errors, and if anything, errors were more harmful after the ice-breaker (intended to build rapport) than in the control condition. This backfiring of attempted rapport-building may be due to a contrast effect, suggesting that the design of social robots should avoid introducing dialogues of incongruent quality.