The “Eve effect bias”: Epistemic Vigilance and Human Belief in Concealed Capacities of Social Robots

Robin Gigandet, Xénia Dutoit, Bing-chuan Li, Maria C. Diana, T. Nazir
{"title":"The “Eve effect bias”: Epistemic Vigilance and Human Belief in Concealed Capacities of Social Robots","authors":"Robin Gigandet, Xénia Dutoit, Bing-chuan Li, Maria C. Diana, T. Nazir","doi":"10.1109/ARSO56563.2023.10187469","DOIUrl":null,"url":null,"abstract":"Artificial social agents (ASAs) are gaining popularity, but reports suggest that humans don't always coexist harmoniously with them. This exploratory study examined whether humans pay attention to cues of falsehood or deceit when interacting with ASAs. To infer such epistemic vigilance, participants' N400 brain signals were analyzed in response to discrepancies between a robot's physical appearance and its speech, and ratings were collected for statements about the robot's cognitive ability. First results suggest that humans do exhibit epistemic vigilance, as evidenced 1) by a more pronounced N400 component when participants heard sentences contradicting the robot's physical abilities and 2) by overall lower rating scores for the robot's cognitive abilities. However, approximately two-thirds of participants showed a “concealed capacity bias,” whereby they reported believing that the robot could have concealed arms or legs, despite physical evidence to the contrary. This bias, referred to as the “Eve effect bias” reduced the N400 effect and amplified the perception of the robot, suggesting that individuals influenced by this bias may be less critical of the accuracy and plausibility of information provided by artificial agents. Consequently, humans may accept information from ASAs even when it contradicts common sense. These findings emphasize the need for transparency, unbiased information processing, and user education about the limitations and capabilities of ASAs.","PeriodicalId":382832,"journal":{"name":"2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ARSO56563.2023.10187469","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Artificial social agents (ASAs) are gaining popularity, but reports suggest that humans do not always coexist harmoniously with them. This exploratory study examined whether humans pay attention to cues of falsehood or deceit when interacting with ASAs. To infer such epistemic vigilance, participants' N400 brain signals were analyzed in response to discrepancies between a robot's physical appearance and its speech, and ratings were collected for statements about the robot's cognitive abilities. First results suggest that humans do exhibit epistemic vigilance, as evidenced 1) by a more pronounced N400 component when participants heard sentences contradicting the robot's physical abilities and 2) by overall lower rating scores for the robot's cognitive abilities. However, approximately two-thirds of participants showed a “concealed capacity bias,” whereby they reported believing that the robot could have concealed arms or legs, despite physical evidence to the contrary. This bias, referred to as the “Eve effect bias,” reduced the N400 effect and amplified the perceived capacities of the robot, suggesting that individuals influenced by it may be less critical of the accuracy and plausibility of information provided by artificial agents. Consequently, humans may accept information from ASAs even when it contradicts common sense. These findings emphasize the need for transparency, unbiased information processing, and user education about the limitations and capabilities of ASAs.
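To make the N400 measure concrete: the component is typically quantified as the mean voltage in a 300–500 ms post-stimulus window over centro-parietal electrodes, compared between congruent and incongruent conditions. Below is a minimal sketch of such an analysis in MNE-Python; the file name, condition labels ("congruent"/"incongruent"), electrode choices, and time window are illustrative assumptions, not details taken from this paper.

```python
# Minimal sketch (not the authors' pipeline): quantify an N400 effect as the
# mean 300-500 ms amplitude at centro-parietal sites, comparing hypothetical
# "congruent" vs. "incongruent" sentence conditions.
import mne

EPOCHS_FILE = "robot_speech-epo.fif"    # hypothetical preprocessed epochs file
N400_WINDOW = (0.300, 0.500)            # typical N400 latency window, seconds
CENTRO_PARIETAL = ["Cz", "CPz", "Pz"]   # common N400 electrode sites (assumed)

epochs = mne.read_epochs(EPOCHS_FILE, preload=True)

def mean_n400_amplitude(evoked: mne.Evoked) -> float:
    """Mean amplitude over the N400 window at centro-parietal channels, in µV."""
    data = (evoked.copy()
                  .pick(CENTRO_PARIETAL)
                  .crop(*N400_WINDOW)
                  .data)                # shape (n_channels, n_times), in volts
    return float(data.mean()) * 1e6    # volts -> microvolts

congruent = mean_n400_amplitude(epochs["congruent"].average())
incongruent = mean_n400_amplitude(epochs["incongruent"].average())

# A more negative value for incongruent sentences (e.g., claims contradicting
# the robot's visible body) is the classic N400 effect.
print(f"congruent:   {congruent:+.2f} µV")
print(f"incongruent: {incongruent:+.2f} µV")
print(f"N400 effect: {incongruent - congruent:+.2f} µV")
```

Mean amplitude over a fixed window is used here rather than peak picking because it is more robust to noise in condition averages; a more negative mean for incongruent sentences is the standard signature of the N400 effect.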