The “Eve effect bias”: Epistemic Vigilance and Human Belief in Concealed Capacities of Social Robots

Robin Gigandet, Xénia Dutoit, Bing-chuan Li, Maria C. Diana, T. Nazir

2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO), June 5, 2023. DOI: 10.1109/ARSO56563.2023.10187469
Abstract
Artificial social agents (ASAs) are gaining popularity, but reports suggest that humans do not always coexist harmoniously with them. This exploratory study examined whether humans pay attention to cues of falsehood or deceit when interacting with ASAs. To infer such epistemic vigilance, participants' N400 brain signals were analyzed in response to discrepancies between a robot's physical appearance and its speech, and ratings were collected for statements about the robot's cognitive abilities. First results suggest that humans do exhibit epistemic vigilance, as evidenced 1) by a more pronounced N400 component when participants heard sentences contradicting the robot's physical abilities and 2) by overall lower rating scores for the robot's cognitive abilities. However, approximately two-thirds of participants showed a “concealed capacity bias,” whereby they reported believing that the robot could have concealed arms or legs, despite physical evidence to the contrary. This bias, referred to as the “Eve effect bias,” reduced the N400 effect and amplified the perceived capacities of the robot, suggesting that individuals influenced by this bias may be less critical of the accuracy and plausibility of information provided by artificial agents. Consequently, humans may accept information from ASAs even when it contradicts common sense. These findings emphasize the need for transparency, unbiased information processing, and user education about the limitations and capabilities of ASAs.