{"title":"在人工顾问和算法顾问之间进行选择:责任分担的作用","authors":"Lior Gazit , Ofer Arazy , Uri Hertz","doi":"10.1016/j.chbah.2023.100009","DOIUrl":null,"url":null,"abstract":"<div><p>Algorithms are increasingly employed to provide highly accurate advice and recommendations across domains, yet in many cases people tend to prefer human advisors. Studies to date have focused mainly on the advisor’s perceived competence and the outcome of the advice as determinants of advice takers’ willingness to accept advice from human and algorithmic advisors and to arbitrate between them. Here we examine the role of another factor that is not directly related to the outcome: the advice taker’s ability to psychologically offload responsibility for the decision’s potential consequences. Building on studies showing differences in responsibility attribution between human and algorithmic advisors, we hypothesize that, controlling for the effects of the advisor’s competence, the advisor's perceived responsibility is an important factor affecting advice takers’ choice between human and algorithmic advisors. In an experiment in two domains, Medical and Financial (N = 806), participants were asked to rate advisors’ perceived responsibility and choose between a human and algorithmic advisor. Our results show that human advisors were perceived as more responsible than algorithmic advisors and most importantly, that the perception of the advisor’s responsibility affected the preference for a human advisor over an algorithmic counterpart. Furthermore, we found that an experimental manipulation that impeded advice takers’ ability to offload responsibility affected the extent to which human, but not algorithmic, advisors were perceived as responsible. Together, our findings highlight the role of responsibility sharing in influencing algorithm aversion.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"1 2","pages":"Article 100009"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Choosing between human and algorithmic advisors: The role of responsibility sharing\",\"authors\":\"Lior Gazit , Ofer Arazy , Uri Hertz\",\"doi\":\"10.1016/j.chbah.2023.100009\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Algorithms are increasingly employed to provide highly accurate advice and recommendations across domains, yet in many cases people tend to prefer human advisors. Studies to date have focused mainly on the advisor’s perceived competence and the outcome of the advice as determinants of advice takers’ willingness to accept advice from human and algorithmic advisors and to arbitrate between them. Here we examine the role of another factor that is not directly related to the outcome: the advice taker’s ability to psychologically offload responsibility for the decision’s potential consequences. Building on studies showing differences in responsibility attribution between human and algorithmic advisors, we hypothesize that, controlling for the effects of the advisor’s competence, the advisor's perceived responsibility is an important factor affecting advice takers’ choice between human and algorithmic advisors. In an experiment in two domains, Medical and Financial (N = 806), participants were asked to rate advisors’ perceived responsibility and choose between a human and algorithmic advisor. 
Our results show that human advisors were perceived as more responsible than algorithmic advisors and most importantly, that the perception of the advisor’s responsibility affected the preference for a human advisor over an algorithmic counterpart. Furthermore, we found that an experimental manipulation that impeded advice takers’ ability to offload responsibility affected the extent to which human, but not algorithmic, advisors were perceived as responsible. Together, our findings highlight the role of responsibility sharing in influencing algorithm aversion.</p></div>\",\"PeriodicalId\":100324,\"journal\":{\"name\":\"Computers in Human Behavior: Artificial Humans\",\"volume\":\"1 2\",\"pages\":\"Article 100009\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in Human Behavior: Artificial Humans\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2949882123000099\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882123000099","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Choosing between human and algorithmic advisors: The role of responsibility sharing
Algorithms are increasingly employed to provide highly accurate advice and recommendations across domains, yet in many cases people tend to prefer human advisors. Studies to date have focused mainly on the advisor’s perceived competence and the outcome of the advice as determinants of advice takers’ willingness to accept advice from human and algorithmic advisors and to arbitrate between them. Here we examine the role of another factor that is not directly related to the outcome: the advice taker’s ability to psychologically offload responsibility for the decision’s potential consequences. Building on studies showing differences in responsibility attribution between human and algorithmic advisors, we hypothesize that, controlling for the effects of the advisor’s competence, the advisor’s perceived responsibility is an important factor affecting advice takers’ choice between human and algorithmic advisors. In an experiment in two domains, Medical and Financial (N = 806), participants were asked to rate advisors’ perceived responsibility and to choose between a human and an algorithmic advisor. Our results show that human advisors were perceived as more responsible than algorithmic advisors and, most importantly, that the perception of the advisor’s responsibility affected the preference for a human advisor over an algorithmic counterpart. Furthermore, we found that an experimental manipulation that impeded advice takers’ ability to offload responsibility affected the extent to which human, but not algorithmic, advisors were perceived as responsible. Together, our findings highlight the role of responsibility sharing in influencing algorithm aversion.