Can Moral Rightness (Utilitarian Approach) Outweigh the Ingroup Favoritism Bias in Human-Agent Interaction
Aldo Chavez Gonzalez, Marlena R. Fraune, Ricarda Wullenkord
Proceedings of the 10th International Conference on Human-Agent Interaction, December 5, 2022. DOI: 10.1145/3527188.3561930
As robots increasingly assist more people, people's tendencies to become attached to these robots and to treat them well have risen, even to the point of treating robot teammates better than human opponents in laboratory settings. We examined how far this ingroup favoritism extends and how to mitigate it. Participants played an online game in teams of two humans and two robots against a team of two humans and two robots. After the game, they selected one player to perform an additional task that our pilot test had established as unpleasant. In one condition, we manipulated that task to be equally unpleasant for ingroup and outgroup members; in the other condition, it was more unpleasant for outgroup than for ingroup members. We did this to examine whether the moral principle of utilitarianism (i.e., social justice and fairness) would outweigh ingroup favoritism. Participants showed typical group dynamics such as ingroup favoritism, and the opportunity to behave in a utilitarian way failed to reverse the ingroup favoritism effect. Interestingly, participants sacrificed their ingroup robot more than they sacrificed even outgroup players. We speculate about why the study showed these unexpected findings and what it may mean for HRI.