Dynamic Fairness Perceptions in Human-Robot Interaction
Houston Claure, Kate Candon, Inyoung Shin, Marynel Vázquez
arXiv - CS - Robotics · arXiv:2409.07560 · 2024-09-11
Abstract
People deeply care about how fairly they are treated by robots. The established paradigm for probing fairness in Human-Robot Interaction (HRI) measures perceptions of a robot's fairness at the conclusion of an interaction. However, this approach is limited: interactions vary over time, and fairness perceptions may change along with them. To validate this idea, we conducted a 2×2 mixed-design user study (N=40) investigating two factors: the timing of unfair robot actions (early or late in an interaction) and the beneficiary of those actions (either another robot or the participant). Our results show that fairness judgments are not static; they can shift based on the timing of unfair robot actions. Further, we explored using perceptions of three key factors proposed by a Fairness Theory from Organizational Justice (reduced welfare, conduct, and moral transgression) to predict momentary perceptions of fairness in our study. Interestingly, we found that the reduced welfare and moral transgression factors alone were better predictors than all three factors together. Our findings reinforce the idea that unfair robot behavior can shape perceptions of group dynamics and trust towards a robot, and they pave the way for future research on moment-to-moment fairness perceptions.
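The abstract does not specify the authors' modeling approach, so the following is only a minimal sketch of the comparison it describes: fitting a full three-factor linear model and a reduced two-factor model (reduced welfare and moral transgression) to momentary fairness ratings, then comparing fit. All variable names and data here are hypothetical, and ordinary least squares is an assumption, not the paper's stated method.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200  # hypothetical number of momentary fairness measurements

    # Hypothetical per-moment ratings of the three Fairness Theory factors.
    reduced_welfare = rng.normal(size=n)
    conduct = rng.normal(size=n)
    moral_transgression = rng.normal(size=n)

    # Synthetic momentary fairness rating, constructed so that welfare and
    # moral transgression dominate, mirroring the paper's qualitative finding.
    fairness = (-0.6 * reduced_welfare
                - 0.5 * moral_transgression
                + 0.1 * conduct
                + rng.normal(scale=0.5, size=n))

    def fit(predictors):
        # Stack predictors as columns and add an intercept term.
        X = sm.add_constant(np.column_stack(predictors))
        return sm.OLS(fairness, X).fit()

    full = fit([reduced_welfare, conduct, moral_transgression])
    reduced = fit([reduced_welfare, moral_transgression])

    # A lower AIC (or comparable adjusted R^2 with fewer predictors) favors
    # the more parsimonious two-factor model.
    print(f"full model:    adj R^2={full.rsquared_adj:.3f}, AIC={full.aic:.1f}")
    print(f"reduced model: adj R^2={reduced.rsquared_adj:.3f}, AIC={reduced.aic:.1f}")

On data like the synthetic sample above, the two-factor model can match the full model's explained variance while carrying a lower AIC, which is one way a smaller predictor set can be the "better" model.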