Title: Decision making strategies differ in the presence of collaborative explanations: two conjoint studies
Authors: Ludovik Çoba, M. Zanker, L. Rook, P. Symeonidis
DOI: 10.1145/3301275.3302304 (https://doi.org/10.1145/3301275.3302304)
Published in: Proceedings of the 24th International Conference on Intelligent User Interfaces
Publication date: 2018-05-29
Citations: 13
Abstract
Rating-based summary statistics are ubiquitous in e-commerce and are often crucial components of personalized recommendation mechanisms. Visual rating summarizations in particular have been identified as an important means of explaining why an item is presented or proposed to a user. Largely unexplored, however, is the extent to which the descriptive characteristics of these rating summary statistics influence the decision making of online consumers. We therefore conducted a series of two conjoint experiments to explore how different summarizations of rating distributions (i.e., the number of ratings, mean, variance, skewness, bimodality, or origin of the ratings) affect users' decision making. In a first study with over 200 participants, we found that users are guided primarily by the mean and the number of ratings and, to a lesser degree, by the variance and the origin of a rating. When probing the maximizing behavioral tendencies of our participants, further sensitivities to the summarized rating distributions became apparent. We therefore instrumented a follow-up eye-tracking study to explore in more detail how participants' choices vary with their decision making strategies. This second round, with over 40 additional participants, supported our hypothesis that users who usually experience higher decision difficulty follow compensatory decision strategies and focus more on the decisions they make. We conclude by outlining how the results of these studies can guide algorithm development and counterbalance presumable biases in implicit user feedback.
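The descriptives studied in the abstract (number of ratings, mean, variance, skewness, bimodality) can be computed directly from a list of star ratings. The sketch below is illustrative only: the paper does not specify its formulas, so population moments and Sarle's bimodality coefficient are assumptions here, not the authors' method.

```python
from statistics import mean, pvariance

def rating_descriptives(ratings):
    """Summary statistics of a rating distribution: count, mean,
    variance, skewness, and Sarle's bimodality coefficient
    (one common bimodality measure; assumed, not taken from the paper)."""
    n = len(ratings)
    mu = mean(ratings)
    var = pvariance(ratings, mu)
    sd = var ** 0.5
    # population skewness: third standardized moment
    skew = sum((r - mu) ** 3 for r in ratings) / (n * sd ** 3)
    # excess kurtosis: fourth standardized moment minus 3
    kurt = sum((r - mu) ** 4 for r in ratings) / (n * sd ** 4) - 3
    # Sarle's coefficient: values above ~0.555 hint at bimodality
    bimodality = (skew ** 2 + 1) / (kurt + 3)
    return {"n": n, "mean": mu, "variance": var,
            "skewness": skew, "bimodality": bimodality}

# A polarized "love it or hate it" distribution of 1- and 5-star ratings
# yields a high bimodality coefficient despite a middling mean.
print(rating_descriptives([1, 1, 1, 2, 5, 5, 5, 5]))
```

A distribution like this illustrates why the mean alone can mislead: two items with the same average rating can differ sharply in variance and bimodality, which is exactly the kind of sensitivity the conjoint experiments probe.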