{"title":"Distributional Fairness-aware Recommendation","authors":"Hao Yang, Xian Wu, Zhaopeng Qiu, Yefeng Zheng, Xu Chen","doi":"10.1145/3652854","DOIUrl":null,"url":null,"abstract":"<p>Fairness has been gradually recognized as a significant problem in the recommendation domain. Previous models usually achieve fairness by reducing the average performance gap between different user groups. However, the average performance may not sufficiently represent all the characteristics of the performances in a user group. Thus, equivalent average performance may not mean the recommender model is fair, for example, the variance of the performances can be different. To alleviate this problem, in this paper, we define a novel type of fairness, where we require that the performance distributions across different user groups should be similar. We prove that with the same performance distribution, the numerical characteristics of the group performance, including the expectation, variance and any higher order moment, are also the same. To achieve distributional fairness, we propose a generative and adversarial training framework. In specific, we regard the recommender model as the generator to compute the performance for each user in different groups, and then we deploy a discriminator to judge which group the performance is drawn from. By iteratively optimizing the generator and the discriminator, we can theoretically prove that the optimal generator (the recommender model) can indeed lead to the equivalent performance distributions. To smooth the adversarial training process, we propose a novel dual curriculum learning strategy for optimal scheduling of training samples. Additionally, we tailor our framework to better suit top-N recommendation tasks by incorporating softened ranking metrics as measures of performance discrepancies. We conduct extensive experiments based on real-world datasets to demonstrate the effectiveness of our model.</p>","PeriodicalId":50936,"journal":{"name":"ACM Transactions on Information Systems","volume":null,"pages":null},"PeriodicalIF":5.4000,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Information Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3652854","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Fairness has gradually been recognized as a significant problem in the recommendation domain. Previous models usually achieve fairness by reducing the average performance gap between different user groups. However, the average performance may not sufficiently represent all the characteristics of the performances within a user group. Thus, equivalent average performance does not necessarily mean the recommender model is fair; for example, the variance of the performances can differ. To alleviate this problem, in this paper we define a novel type of fairness that requires the performance distributions across different user groups to be similar. We prove that with the same performance distribution, the numerical characteristics of the group performance, including the expectation, variance, and any higher-order moment, are also the same. To achieve distributional fairness, we propose a generative adversarial training framework. Specifically, we regard the recommender model as the generator that computes the performance for each user in different groups, and then deploy a discriminator to judge which group a performance is drawn from. By iteratively optimizing the generator and the discriminator, we can theoretically prove that the optimal generator (the recommender model) indeed leads to equivalent performance distributions. To smooth the adversarial training process, we propose a novel dual curriculum learning strategy for optimal scheduling of training samples. Additionally, we tailor our framework to better suit top-N recommendation tasks by incorporating softened ranking metrics as measures of performance discrepancies. We conduct extensive experiments on real-world datasets to demonstrate the effectiveness of our model.
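To make the adversarial setup concrete, the sketch below illustrates one possible reading of the abstract: a recommender acts as the generator producing a per-user performance score (here a softened, differentiable ranking proxy standing in for the paper's softened ranking metrics), and a discriminator tries to guess the user's group from that score alone; the generator is then penalized when the discriminator succeeds, pushing the two groups' performance distributions together. This is not the authors' implementation; the `Recommender`, `Discriminator`, `soft_performance` function, hyper-parameters, and toy data are all hypothetical stand-ins, and the dual curriculum scheduling of training samples is omitted.

```python
# Minimal sketch of distributional fairness via adversarial training.
# All names, losses, and data below are illustrative assumptions.
import torch
import torch.nn as nn

class Recommender(nn.Module):  # the "generator": scores all items for a user
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def forward(self, users):                       # (B,) -> (B, n_items)
        return self.user_emb(users) @ self.item_emb.weight.T

def soft_performance(scores, pos_items, temperature=1.0):
    """Softened ranking metric: a differentiable proxy for the reciprocal
    rank of the observed (positive) item, via pairwise sigmoids."""
    pos = scores.gather(1, pos_items.unsqueeze(1))              # (B, 1)
    approx_rank = 1.0 + torch.sigmoid((scores - pos) / temperature).sum(1)
    return 1.0 / approx_rank                                    # (B,)

class Discriminator(nn.Module):  # guesses which group a performance came from
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, perf):                        # (B,) -> (B,) logits
        return self.net(perf.unsqueeze(1)).squeeze(1)

# Hypothetical toy setup: 100 users split into two groups, 50 items.
n_users, n_items = 100, 50
rec, disc = Recommender(n_users, n_items), Discriminator()
opt_g = torch.optim.Adam(rec.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

users = torch.arange(n_users)
groups = (users >= 50).float()                  # group label (0/1) per user
pos_items = torch.randint(n_items, (n_users,))  # one observed item per user

for step in range(200):
    perf = soft_performance(rec(users), pos_items)   # per-user performance

    # Discriminator step: predict the group from the performance alone.
    d_loss = bce(disc(perf.detach()), groups)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: a recommendation objective plus an adversarial term
    # that rewards fooling the discriminator, i.e. making the two groups'
    # performance distributions indistinguishable.
    rec_loss = -perf.mean()                          # maximize the soft metric
    adv_loss = -bce(disc(perf), groups)              # fool the discriminator
    g_loss = rec_loss + 0.5 * adv_loss
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

At the adversarial optimum described in the abstract, the discriminator cannot do better than chance, which corresponds to the groups' performance distributions (and hence all their moments) coinciding.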
Journal description:
The ACM Transactions on Information Systems (TOIS) publishes papers on information retrieval (such as search engines and recommender systems) that contain:
new principled information retrieval models or algorithms with sound empirical validation;
observational, experimental and/or theoretical studies yielding new insights into information retrieval or information seeking;
accounts of applications of existing information retrieval techniques that shed light on the strengths and weaknesses of the techniques;
formalization of new information retrieval or information seeking tasks and of methods for evaluating the performance on those tasks;
development of content (text, image, speech, video, etc) analysis methods to support information retrieval and information seeking;
development of computational models of user information preferences and interaction behaviors;
creation and analysis of evaluation methodologies for information retrieval and information seeking; or
surveys of existing work that propose a significant synthesis.
The information retrieval scope of ACM Transactions on Information Systems (TOIS) appeals to industry practitioners for its wealth of creative ideas, and to academic researchers for its descriptions of their colleagues' work.