{"title":"FairGap:通过生成反事实图进行公平感知推荐","authors":"Wei Chen, Yiqing Wu, Zhao Zhang, Fuzhen Zhuang, Zhongshi He, Ruobing Xie, Feng xia","doi":"10.1145/3638352","DOIUrl":null,"url":null,"abstract":"The emergence of Graph Neural Networks (GNNs) has greatly advanced the development of recommendation systems. Recently, many researchers have leveraged GNN-based models to learn fair representations for users and items. However, current GNN-based models suffer from biased user-item interaction data, which negatively impacts recommendation fairness. Although there have been several studies employed adversarial learning to mitigate this issue in recommendation systems, they mostly focus on modifying the model training approach with fairness regularization and neglect direct intervention of biased interaction. Different from these models, this paper introduces a novel perspective by directly intervening in observed interactions to generate a counterfactual graph (called FairGap) that is not influenced by sensitive node attributes, enabling us to learn fair representations for users and items easily. We design the FairGap to answer the key counterfactual question: “ Would interactions with an item remain unchanged if user’s sensitive attributes were concealed? ”. We also provide theoretical proofs to show that our learning strategy via the counterfactual graph is unbiased in expectation. Moreover, we propose a fairness-enhancing mechanism to continuously improve user fairness in the graph-based recommendation. Extensive experimental results against state-of-the-art competitors and base models on three real-world datasets validate the effectiveness of our proposed model.","PeriodicalId":50936,"journal":{"name":"ACM Transactions on Information Systems","volume":null,"pages":null},"PeriodicalIF":5.4000,"publicationDate":"2023-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FairGap: Fairness-aware Recommendation via Generating Counterfactual Graph\",\"authors\":\"Wei Chen, Yiqing Wu, Zhao Zhang, Fuzhen Zhuang, Zhongshi He, Ruobing Xie, Feng xia\",\"doi\":\"10.1145/3638352\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The emergence of Graph Neural Networks (GNNs) has greatly advanced the development of recommendation systems. Recently, many researchers have leveraged GNN-based models to learn fair representations for users and items. However, current GNN-based models suffer from biased user-item interaction data, which negatively impacts recommendation fairness. Although there have been several studies employed adversarial learning to mitigate this issue in recommendation systems, they mostly focus on modifying the model training approach with fairness regularization and neglect direct intervention of biased interaction. Different from these models, this paper introduces a novel perspective by directly intervening in observed interactions to generate a counterfactual graph (called FairGap) that is not influenced by sensitive node attributes, enabling us to learn fair representations for users and items easily. We design the FairGap to answer the key counterfactual question: “ Would interactions with an item remain unchanged if user’s sensitive attributes were concealed? ”. We also provide theoretical proofs to show that our learning strategy via the counterfactual graph is unbiased in expectation. 
Moreover, we propose a fairness-enhancing mechanism to continuously improve user fairness in the graph-based recommendation. Extensive experimental results against state-of-the-art competitors and base models on three real-world datasets validate the effectiveness of our proposed model.\",\"PeriodicalId\":50936,\"journal\":{\"name\":\"ACM Transactions on Information Systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":5.4000,\"publicationDate\":\"2023-12-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Information Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1145/3638352\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Information Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3638352","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
FairGap: Fairness-aware Recommendation via Generating Counterfactual Graph
The emergence of Graph Neural Networks (GNNs) has greatly advanced the development of recommendation systems. Recently, many researchers have leveraged GNN-based models to learn fair representations for users and items. However, current GNN-based models suffer from biased user-item interaction data, which negatively impacts recommendation fairness. Although several studies have employed adversarial learning to mitigate this issue in recommendation systems, they mostly focus on modifying the model training procedure with fairness regularization and neglect direct intervention on the biased interactions. Different from these models, this paper introduces a novel perspective by directly intervening on the observed interactions to generate a counterfactual graph (called FairGap) that is not influenced by sensitive node attributes, enabling us to learn fair representations for users and items easily. We design FairGap to answer the key counterfactual question: “Would interactions with an item remain unchanged if the user’s sensitive attributes were concealed?” We also provide theoretical proofs to show that our learning strategy via the counterfactual graph is unbiased in expectation. Moreover, we propose a fairness-enhancing mechanism to continuously improve user fairness in graph-based recommendation. Extensive experimental results against state-of-the-art competitors and base models on three real-world datasets validate the effectiveness of our proposed model.
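The abstract only describes the approach at a high level, so the following Python sketch (using numpy) is a hypothetical illustration of the core idea rather than the authors' implementation: intervene on the observed user-item interactions so that they are decorrelated from the users' sensitive attribute, then run a standard LightGCN-style propagation over the resulting counterfactual graph. The reweighting rule, the function names (counterfactual_graph, propagate), and the toy data are all assumptions made for this sketch.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: 6 users, 5 items, binary interaction matrix R, binary sensitive attribute s.
# Both are invented here purely for illustration.
R = rng.integers(0, 2, size=(6, 5)).astype(float)   # observed user-item interactions
s = np.array([0, 0, 0, 1, 1, 1])                    # hypothetical sensitive attribute per user


def counterfactual_graph(R, s):
    """Reweight each item's edges so that every sensitive group's interaction rate
    matches the attribute-agnostic rate, approximating 'interactions as if the
    sensitive attribute were concealed'. A simplification, not the paper's method."""
    R_cf = R.copy()
    for item in range(R.shape[1]):
        overall_rate = R[:, item].mean()             # rate ignoring the sensitive attribute
        for g in np.unique(s):
            mask = (s == g)
            group_rate = R[mask, item].mean()        # group-conditional interaction rate
            if group_rate > 0:
                # Scale this group's edges toward the attribute-agnostic rate.
                R_cf[mask, item] *= overall_rate / group_rate
    return R_cf


def propagate(R, dim=8, layers=2):
    """LightGCN-style linear embedding propagation over a (possibly counterfactual) graph."""
    n_users, n_items = R.shape
    U = rng.normal(scale=0.1, size=(n_users, dim))   # user embeddings
    V = rng.normal(scale=0.1, size=(n_items, dim))   # item embeddings
    # Symmetric normalization of the bipartite adjacency.
    d_u = np.maximum(R.sum(1, keepdims=True), 1e-8)
    d_i = np.maximum(R.sum(0, keepdims=True), 1e-8)
    A = R / np.sqrt(d_u) / np.sqrt(d_i)
    for _ in range(layers):
        U, V = A @ V, A.T @ U                        # propagate over the graph
    return U, V


R_cf = counterfactual_graph(R, s)
U, V = propagate(R_cf)
scores = U @ V.T                                     # preference scores from the counterfactual graph
print(scores.shape)

In the paper, the intervention is designed to answer the counterfactual question above and is paired with a fairness-enhancing mechanism during training; this snippet only conveys the intuition of learning representations from a graph that hides the sensitive attribute.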
Journal Introduction:
The ACM Transactions on Information Systems (TOIS) publishes papers on information retrieval (such as search engines, recommender systems) that contain:
new principled information retrieval models or algorithms with sound empirical validation;
observational, experimental and/or theoretical studies yielding new insights into information retrieval or information seeking;
accounts of applications of existing information retrieval techniques that shed light on the strengths and weaknesses of the techniques;
formalization of new information retrieval or information seeking tasks and of methods for evaluating the performance on those tasks;
development of content (text, image, speech, video, etc.) analysis methods to support information retrieval and information seeking;
development of computational models of user information preferences and interaction behaviors;
creation and analysis of evaluation methodologies for information retrieval and information seeking; or
surveys of existing work that propose a significant synthesis.
The information retrieval scope of ACM Transactions on Information Systems (TOIS) appeals to industry practitioners for its wealth of creative ideas, and to academic researchers for its descriptions of their colleagues' work.