FairCoRe: Fairness-Aware Recommendation Through Counterfactual Representation Learning

Impact Factor: 10.4 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Chenzhong Bin;Wenqiang Liu;Feng Zhang;Liang Chang;Tianlong Gu
{"title":"FairCoRe: Fairness-Aware Recommendation Through Counterfactual Representation Learning","authors":"Chenzhong Bin;Wenqiang Liu;Feng Zhang;Liang Chang;Tianlong Gu","doi":"10.1109/TKDE.2025.3557501","DOIUrl":null,"url":null,"abstract":"Eliminating bias from data representations is crucial to ensure fairness in recommendation. Existing studies primarily focus on weakening the correlation between data representations and sensitive attributes, yet may inadvertently steer the user representations toward another potential bias direction of the target attribute. Furthermore, they often overlook the impact of user preferences on capturing sensitive information, incurring inadequate bias elimination. In this paper, we propose a <bold>Fair</b> <bold>Co</b>unterfactual <bold>Re</b>presentations (<bold>FairCoRe</b>) learning framework, which aims to ensure the neutrality of representations among all bias directions. First, we intervene on sensitive attributes to construct a counterfactual scenario. Then, two opposing attribute prediction tasks are respectively performed in ground-truth and counterfactual scenarios to encode sensitive information along different bias directions. Second, we design a bias-aware enhancement learning method that quantifies the respective correlation of user preferences and sensitive attributes to enhance sensitive information encoding. Finally, we introduce two mutual information optimization methods that optimize the representations to capture users’ interests and disentangle sensitive factors. Moreover, we propose an attribute neutralization strategy that refines the learned representations, ensuring sensitive attribute neutrality. Extensive experiments demonstrate that our method achieves the optimal fairness and competitive accuracy compared to state-of-the-art methods.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 7","pages":"4049-4062"},"PeriodicalIF":10.4000,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Knowledge and Data Engineering","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10948167/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Eliminating bias from data representations is crucial to ensuring fairness in recommendation. Existing studies primarily focus on weakening the correlation between data representations and sensitive attributes, yet they may inadvertently steer user representations toward another potential bias direction of the target attribute. Furthermore, they often overlook the impact of user preferences on capturing sensitive information, resulting in inadequate bias elimination. In this paper, we propose a Fair Counterfactual Representations (FairCoRe) learning framework, which aims to ensure the neutrality of representations across all bias directions. First, we intervene on sensitive attributes to construct a counterfactual scenario. Then, two opposing attribute prediction tasks are performed in the ground-truth and counterfactual scenarios, respectively, to encode sensitive information along different bias directions. Second, we design a bias-aware enhancement learning method that quantifies the respective correlations between user preferences and sensitive attributes to enhance sensitive information encoding. Finally, we introduce two mutual information optimization methods that optimize the representations to capture users’ interests and disentangle sensitive factors. Moreover, we propose an attribute neutralization strategy that refines the learned representations, ensuring sensitive attribute neutrality. Extensive experiments demonstrate that our method achieves optimal fairness and competitive accuracy compared to state-of-the-art methods.
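The abstract outlines a multi-step training pipeline: a counterfactual intervention on sensitive attributes, two opposing attribute predictors for the factual and counterfactual scenarios, and mutual-information-based objectives plus an attribute neutralization step. The paper itself is not reproduced here, so the following PyTorch sketch is only a hypothetical illustration of what such a pipeline could look like; the module names, the label-cycling intervention, and the uniform-posterior neutrality penalty (a simple proxy for the paper's mutual information objectives) are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a FairCoRe-style objective, based only on the abstract.
# The flip-based intervention and the neutrality penalty are illustrative
# assumptions, not the authors' method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CounterfactualFairEncoder(nn.Module):
    def __init__(self, num_users: int, dim: int = 64, num_attr_classes: int = 2):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        # Two opposing attribute predictors: one for the ground-truth scenario,
        # one for the counterfactual scenario with intervened attributes.
        self.head_factual = nn.Linear(dim, num_attr_classes)
        self.head_counterfactual = nn.Linear(dim, num_attr_classes)

    def forward(self, user_ids: torch.Tensor) -> torch.Tensor:
        return self.user_emb(user_ids)

def faircore_style_loss(model: CounterfactualFairEncoder,
                        user_ids: torch.Tensor,
                        sensitive_attr: torch.Tensor,
                        num_attr_classes: int = 2) -> torch.Tensor:
    """Encode sensitive information along both bias directions, then push the
    representation toward neutrality between them (illustrative proxy only)."""
    z = model(user_ids)
    # Ground-truth scenario: predict the observed sensitive attribute.
    loss_factual = F.cross_entropy(model.head_factual(z), sensitive_attr)
    # Counterfactual scenario: intervene on the attribute (here, cycle the
    # label) and predict the intervened value.
    attr_cf = (sensitive_attr + 1) % num_attr_classes
    loss_counterfactual = F.cross_entropy(model.head_counterfactual(z), attr_cf)
    # Neutrality proxy: average the two heads' logits and penalize deviation
    # from a uniform posterior, standing in for the paper's mutual information
    # objectives and attribute neutralization strategy.
    logits = 0.5 * (model.head_factual(z) + model.head_counterfactual(z))
    log_probs = F.log_softmax(logits, dim=-1)
    uniform = torch.full_like(log_probs, 1.0 / num_attr_classes)
    loss_neutral = F.kl_div(log_probs, uniform, reduction="batchmean")
    return loss_factual + loss_counterfactual + loss_neutral

# Toy usage: 100 users with a binary sensitive attribute.
model = CounterfactualFairEncoder(num_users=100)
user_ids = torch.randint(0, 100, (32,))
sensitive_attr = torch.randint(0, 2, (32,))
loss = faircore_style_loss(model, user_ids, sensitive_attr)
loss.backward()
```

In the paper, a recommendation loss and the bias-aware enhancement and mutual information terms would replace the simple proxy above; the sketch only conveys the shape of training against both bias directions at once.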
Source Journal
IEEE Transactions on Knowledge and Data Engineering
Category: Engineering Technology - Engineering: Electrical & Electronic
CiteScore: 11.70
Self-citation rate: 3.40%
Articles per year: 515
Review time: 6 months
Journal description: The IEEE Transactions on Knowledge and Data Engineering encompasses knowledge and data engineering aspects within computer science, artificial intelligence, electrical engineering, computer engineering, and related fields. It provides an interdisciplinary platform for disseminating new developments in knowledge and data engineering and explores the practicality of these concepts in both hardware and software. Specific areas covered include knowledge-based and expert systems, AI techniques for knowledge and data management, tools, and methodologies, distributed processing, real-time systems, architectures, data management practices, database design, query languages, security, fault tolerance, statistical databases, algorithms, performance evaluation, and applications.