Fairness-aware Federated Matrix Factorization

Shuchang Liu, Yingqiang Ge, Shuyuan Xu, Yongfeng Zhang, A. Marian
{"title":"Fairness-aware Federated Matrix Factorization","authors":"Shuchang Liu, Yingqiang Ge, Shuyuan Xu, Yongfeng Zhang, A. Marian","doi":"10.1145/3523227.3546771","DOIUrl":null,"url":null,"abstract":"Achieving fairness over different user groups in recommender systems is an important problem. The majority of existing works achieve fairness through constrained optimization that combines the recommendation loss and the fairness constraint. To achieve fairness, the algorithm usually needs to know each user’s group affiliation feature such as gender or race. However, such involved user group feature is usually sensitive and requires protection. In this work, we seek a federated learning solution for the fair recommendation problem and identify the main challenge as an algorithmic conflict between the global fairness objective and the localized federated optimization process. On one hand, the fairness objective usually requires access to all users’ group information. On the other hand, the federated learning systems restrain the personal data in each user’s local space. As a resolution, we propose to communicate group statistics during federated optimization and use differential privacy techniques to avoid exposure of users’ group information when users require privacy protection. We illustrate the theoretical bounds of the noisy signal used in our method that aims to enforce privacy without overwhelming the aggregated statistics. Empirical results show that federated learning may naturally improve user group fairness and the proposed framework can effectively control this fairness with low communication overheads.","PeriodicalId":443279,"journal":{"name":"Proceedings of the 16th ACM Conference on Recommender Systems","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"17","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 16th ACM Conference on Recommender Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3523227.3546771","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 17

Abstract

Achieving fairness across different user groups in recommender systems is an important problem. Most existing works achieve fairness through constrained optimization that combines the recommendation loss with a fairness constraint. To enforce fairness, the algorithm usually needs to know each user's group affiliation, such as gender or race. However, such group features are often sensitive and require protection. In this work, we seek a federated learning solution to the fair recommendation problem and identify the main challenge as an algorithmic conflict between the global fairness objective and the localized federated optimization process. On one hand, the fairness objective usually requires access to all users' group information. On the other hand, a federated learning system confines each user's personal data to that user's local space. To resolve this conflict, we propose to communicate group statistics during federated optimization and to use differential privacy techniques to avoid exposing users' group information when they require privacy protection. We derive theoretical bounds on the noise signal used in our method, which aims to enforce privacy without overwhelming the aggregated statistics. Empirical results show that federated learning may naturally improve user group fairness and that the proposed framework can effectively control this fairness with low communication overhead.
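The abstract does not include the algorithm itself, so the following is only a minimal sketch of the mechanism it describes: federated matrix factorization in which each client keeps its user embedding, ratings, and group label local, sends item-embedding gradients to the server, and additionally reports Laplace-noised (loss, count) group statistics so the server can reweight the objective toward the worse-off group. All names (Client, noisy_group_stats), the specific loss-reweighting rule, and the Laplace noise scale are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ITEMS, DIM, N_GROUPS = 30, 8, 2
LR, EPS = 0.05, 1.0   # SGD step size; Laplace privacy budget for group stats

class Client:
    """Holds one user's private data: ratings, embedding, group label."""

    def __init__(self, group, ratings):
        self.group = group                    # sensitive; never sent in the clear
        self.ratings = ratings                # {item_id: rating}; stays local
        self.u = rng.normal(0.0, 0.1, DIM)    # private user embedding

    def step(self, V, group_weights):
        # One local SGD pass; the fairness weight for this client's group
        # rescales its loss, steering training toward the worse-off group.
        w = group_weights[self.group]
        item_grads, loss = {}, 0.0
        for i, r in self.ratings.items():
            err = float(self.u @ V[i]) - r
            loss += err * err
            item_grads[i] = w * 2.0 * err * self.u
            self.u -= LR * w * 2.0 * err * V[i]
        return item_grads, loss

    def noisy_group_stats(self, loss):
        # Encode (loss, count) in per-group slots and add Laplace noise, so
        # the server can estimate group averages without seeing the label.
        stats = np.zeros((N_GROUPS, 2))
        stats[self.group] = (loss, 1.0)
        return stats + rng.laplace(0.0, 1.0 / EPS, stats.shape)

# ---- server side: synthetic clients, shared item embeddings ----
clients = [
    Client(k % N_GROUPS,
           {int(i): float(rng.integers(1, 6))
            for i in rng.choice(N_ITEMS, 5, replace=False)})
    for k in range(20)
]
V = rng.normal(0.0, 0.1, (N_ITEMS, DIM))
group_weights = np.ones(N_GROUPS)

for _ in range(50):
    agg = np.zeros((N_GROUPS, 2))
    item_grads = np.zeros_like(V)
    for c in clients:
        grads, loss = c.step(V, group_weights)
        for i, g in grads.items():
            item_grads[i] += g
        agg += c.noisy_group_stats(loss)      # only noisy stats leave the client
    V -= LR * item_grads / len(clients)
    avg = agg[:, 0] / np.maximum(agg[:, 1], 1.0)         # noisy per-group avg loss
    group_weights = np.clip(avg / avg.mean(), 0.5, 2.0)  # upweight the worse group

print("noisy per-group average losses:", np.round(avg, 3))
```

Note the communication pattern this sketch assumes: only item-embedding gradients and the noisy (loss, count) pairs ever leave a client, while the user embedding and the group label stay local, which mirrors the abstract's stated resolution of the conflict between the global fairness objective and local data confinement.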