Social-Aware Clustered Federated Learning With Customized Privacy Preservation

IF 3.0 | CAS Tier 3, Computer Science | JCR Q2, COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
Yuntao Wang;Zhou Su;Yanghe Pan;Tom H. Luan;Ruidong Li;Shui Yu
{"title":"具有定制隐私保护功能的社交感知聚类联合学习","authors":"Yuntao Wang;Zhou Su;Yanghe Pan;Tom H. Luan;Ruidong Li;Shui Yu","doi":"10.1109/TNET.2024.3379439","DOIUrl":null,"url":null,"abstract":"A key feature of federated learning (FL) is to preserve the data privacy of end users. However, there still exist potential privacy leakage in exchanging gradients under FL. As a result, recent research often explores the differential privacy (DP) approaches to add noises to the computing results to address privacy concerns with low overheads, which however degrade the model performance. In this paper, we strike the balance of data privacy and efficiency by utilizing the pervasive social connections between users. Specifically, we propose SCFL, a novel Social-aware Clustered Federated Learning scheme, where mutually trusted individuals can freely form a social cluster and aggregate their raw model updates (e.g., gradients) inside each cluster before uploading to the cloud for global aggregation. By mixing model updates in a social group, adversaries can only eavesdrop the social-layer combined results, but not the privacy of individuals. As such, SCFL considerably enhances model utility without sacrificing privacy in a low-cost and highly feasible manner. We unfold the design of SCFL in three steps. i) Stable social cluster formation. Considering users’ heterogeneous training samples and data distributions, we formulate the optimal social cluster formation problem as a federation game and devise a fair revenue allocation mechanism to resist free-riders. ii) Differentiated trust-privacy mapping. For the clusters with low mutual trust, we design a customizable privacy preservation mechanism to adaptively sanitize participants’ model updates depending on social trust degrees. iii) Distributed convergence. A distributed two-sided matching algorithm is devised to attain an optimized disjoint partition with Nash-stable convergence. Experiments on Facebook network and MNIST/CIFAR-10 datasets validate that our SCFL can effectively enhance learning utility, improve user payoff, and enforce customizable privacy protection.","PeriodicalId":13443,"journal":{"name":"IEEE/ACM Transactions on Networking","volume":"32 5","pages":"3654-3668"},"PeriodicalIF":3.0000,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Social-Aware Clustered Federated Learning With Customized Privacy Preservation\",\"authors\":\"Yuntao Wang;Zhou Su;Yanghe Pan;Tom H. Luan;Ruidong Li;Shui Yu\",\"doi\":\"10.1109/TNET.2024.3379439\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A key feature of federated learning (FL) is to preserve the data privacy of end users. However, there still exist potential privacy leakage in exchanging gradients under FL. As a result, recent research often explores the differential privacy (DP) approaches to add noises to the computing results to address privacy concerns with low overheads, which however degrade the model performance. In this paper, we strike the balance of data privacy and efficiency by utilizing the pervasive social connections between users. Specifically, we propose SCFL, a novel Social-aware Clustered Federated Learning scheme, where mutually trusted individuals can freely form a social cluster and aggregate their raw model updates (e.g., gradients) inside each cluster before uploading to the cloud for global aggregation. 
By mixing model updates in a social group, adversaries can only eavesdrop the social-layer combined results, but not the privacy of individuals. As such, SCFL considerably enhances model utility without sacrificing privacy in a low-cost and highly feasible manner. We unfold the design of SCFL in three steps. i) Stable social cluster formation. Considering users’ heterogeneous training samples and data distributions, we formulate the optimal social cluster formation problem as a federation game and devise a fair revenue allocation mechanism to resist free-riders. ii) Differentiated trust-privacy mapping. For the clusters with low mutual trust, we design a customizable privacy preservation mechanism to adaptively sanitize participants’ model updates depending on social trust degrees. iii) Distributed convergence. A distributed two-sided matching algorithm is devised to attain an optimized disjoint partition with Nash-stable convergence. Experiments on Facebook network and MNIST/CIFAR-10 datasets validate that our SCFL can effectively enhance learning utility, improve user payoff, and enforce customizable privacy protection.\",\"PeriodicalId\":13443,\"journal\":{\"name\":\"IEEE/ACM Transactions on Networking\",\"volume\":\"32 5\",\"pages\":\"3654-3668\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-10-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE/ACM Transactions on Networking\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10704033/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE/ACM Transactions on Networking","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10704033/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

A key feature of federated learning (FL) is that it preserves the data privacy of end users. However, exchanging gradients under FL can still leak private information. As a result, recent research often explores differential privacy (DP) approaches that add noise to the computed results to address privacy concerns with low overhead, which however degrades model performance. In this paper, we strike a balance between data privacy and efficiency by utilizing the pervasive social connections between users. Specifically, we propose SCFL, a novel Social-aware Clustered Federated Learning scheme, where mutually trusted individuals can freely form a social cluster and aggregate their raw model updates (e.g., gradients) inside each cluster before uploading them to the cloud for global aggregation. By mixing model updates within a social group, adversaries can only eavesdrop on the combined social-layer results, not the private updates of individuals. As such, SCFL considerably enhances model utility without sacrificing privacy, in a low-cost and highly feasible manner. We unfold the design of SCFL in three steps. i) Stable social cluster formation: considering users’ heterogeneous training samples and data distributions, we formulate the optimal social cluster formation problem as a federation game and devise a fair revenue allocation mechanism to resist free-riders. ii) Differentiated trust-privacy mapping: for clusters with low mutual trust, we design a customizable privacy preservation mechanism that adaptively sanitizes participants’ model updates depending on social trust degrees. iii) Distributed convergence: a distributed two-sided matching algorithm is devised to attain an optimized disjoint partition with Nash-stable convergence. Experiments on a Facebook network and the MNIST/CIFAR-10 datasets validate that SCFL effectively enhances learning utility, improves user payoff, and enforces customizable privacy protection.
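To make the cluster-then-global aggregation flow described in the abstract concrete, below is a minimal, illustrative Python sketch. It is not the paper's implementation: the trust-to-noise mapping, the function names (sanitize, cluster_aggregate, global_aggregate), and the FedAvg-style weighting are assumptions made for illustration only; SCFL's actual federation game, trust-privacy mapping, and matching algorithm are specified in the paper.

```python
# Illustrative sketch only: NOT the SCFL implementation from the paper.
# Assumes each model update is a flat NumPy vector and that lower social
# trust maps to a larger Gaussian noise scale (the exact mapping in SCFL
# may differ).
import numpy as np

def sanitize(update, trust, max_sigma=1.0):
    """Add trust-dependent Gaussian noise; full trust (1.0) adds no noise."""
    sigma = max_sigma * (1.0 - trust)
    return update + np.random.normal(0.0, sigma, size=update.shape)

def cluster_aggregate(updates, trusts):
    """Mix (average) members' sanitized updates inside one social cluster."""
    mixed = [sanitize(u, t) for u, t in zip(updates, trusts)]
    return np.mean(mixed, axis=0)

def global_aggregate(cluster_updates, sizes):
    """Cloud-side aggregation of cluster results, weighted by cluster size."""
    total = sum(sizes)
    return sum(s / total * cu for s, cu in zip(sizes, cluster_updates))

# Toy usage: two social clusters of clients with 4-dimensional "gradients".
rng = np.random.default_rng(0)
cluster_a = [rng.normal(size=4) for _ in range(3)]   # high-trust cluster
cluster_b = [rng.normal(size=4) for _ in range(2)]   # lower-trust cluster
agg_a = cluster_aggregate(cluster_a, trusts=[1.0, 1.0, 1.0])
agg_b = cluster_aggregate(cluster_b, trusts=[0.6, 0.7])
global_update = global_aggregate([agg_a, agg_b], sizes=[3, 2])
print(global_update)
```

In this sketch the cloud only ever sees the per-cluster averages, which mirrors the abstract's point that an eavesdropper observes the social-layer combined result rather than any individual's raw gradient.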
Source journal
IEEE/ACM Transactions on Networking (Engineering Technology - Telecommunications)
CiteScore: 8.20
Self-citation rate: 5.40%
Articles per year: 246
Review time: 4-8 weeks
Journal description: The IEEE/ACM Transactions on Networking’s high-level objective is to publish high-quality, original research results derived from theoretical or experimental exploration of the area of communication/computer networking, covering all sorts of information transport networks over all sorts of physical layer technologies, both wireline (all kinds of guided media: e.g., copper, optical) and wireless (e.g., radio-frequency, acoustic (e.g., underwater), infra-red), or hybrids of these. The journal welcomes applied contributions reporting on novel experiences and experiments with actual systems.