Utility-Enhanced Personalized Privacy Preservation in Hierarchical Federated Learning

IF 9.2 | CAS Zone 2, Computer Science | JCR Q1, COMPUTER SCIENCE, INFORMATION SYSTEMS
Jianan Chen;Honglu Jiang;Qin Hu
{"title":"层次联邦学习中实用增强的个性化隐私保护","authors":"Jianan Chen;Honglu Jiang;Qin Hu","doi":"10.1109/TMC.2025.3531919","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) is a distributed learning framework that allows clients to jointly train a model by uploading parameter updates rather than sharing local data. FL deployed on a client-edge-cloud hierarchical architecture, named Hierarchical Federated Learning (HFL), can accelerate model training and accommodate more clients with reduced communication cost via edge aggregation. Unfortunately, HFL suffers from privacy risks since the submitted parameters from clients are vulnerable to privacy attacks. To address this issue, we propose a novel Differential Privacy (DP) definition tailored for HFL, i.e., Group Local Differential Privacy (GLDP). We design the Sampling-Randomizing-Shuffling (SRS) mechanism to implement GLDP in HFL, where the sampling process is employed to achieve a stronger level of privacy protection with less noise added. By combining the randomized response and the shuffling mechanism, our proposed SRS mechanism can achieve client-level personalization within <inline-formula><tex-math>$\\rho _{k}$</tex-math></inline-formula>-GLDP for privacy preservation while balancing model performance and privacy protection in HFL. Privacy analysis and convergence analysis are conducted to provide theoretical performance guarantees. Experimental results based on real-world datasets verify the effectiveness of SRS.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 6","pages":"5264-5279"},"PeriodicalIF":9.2000,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Utility-Enhanced Personalized Privacy Preservation in Hierarchical Federated Learning\",\"authors\":\"Jianan Chen;Honglu Jiang;Qin Hu\",\"doi\":\"10.1109/TMC.2025.3531919\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning (FL) is a distributed learning framework that allows clients to jointly train a model by uploading parameter updates rather than sharing local data. FL deployed on a client-edge-cloud hierarchical architecture, named Hierarchical Federated Learning (HFL), can accelerate model training and accommodate more clients with reduced communication cost via edge aggregation. Unfortunately, HFL suffers from privacy risks since the submitted parameters from clients are vulnerable to privacy attacks. To address this issue, we propose a novel Differential Privacy (DP) definition tailored for HFL, i.e., Group Local Differential Privacy (GLDP). We design the Sampling-Randomizing-Shuffling (SRS) mechanism to implement GLDP in HFL, where the sampling process is employed to achieve a stronger level of privacy protection with less noise added. By combining the randomized response and the shuffling mechanism, our proposed SRS mechanism can achieve client-level personalization within <inline-formula><tex-math>$\\\\rho _{k}$</tex-math></inline-formula>-GLDP for privacy preservation while balancing model performance and privacy protection in HFL. Privacy analysis and convergence analysis are conducted to provide theoretical performance guarantees. 
Experimental results based on real-world datasets verify the effectiveness of SRS.\",\"PeriodicalId\":50389,\"journal\":{\"name\":\"IEEE Transactions on Mobile Computing\",\"volume\":\"24 6\",\"pages\":\"5264-5279\"},\"PeriodicalIF\":9.2000,\"publicationDate\":\"2025-01-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Mobile Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10847868/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Mobile Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10847868/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Federated learning (FL) is a distributed learning framework that allows clients to jointly train a model by uploading parameter updates rather than sharing local data. FL deployed on a client-edge-cloud hierarchical architecture, named Hierarchical Federated Learning (HFL), can accelerate model training and accommodate more clients with reduced communication cost via edge aggregation. Unfortunately, HFL suffers from privacy risks since the submitted parameters from clients are vulnerable to privacy attacks. To address this issue, we propose a novel Differential Privacy (DP) definition tailored for HFL, i.e., Group Local Differential Privacy (GLDP). We design the Sampling-Randomizing-Shuffling (SRS) mechanism to implement GLDP in HFL, where the sampling process is employed to achieve a stronger level of privacy protection with less noise added. By combining the randomized response and the shuffling mechanism, our proposed SRS mechanism can achieve client-level personalization within $\rho _{k}$-GLDP for privacy preservation while balancing model performance and privacy protection in HFL. Privacy analysis and convergence analysis are conducted to provide theoretical performance guarantees. Experimental results based on real-world datasets verify the effectiveness of SRS.
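The abstract describes SRS only at a high level and does not spell out its concrete algorithm, so the snippet below is a minimal toy sketch of the three generic steps it names: client-side sampling, randomized response on sign-quantized updates, and an edge-level shuffle before aggregation. The function names and parameters (client_report, edge_shuffle_aggregate, sample_rate, p_keep) are invented for illustration and do not reproduce the paper's actual mechanism or its GLDP privacy accounting.

```python
# Hedged illustration only: not the paper's SRS mechanism, just the generic
# sampling -> randomized response -> shuffling pipeline the abstract mentions.
import numpy as np

rng = np.random.default_rng(0)

def client_report(update, sample_rate=0.1, p_keep=0.75):
    """Sample a fraction of coordinates, sign-quantize them, and apply binary
    randomized response (report the true sign with probability p_keep)."""
    k = max(1, int(sample_rate * update.size))
    idx = rng.choice(update.size, size=k, replace=False)   # sampling step
    signs = np.sign(update[idx]).astype(int)
    signs[signs == 0] = 1                                   # break ties arbitrarily
    flip = rng.random(k) > p_keep                           # randomizing step
    return idx, np.where(flip, -signs, signs)

def edge_shuffle_aggregate(reports, dim):
    """Shuffle the (index, sign) reports to break the link between a report
    and the client that sent it, then average them into a dense estimate."""
    flat = [(i, s) for idx, signs in reports for i, s in zip(idx, signs)]
    rng.shuffle(flat)                                       # shuffling step
    agg = np.zeros(dim)
    for i, s in flat:
        agg[i] += s
    return agg / max(1, len(reports))

# Toy usage: three clients with random "model updates" of dimension 20.
clients = [rng.normal(size=20) for _ in range(3)]
reports = [client_report(u) for u in clients]
print(edge_shuffle_aggregate(reports, dim=20))
```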
Source Journal
IEEE Transactions on Mobile Computing (Engineering & Technology - Telecommunications)
CiteScore: 12.90
Self-citation rate: 2.50%
Annual article output: 403
Average review time: 6.6 months
Journal Description: IEEE Transactions on Mobile Computing addresses key technical issues related to various aspects of mobile computing. This includes (a) architectures, (b) support services, (c) algorithm/protocol design and analysis, (d) mobile environments, (e) mobile communication systems, (f) applications, and (g) emerging technologies. Topics of interest span a wide range, covering aspects like mobile networks and hosts, mobility management, multimedia, operating system support, power management, online and mobile environments, security, scalability, reliability, and emerging technologies such as wearable computers, body area networks, and wireless sensor networks. The journal serves as a comprehensive platform for advancements in mobile computing research.