ShuffleFL: Addressing Heterogeneity in Multi-Device Federated Learning

IF 3.6 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS
Ran Zhu, Mingkun Yang, Qing Wang
{"title":"ShuffleFL:解决多设备联合学习中的异质性问题","authors":"Ran Zhu, Mingkun Yang, Qing Wang","doi":"10.1145/3659621","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL) has emerged as a privacy-preserving paradigm for collaborative deep learning model training across distributed data silos. Despite its importance, FL faces challenges such as high latency and less effective global models. In this paper, we propose ShuffleFL, an innovative framework stemming from the hierarchical FL, which introduces a user layer between the FL devices and the FL server. ShuffleFL naturally groups devices based on their affiliations, e.g., belonging to the same user, to ease the strict privacy restriction-\"data at the FL devices cannot be shared with others\", thereby enabling the exchange of local samples among them. The user layer assumes a multi-faceted role, not just aggregating local updates but also coordinating data shuffling within affiliated devices. We formulate this data shuffling as an optimization problem, detailing our objectives to align local data closely with device computing capabilities and to ensure a more balanced data distribution at the intra-user devices. Through extensive experiments using realistic device profiles and five non-IID datasets, we demonstrate that ShuffleFL can improve inference accuracy by 2.81% to 7.85% and speed up the convergence by 4.11x to 36.56x when reaching the target accuracy.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":null,"pages":null},"PeriodicalIF":3.6000,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ShuffleFL: Addressing Heterogeneity in Multi-Device Federated Learning\",\"authors\":\"Ran Zhu, Mingkun Yang, Qing Wang\",\"doi\":\"10.1145/3659621\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated Learning (FL) has emerged as a privacy-preserving paradigm for collaborative deep learning model training across distributed data silos. Despite its importance, FL faces challenges such as high latency and less effective global models. In this paper, we propose ShuffleFL, an innovative framework stemming from the hierarchical FL, which introduces a user layer between the FL devices and the FL server. ShuffleFL naturally groups devices based on their affiliations, e.g., belonging to the same user, to ease the strict privacy restriction-\\\"data at the FL devices cannot be shared with others\\\", thereby enabling the exchange of local samples among them. The user layer assumes a multi-faceted role, not just aggregating local updates but also coordinating data shuffling within affiliated devices. We formulate this data shuffling as an optimization problem, detailing our objectives to align local data closely with device computing capabilities and to ensure a more balanced data distribution at the intra-user devices. 
Through extensive experiments using realistic device profiles and five non-IID datasets, we demonstrate that ShuffleFL can improve inference accuracy by 2.81% to 7.85% and speed up the convergence by 4.11x to 36.56x when reaching the target accuracy.\",\"PeriodicalId\":20553,\"journal\":{\"name\":\"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.6000,\"publicationDate\":\"2024-05-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3659621\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3659621","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Federated Learning (FL) has emerged as a privacy-preserving paradigm for collaborative deep learning model training across distributed data silos. Despite its importance, FL faces challenges such as high latency and less effective global models. In this paper, we propose ShuffleFL, an innovative framework stemming from the hierarchical FL, which introduces a user layer between the FL devices and the FL server. ShuffleFL naturally groups devices based on their affiliations, e.g., belonging to the same user, to ease the strict privacy restriction-"data at the FL devices cannot be shared with others", thereby enabling the exchange of local samples among them. The user layer assumes a multi-faceted role, not just aggregating local updates but also coordinating data shuffling within affiliated devices. We formulate this data shuffling as an optimization problem, detailing our objectives to align local data closely with device computing capabilities and to ensure a more balanced data distribution at the intra-user devices. Through extensive experiments using realistic device profiles and five non-IID datasets, we demonstrate that ShuffleFL can improve inference accuracy by 2.81% to 7.85% and speed up the convergence by 4.11x to 36.56x when reaching the target accuracy.
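To make the abstract's two shuffling objectives concrete (sizing each device's local dataset in proportion to its compute capability, and evening out the label distribution across a user's devices), the sketch below shows a toy greedy reassignment in Python. It is an illustration under assumed inputs, not the paper's actual optimization formulation: the function `shuffle_within_user` and its arguments `device_data` and `device_speed` are hypothetical names introduced here.

```python
# Hypothetical sketch of intra-user data shuffling guided by the two goals the
# abstract describes: (1) give each device a data volume proportional to its
# compute capability, and (2) make per-device label distributions more uniform.
# This greedy heuristic is for illustration only; the paper casts shuffling as
# a formal optimization problem whose exact form is not reproduced here.
import random
from collections import defaultdict

def shuffle_within_user(device_data, device_speed, seed=0):
    """Redistribute (sample, label) pairs among one user's devices.

    device_data:  {device_id: [(sample, label), ...]}
    device_speed: {device_id: relative compute capability (float)}
    Returns a new {device_id: [(sample, label), ...]} mapping.
    """
    rng = random.Random(seed)

    # Pool every sample held by this user's devices.
    pool = [item for items in device_data.values() for item in items]
    rng.shuffle(pool)

    # Goal (1): target sizes proportional to compute capability.
    total_speed = sum(device_speed.values())
    targets = {d: round(len(pool) * s / total_speed)
               for d, s in device_speed.items()}

    # Goal (2): deal samples label by label in round-robin order so each
    # device sees a roughly uniform label mix while respecting its target.
    by_label = defaultdict(list)
    for sample, label in pool:
        by_label[label].append((sample, label))

    new_data = {d: [] for d in device_data}
    devices = sorted(device_data, key=lambda d: -device_speed[d])
    i = 0
    for label_items in by_label.values():
        for item in label_items:
            # Skip devices that already reached their target size.
            for _ in range(len(devices)):
                d = devices[i % len(devices)]
                i += 1
                if len(new_data[d]) < targets[d]:
                    new_data[d].append(item)
                    break
            else:
                # All targets met (rounding slack): give leftovers to the fastest device.
                new_data[devices[0]].append(item)
    return new_data

if __name__ == "__main__":
    # Example: two devices of one user; the faster one (2x speed) ends up with ~2/3 of the data.
    data = {"phone": [(x, x % 2) for x in range(30)],
            "watch": [(x, x % 2) for x in range(30, 60)]}
    result = shuffle_within_user(data, {"phone": 2.0, "watch": 1.0})
    print({d: len(v) for d, v in result.items()})  # e.g. {'phone': 40, 'watch': 20}
```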
Source journal
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (Computer Science: Computer Networks and Communications). CiteScore: 9.10; self-citation rate: 0.00%; articles published: 154.