Asynchronous federated learning on heterogeneous devices: A survey

Impact Factor: 13.3 | CAS Tier 1 (Computer Science) | JCR Q1: COMPUTER SCIENCE, INFORMATION SYSTEMS
Chenhao Xu, Youyang Qu, Yong Xiang, Longxiang Gao
{"title":"异构设备上的异步联邦学习:综述","authors":"Chenhao Xu ,&nbsp;Youyang Qu ,&nbsp;Yong Xiang ,&nbsp;Longxiang Gao","doi":"10.1016/j.cosrev.2023.100595","DOIUrl":null,"url":null,"abstract":"<div><p>Federated learning (FL) is a kind of distributed machine learning framework, where the global model is generated on the centralized aggregation server based on the parameters of local models, addressing concerns about privacy leakage caused by the collection of local training data. With the growing computational and communication capacities of edge and IoT devices, applying FL on heterogeneous devices to train machine learning models is becoming a prevailing trend. Nonetheless, the synchronous aggregation strategy in the classic FL paradigm, particularly on heterogeneous devices, encounters limitations in resource utilization due to the need to wait for slow devices before aggregation in each training round. Furthermore, the uneven distribution of data across devices (i.e. data heterogeneity) in real-world scenarios adversely impacts the accuracy of the global model. Consequently, many asynchronous FL (AFL) approaches have been introduced across various application contexts to enhance efficiency, performance, privacy, and security. This survey comprehensively analyzes and summarizes existing AFL variations using a novel classification scheme, including device heterogeneity, data heterogeneity, privacy, and security on heterogeneous devices, as well as applications on heterogeneous devices. Finally, this survey reveals rising challenges and presents potentially promising research directions in this under-investigated domain.</p></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"50 ","pages":"Article 100595"},"PeriodicalIF":13.3000,"publicationDate":"2023-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"90","resultStr":"{\"title\":\"Asynchronous federated learning on heterogeneous devices: A survey\",\"authors\":\"Chenhao Xu ,&nbsp;Youyang Qu ,&nbsp;Yong Xiang ,&nbsp;Longxiang Gao\",\"doi\":\"10.1016/j.cosrev.2023.100595\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Federated learning (FL) is a kind of distributed machine learning framework, where the global model is generated on the centralized aggregation server based on the parameters of local models, addressing concerns about privacy leakage caused by the collection of local training data. With the growing computational and communication capacities of edge and IoT devices, applying FL on heterogeneous devices to train machine learning models is becoming a prevailing trend. Nonetheless, the synchronous aggregation strategy in the classic FL paradigm, particularly on heterogeneous devices, encounters limitations in resource utilization due to the need to wait for slow devices before aggregation in each training round. Furthermore, the uneven distribution of data across devices (i.e. data heterogeneity) in real-world scenarios adversely impacts the accuracy of the global model. Consequently, many asynchronous FL (AFL) approaches have been introduced across various application contexts to enhance efficiency, performance, privacy, and security. This survey comprehensively analyzes and summarizes existing AFL variations using a novel classification scheme, including device heterogeneity, data heterogeneity, privacy, and security on heterogeneous devices, as well as applications on heterogeneous devices. 
Finally, this survey reveals rising challenges and presents potentially promising research directions in this under-investigated domain.</p></div>\",\"PeriodicalId\":48633,\"journal\":{\"name\":\"Computer Science Review\",\"volume\":\"50 \",\"pages\":\"Article 100595\"},\"PeriodicalIF\":13.3000,\"publicationDate\":\"2023-10-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"90\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Science Review\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S157401372300062X\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Science Review","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S157401372300062X","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 90

Abstract

Federated learning (FL) is a distributed machine learning framework in which a global model is generated on a centralized aggregation server from the parameters of local models, addressing concerns about privacy leakage caused by the collection of local training data. With the growing computational and communication capacities of edge and IoT devices, applying FL on heterogeneous devices to train machine learning models is becoming a prevailing trend. Nonetheless, the synchronous aggregation strategy of the classic FL paradigm, particularly on heterogeneous devices, suffers from poor resource utilization because each training round must wait for the slowest devices before aggregation. Furthermore, the uneven distribution of data across devices (i.e., data heterogeneity) in real-world scenarios adversely impacts the accuracy of the global model. Consequently, many asynchronous FL (AFL) approaches have been introduced across various application contexts to enhance efficiency, performance, privacy, and security. This survey comprehensively analyzes and summarizes existing AFL variations using a novel classification scheme covering device heterogeneity, data heterogeneity, privacy and security on heterogeneous devices, and applications on heterogeneous devices. Finally, it highlights emerging challenges and presents potentially promising research directions in this under-investigated domain.
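To make the synchronous/asynchronous contrast above concrete, the sketch below (Python, not taken from the paper) shows a staleness-weighted asynchronous aggregation step in the style of FedAsync: the server blends a client's update into the global model as soon as it arrives, instead of waiting for every device in the round. The function name `async_aggregate`, the `mixing_rate` parameter, and the polynomial staleness discount are illustrative assumptions, not the survey's or any specific system's exact formulation.

```python
# Minimal sketch of asynchronous, staleness-aware aggregation (assumed, illustrative).
import numpy as np

def async_aggregate(global_model, client_update, client_round, server_round,
                    mixing_rate=0.5):
    """Blend one client's update into the global model on arrival,
    down-weighting updates computed against an older global model."""
    staleness = server_round - client_round          # how outdated the update is
    staleness_discount = 1.0 / (1.0 + staleness)     # simple polynomial decay
    alpha = mixing_rate * staleness_discount         # effective mixing weight
    return (1.0 - alpha) * global_model + alpha * client_update

# Toy usage: the server is at round 10; a slow client trained against round 7.
global_model = np.zeros(4)
client_update = np.ones(4)
new_global = async_aggregate(global_model, client_update,
                             client_round=7, server_round=10)
print(new_global)  # weights move toward 1.0, scaled by the staleness discount
```

Under such a scheme, fast devices never block on stragglers: a straggler's contribution still reaches the server, but the staleness discount keeps an update computed against an old global model from dominating the current one.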

Source journal: Computer Science Review
Category: Computer Science - General Computer Science
CiteScore: 32.70
Self-citation rate: 0.00%
Articles published: 26
Review time: 51 days
Journal description: Computer Science Review, a publication dedicated to research surveys and expository overviews of open problems in computer science, targets a broad audience within the field seeking comprehensive insights into the latest developments. The journal welcomes articles from various fields as long as their content impacts the advancement of computer science. In particular, articles that review the application of well-known Computer Science methods to other areas are in scope only if these articles advance the fundamental understanding of those methods.