Title: An Experimental Study of Different Aggregation Schemes in Semi-Asynchronous Federated Learning
Authors: Yunbo Li, Jiaping Gui, Yue Wu
DOI: arxiv-2405.16086 (https://doi.org/arxiv-2405.16086)
Venue: arXiv - CS - Performance
Publication date: 2024-05-25
Citations: 0
Abstract
Federated learning is highly valued for enabling high-performance computing in distributed environments while safeguarding data privacy. To address resource heterogeneity, researchers have proposed a semi-asynchronous federated learning (SAFL) architecture. However, the performance gap between different aggregation targets in SAFL remains unexplored. In this paper, we systematically compare the performance of two algorithm modes, FedSGD and FedAvg, which aggregate gradients and models, respectively. Our results across various task scenarios indicate that these two modes exhibit a substantial performance gap. Specifically, FedSGD achieves higher accuracy and faster convergence but suffers more severe fluctuations in accuracy, whereas FedAvg excels at handling straggler issues but converges more slowly with reduced accuracy.
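The core distinction between the two aggregation targets can be sketched in a few lines. This toy NumPy example is not from the paper; the function names, the learning rate, and the flat-vector model representation are illustrative assumptions. It only shows what each mode averages: FedSGD averages client gradients and applies one server-side step, while FedAvg averages the locally trained client models themselves.

```python
import numpy as np

def fedsgd_step(global_model, client_grads, lr=0.1):
    # FedSGD-style update (illustrative): average client GRADIENTS,
    # then apply a single gradient-descent step on the server.
    avg_grad = np.mean(client_grads, axis=0)
    return global_model - lr * avg_grad

def fedavg_step(client_models, weights=None):
    # FedAvg-style update (illustrative): average the client MODELS
    # produced by local training, optionally weighted (e.g. by
    # per-client dataset size).
    if weights is None:
        weights = np.full(len(client_models), 1.0 / len(client_models))
    return np.average(client_models, axis=0, weights=weights)

# Toy usage with two clients and a 2-parameter model:
global_model = np.zeros(2)
grads = [np.array([1.0, 1.0]), np.array([3.0, 3.0])]
models = [np.array([1.0, 1.0]), np.array([3.0, 3.0])]
print(fedsgd_step(global_model, grads))   # one step along the mean gradient
print(fedavg_step(models))                # mean of the client models
```

In a semi-asynchronous setting the practical difference is which quantity stale clients contribute: a stale gradient perturbs the current model directly (consistent with the accuracy fluctuations reported for FedSGD), whereas averaging whole models dilutes stale contributions (consistent with FedAvg's robustness to stragglers but slower convergence).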