{"title":"FedStrag: Straggler-aware federated learning for low resource devices","authors":"Aditya Kumar, Satish Narayana Srirama","doi":"10.1016/j.dcan.2024.12.004","DOIUrl":null,"url":null,"abstract":"<div><div>Federated Learning (FL) has become a popular training paradigm in recent years. However, stragglers are critical bottlenecks in an Internet of Things (IoT) network while training. These nodes produce stale updates to the server, which slow down the convergence. In this paper, we studied the impact of the stale updates on the global model, which is observed to be significant. To address this, we propose a weighted averaging scheme, FedStrag, that optimizes the training with stale updates. The work is focused on training a model in an IoT network that has multiple challenges, such as resource constraints, stragglers, network issues, device heterogeneity, etc. To this end, we developed a time-bounded asynchronous FL paradigm that can train a model on the continuous inflow of data in the edge-fog-cloud continuum. To test the FedStrag approach, a model is trained with multiple stragglers scenarios on both Independent and Identically Distributed (IID) and non-IID datasets on Raspberry Pis. The experiment results suggest that the FedStrag outperforms the baseline FedAvg in all possible cases.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"11 4","pages":"Pages 1214-1224"},"PeriodicalIF":7.5000,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital Communications and Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S235286482400169X","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Citations: 0
Abstract
Federated Learning (FL) has become a popular training paradigm in recent years. However, stragglers are a critical bottleneck when training in an Internet of Things (IoT) network. These nodes deliver stale updates to the server, which slow down convergence. In this paper, we study the impact of stale updates on the global model and observe it to be significant. To address this, we propose a weighted averaging scheme, FedStrag, that optimizes training in the presence of stale updates. The work focuses on training a model in an IoT network that faces multiple challenges, such as resource constraints, stragglers, network issues, and device heterogeneity. To this end, we developed a time-bounded asynchronous FL paradigm that can train a model on a continuous inflow of data in the edge-fog-cloud continuum. To evaluate the FedStrag approach, a model is trained under multiple straggler scenarios on both Independent and Identically Distributed (IID) and non-IID datasets on Raspberry Pis. The experimental results suggest that FedStrag outperforms the baseline FedAvg in all evaluated cases.
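The abstract does not spell out FedStrag's weighting rule, but the general idea of down-weighting stale straggler contributions during server-side aggregation can be illustrated with a short sketch. The staleness discount 1/(1 + staleness), the per-client fields, and the function name fedstrag_aggregate below are assumptions for illustration only, not the authors' published formula.

```python
# Minimal sketch of staleness-aware weighted averaging in FL.
# Assumption: each client update is discounted by 1/(1 + staleness),
# where staleness = current round - round the client started from.
# This is NOT the paper's exact FedStrag rule, only an illustration.
import numpy as np

def fedstrag_aggregate(global_model, client_updates, current_round):
    """Aggregate client updates, down-weighting stale (straggler) updates.

    client_updates: list of dicts with keys
        'weights'   -- np.ndarray of model parameters from the client
        'n_samples' -- number of local training samples
        'round'     -- the global round the client started training from
    """
    weighted_sum = np.zeros_like(global_model)
    total_weight = 0.0
    for upd in client_updates:
        staleness = current_round - upd['round']      # 0 for fresh updates
        staleness_factor = 1.0 / (1.0 + staleness)    # assumed discount
        w = upd['n_samples'] * staleness_factor       # data size x freshness
        weighted_sum += w * upd['weights']
        total_weight += w
    if total_weight == 0.0:
        return global_model                           # no usable updates this round
    return weighted_sum / total_weight


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    global_model = rng.normal(size=5)
    updates = [
        {"weights": rng.normal(size=5), "n_samples": 100, "round": 10},  # fresh
        {"weights": rng.normal(size=5), "n_samples": 100, "round": 7},   # 3 rounds stale
    ]
    print(fedstrag_aggregate(global_model, updates, current_round=10))
```

In a time-bounded asynchronous round of the kind the abstract describes, the server would presumably invoke such an aggregation at the round deadline with whatever updates have arrived, treating updates computed against older global rounds as stale.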
About the Journal:
Digital Communications and Networks is a prestigious journal that focuses on communication systems and networks. We publish only top-notch original articles and authoritative reviews, which undergo rigorous peer review. We are proud to announce that all our articles are fully Open Access and can be accessed on ScienceDirect. Our journal is recognized and indexed by eminent databases such as the Science Citation Index Expanded (SCIE) and Scopus.
In addition to regular articles, we may also consider exceptional conference papers that have been significantly expanded. Furthermore, we periodically release special issues that focus on specific aspects of the field.
In conclusion, Digital Communications and Networks is a leading journal that guarantees exceptional quality and accessibility for researchers and scholars in the field of communication systems and networks.