Towards an Efficient Federated Learning Framework with Selective Aggregation
Anirudha Kulkarni, Abhinav Kumar, R. Shorey, Rohit Verma
2024 16th International Conference on COMmunication Systems & NETworkS (COMSNETS), pp. 623-627
DOI: 10.1109/COMSNETS59351.2024.10426966
Published: 2024-01-03
Federated Learning (FL) shows promise for collaborative, decentralized machine learning, but it faces efficiency challenges, chiefly latency bottlenecks induced by network stragglers and the need for complex aggregation techniques. To address these issues, ongoing research explores asynchronous FL models, including the Asynchronous Parallel Federated Learning [5] framework. This study investigates how the number of worker nodes affects key metrics: more nodes can speed up convergence but may also increase communication overhead and vulnerability to stragglers. We aim to quantify how varying the number of worker-node updates used in one global aggregation affects convergence speed, communication efficiency, model accuracy, and system robustness, so that asynchronous FL systems can be configured optimally. Such analysis is important for practical, scalable FL deployments, helping to mitigate the challenges posed by network stragglers, data distribution, and security. This work analyses Asynchronous Parallel Federated Learning and demonstrates a shift in approach: selectively aggregating the earliest-arriving worker-node updates, controlled by a novel parameter 'x', to improve efficiency.
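The selective-aggregation idea can be illustrated with a minimal sketch: per global round, the server averages only the first x worker updates to arrive rather than waiting for all workers, which sidesteps stragglers. The sketch below is a hypothetical illustration, not the paper's implementation; the simulated delays, the stand-in local update, and names such as simulate_round and fed-avg-style averaging are assumptions for the example, with only the role of the parameter x taken from the abstract.

import heapq
import numpy as np

rng = np.random.default_rng(0)

def simulate_round(global_model, workers, x):
    """Aggregate only the x earliest-arriving worker updates (straggler-tolerant)."""
    arrivals = []
    for w in workers:
        # Simulated network/compute delay; stragglers have large latency.
        delay = rng.exponential(w["latency"])
        # Stand-in for a locally trained update (assumption, not real training).
        update = global_model + rng.normal(0.0, 0.1, global_model.shape)
        arrivals.append((delay, update))
    # Keep the x updates with the smallest arrival times.
    earliest = heapq.nsmallest(x, arrivals, key=lambda a: a[0])
    # FedAvg-style mean over the selected updates only.
    return np.mean([u for _, u in earliest], axis=0)

model = np.zeros(4)
workers = [{"latency": l} for l in (0.1, 0.1, 0.2, 2.0, 5.0)]  # two stragglers
for _ in range(3):
    model = simulate_round(model, workers, x=3)  # slowest workers are skipped

In this toy setup, smaller x shortens each round (the server returns as soon as x updates arrive) at the cost of averaging over fewer workers, which is exactly the convergence-speed versus accuracy/robustness trade-off the abstract sets out to quantify.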