{"title":"Federated Learning: Optimizing Objective Function","authors":"Aishwarya Asesh","doi":"10.1109/IICAIET51634.2021.9573567","DOIUrl":null,"url":null,"abstract":"A universal server coordinates the training of a single model on a largely distributed network of computers in federated learning. This setting can easily be expanded to a multi-task learning system in order to manage real-world federated datasets with high statistical heterogeneity across devices. Federated learning is very useful as a framework for real-world data and federated multi-task learning has been applied to convex models. This research work discusses and evaluates possibility of sparser gradient changes to outperform the existing state-of-the-art for federated learning on real-world federated datasets as well as imputed data values. The experiments investigate the effect of rolling data or data randomization and adaptive global frequency update scheduling on the convergence of the federated learning algorithm. The results show that convergence speed and gradient curve are considerably affected by number of contact rounds between worker and aggregator and is unaffected by data heterogeneity or client sampling. The research is the core part of an extended experimental setup that will follow to better understand the behavior of distributed learning, by developing a simulation to track weights and loss function gradients during the training.","PeriodicalId":234229,"journal":{"name":"2021 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IICAIET51634.2021.9573567","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In federated learning, a central server coordinates the training of a single model across a large, distributed network of devices. This setting extends naturally to a multi-task learning formulation in order to handle real-world federated datasets with high statistical heterogeneity across devices. Federated learning is a practical framework for real-world data, and federated multi-task learning has so far been applied to convex models. This work discusses and evaluates whether sparser gradient updates can outperform the existing state of the art for federated learning on real-world federated datasets as well as on imputed data values. The experiments investigate the effect of rolling (randomized) data and of adaptive scheduling of the global update frequency on the convergence of the federated learning algorithm. The results show that convergence speed and the shape of the gradient curve are considerably affected by the number of communication rounds between workers and the aggregator, and are unaffected by data heterogeneity or client sampling. This research is the core of an extended experimental setup, to follow, that aims to better understand the behavior of distributed learning by developing a simulation that tracks weights and loss-function gradients during training.
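
To make the pipeline the abstract describes concrete, here is a minimal sketch of server-coordinated federated averaging in which top-k magnitude sparsification stands in for the "sparser gradient updates", the number of communication rounds is a tunable constant, and the global weights and gradient norms are logged each round as the planned simulation would. All names, constants, and the synthetic data below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants (assumptions, not values from the paper).
NUM_CLIENTS = 4
ROUNDS = 20        # communication rounds between workers and the aggregator
LOCAL_STEPS = 5    # local gradient steps per round; adaptive scheduling would vary this
LR = 0.01
TOP_K = 3          # keep only the k largest-magnitude components of each update
DIM = 10

# Synthetic least-squares data; the per-client mean shift mimics
# statistical heterogeneity across devices.
true_w = rng.normal(size=DIM)
clients = []
for c in range(NUM_CLIENTS):
    X = rng.normal(loc=0.5 * c, size=(50, DIM))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

def local_update(w, X, y):
    """Run LOCAL_STEPS of gradient descent locally; return the model delta."""
    w_local = w.copy()
    for _ in range(LOCAL_STEPS):
        grad = 2.0 * X.T @ (X @ w_local - y) / len(y)
        w_local -= LR * grad
    return w_local - w

def sparsify(delta, k):
    """Zero out all but the k largest-magnitude entries (top-k sparsification)."""
    sparse = np.zeros_like(delta)
    keep = np.argsort(np.abs(delta))[-k:]
    sparse[keep] = delta[keep]
    return sparse

w_global = np.zeros(DIM)
for r in range(ROUNDS):
    # Each worker computes a local update; the aggregator averages the
    # sparsified deltas into the global model.
    deltas = [sparsify(local_update(w_global, X, y), TOP_K) for X, y in clients]
    w_global += np.mean(deltas, axis=0)

    # Track weights and loss-function gradients during training.
    loss = np.mean([np.mean((X @ w_global - y) ** 2) for X, y in clients])
    grad_norm = np.mean([np.linalg.norm(2.0 * X.T @ (X @ w_global - y) / len(y))
                         for X, y in clients])
    print(f"round {r:2d}  loss {loss:.4f}  |w| {np.linalg.norm(w_global):.3f}  "
          f"mean grad norm {grad_norm:.3f}")
```

Varying LOCAL_STEPS (or ROUNDS) from round to round would be one simple way to emulate the adaptive global update-frequency scheduling the abstract mentions.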