{"title":"具有异构数据分布的联邦学习算法:一个经验评价","authors":"Alessio Mora, Davide Fantini, P. Bellavista","doi":"10.1109/SEC54971.2022.00049","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL) is a paradigm that permits to learn a Deep Learning model without centralizing raw data, and has recently received growing interest primarily as a solution to improve privacy guarantees for end users while still distilling knowledge from a population of devices (e.g., edge devices or edge gateways managing a local set of visiting devices). However, the performance of FL algorithms significantly drops in presence of heterogeneous data distributions among the learners in the federation – this setting is very common in real practical applications, with clients holding data related to their habits, preferences, or environment. Several algorithms have been recently proposed to try to deal with data heterogeneity in FL settings under different assumptions and with differentiated pros/cons. In this article, we originally provide a review of the most relevant related solutions in the literature to alleviate the harmfulness of non-identically and independently distributed (IID) data, highlighting the intuition behind these alternative strategies as well as their possible drawbacks. Furthermore, we propose an empirical comparison among a subset of such state-of-the-art solutions under different levels of data hetero-geneity running them in the same operating conditions. We end up identifying the most promising approaches considering both empirical performances and defining characteristics (e.g., assumptions the strategy possibly make). The code is available online at https://github.com/alessiomora/fI_algorithms_non_iid.","PeriodicalId":364062,"journal":{"name":"2022 IEEE/ACM 7th Symposium on Edge Computing (SEC)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Federated Learning Algorithms with Heterogeneous Data Distributions: An Empirical Evaluation\",\"authors\":\"Alessio Mora, Davide Fantini, P. Bellavista\",\"doi\":\"10.1109/SEC54971.2022.00049\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated Learning (FL) is a paradigm that permits to learn a Deep Learning model without centralizing raw data, and has recently received growing interest primarily as a solution to improve privacy guarantees for end users while still distilling knowledge from a population of devices (e.g., edge devices or edge gateways managing a local set of visiting devices). However, the performance of FL algorithms significantly drops in presence of heterogeneous data distributions among the learners in the federation – this setting is very common in real practical applications, with clients holding data related to their habits, preferences, or environment. Several algorithms have been recently proposed to try to deal with data heterogeneity in FL settings under different assumptions and with differentiated pros/cons. In this article, we originally provide a review of the most relevant related solutions in the literature to alleviate the harmfulness of non-identically and independently distributed (IID) data, highlighting the intuition behind these alternative strategies as well as their possible drawbacks. 
Furthermore, we propose an empirical comparison among a subset of such state-of-the-art solutions under different levels of data hetero-geneity running them in the same operating conditions. We end up identifying the most promising approaches considering both empirical performances and defining characteristics (e.g., assumptions the strategy possibly make). The code is available online at https://github.com/alessiomora/fI_algorithms_non_iid.\",\"PeriodicalId\":364062,\"journal\":{\"name\":\"2022 IEEE/ACM 7th Symposium on Edge Computing (SEC)\",\"volume\":\"38 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE/ACM 7th Symposium on Edge Computing (SEC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SEC54971.2022.00049\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE/ACM 7th Symposium on Edge Computing (SEC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SEC54971.2022.00049","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Federated Learning Algorithms with Heterogeneous Data Distributions: An Empirical Evaluation
Federated Learning (FL) is a paradigm that enables training a Deep Learning model without centralizing raw data. It has recently received growing interest, primarily as a solution to improve privacy guarantees for end users while still distilling knowledge from a population of devices (e.g., edge devices or edge gateways managing a local set of visiting devices). However, the performance of FL algorithms drops significantly in the presence of heterogeneous data distributions among the learners in the federation; this setting is very common in practical applications, with clients holding data related to their habits, preferences, or environment. Several algorithms have recently been proposed to deal with data heterogeneity in FL settings, under different assumptions and with differentiated pros and cons. In this article, we provide an original review of the most relevant solutions in the literature for alleviating the harmfulness of non-identically and independently distributed (non-IID) data, highlighting the intuition behind these alternative strategies as well as their possible drawbacks. Furthermore, we propose an empirical comparison among a subset of such state-of-the-art solutions under different levels of data heterogeneity, running them under the same operating conditions. We conclude by identifying the most promising approaches, considering both empirical performance and defining characteristics (e.g., the assumptions each strategy makes). The code is available online at https://github.com/alessiomora/fl_algorithms_non_iid.
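To make the setting concrete, the sketch below illustrates, under assumptions of ours rather than the paper's actual experimental pipeline, how client data can be partitioned with varying degrees of label-distribution skew using a Dirichlet distribution (a common way to control the level of data heterogeneity) and how a FedAvg-style server aggregates client updates by weighted averaging. Function and parameter names (e.g., dirichlet_partition, fedavg_aggregate, alpha) are illustrative and not taken from the released repository.

```python
# Illustrative sketch (not the paper's code): Dirichlet-based non-IID partitioning
# and FedAvg-style weighted averaging of client model parameters.
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients with label-distribution skew.

    Smaller alpha -> more heterogeneous (non-IID) client datasets;
    larger alpha -> closer to an IID split.
    """
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx_c = np.where(labels == c)[0]
        rng.shuffle(idx_c)
        # Proportion of class-c samples assigned to each client.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        splits = (np.cumsum(proportions)[:-1] * len(idx_c)).astype(int)
        for client_id, part in enumerate(np.split(idx_c, splits)):
            client_indices[client_id].extend(part.tolist())
    return [np.array(idx) for idx in client_indices]

def fedavg_aggregate(client_weights, client_sizes):
    """Weighted average of per-client parameter lists (FedAvg server step)."""
    total = float(sum(client_sizes))
    coeffs = [n / total for n in client_sizes]
    return [
        sum(c * layer for c, layer in zip(coeffs, layers))
        for layers in zip(*client_weights)
    ]

if __name__ == "__main__":
    # Toy example: 10,000 samples, 10 classes, 20 clients, strong skew (alpha=0.1).
    labels = np.random.default_rng(0).integers(0, 10, size=10_000)
    parts = dirichlet_partition(labels, num_clients=20, alpha=0.1)
    print("samples per client:", [len(p) for p in parts])

    # Two toy "clients", each holding a single-layer model (a 3x3 matrix).
    w1, w2 = [np.ones((3, 3))], [3 * np.ones((3, 3))]
    global_w = fedavg_aggregate([w1, w2], client_sizes=[100, 300])
    print(global_w[0])  # expected: all entries equal to 2.5
```

In this kind of setup, sweeping alpha (e.g., from 100 down to 0.1) is a simple way to generate the "different levels of data heterogeneity" the abstract refers to, while keeping the rest of the training pipeline fixed.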