{"title":"弹弓:联盟学习的全局有利局部更新","authors":"Jialiang Liu;Huawei Huang;Chun Wang;Sicong Zhou;Ruixin Li;Zibin Zheng","doi":"10.1109/OJCS.2024.3356599","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL), as a promising distributed learning paradigm, is proposed to solve the contradiction between the data hunger of modern machine learning and the increasingly stringent need for data privacy. However, clients naturally present different distributions of their local data and inconsistent local optima, which leads to poor model performance of FL. Many previous methods focus on mitigating objective inconsistency. Although local objective consistency can be guaranteed when the number of communication rounds is infinite, we should notice that the accumulation of global drift and the limitation on the potential of local updates are non-negligible in those previous methods. In this article, we study a new framework for data-heterogeneity FL, in which the local updates in clients towards the global optimum can accelerate FL. We propose a new approach called \n<italic>Slingshot</i>\n. Slingshot's design goals are twofold, i.e., i) to retain the potential of local updates, and ii) to combine local and global trends. Experimental results show that \n<italic>Slingshot</i>\n helps local updates become more globally favorable and outperforms other popular methods under various FL settings. For example, on CIFAR10, \n<italic>Slingshot</i>\n achieves 46.52% improvement in test accuracy and 48.21× speedup for a lightweight neural network named \n<italic>SqueezeNet</i>\n.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"5 ","pages":"39-49"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10411043","citationCount":"0","resultStr":"{\"title\":\"Slingshot: Globally Favorable Local Updates for Federated Learning\",\"authors\":\"Jialiang Liu;Huawei Huang;Chun Wang;Sicong Zhou;Ruixin Li;Zibin Zheng\",\"doi\":\"10.1109/OJCS.2024.3356599\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated Learning (FL), as a promising distributed learning paradigm, is proposed to solve the contradiction between the data hunger of modern machine learning and the increasingly stringent need for data privacy. However, clients naturally present different distributions of their local data and inconsistent local optima, which leads to poor model performance of FL. Many previous methods focus on mitigating objective inconsistency. Although local objective consistency can be guaranteed when the number of communication rounds is infinite, we should notice that the accumulation of global drift and the limitation on the potential of local updates are non-negligible in those previous methods. In this article, we study a new framework for data-heterogeneity FL, in which the local updates in clients towards the global optimum can accelerate FL. We propose a new approach called \\n<italic>Slingshot</i>\\n. Slingshot's design goals are twofold, i.e., i) to retain the potential of local updates, and ii) to combine local and global trends. Experimental results show that \\n<italic>Slingshot</i>\\n helps local updates become more globally favorable and outperforms other popular methods under various FL settings. 
For example, on CIFAR10, \\n<italic>Slingshot</i>\\n achieves 46.52% improvement in test accuracy and 48.21× speedup for a lightweight neural network named \\n<italic>SqueezeNet</i>\\n.\",\"PeriodicalId\":13205,\"journal\":{\"name\":\"IEEE Open Journal of the Computer Society\",\"volume\":\"5 \",\"pages\":\"39-49\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-01-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10411043\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Open Journal of the Computer Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10411043/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of the Computer Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10411043/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Federated Learning (FL) is a promising distributed learning paradigm proposed to resolve the tension between the data hunger of modern machine learning and increasingly stringent data-privacy requirements. However, clients naturally hold differently distributed local data and therefore pursue inconsistent local optima, which degrades the performance of FL models. Many previous methods focus on mitigating this objective inconsistency. Although local objective consistency can be guaranteed as the number of communication rounds tends to infinity, the accumulation of global drift and the restriction placed on the potential of local updates remain non-negligible in those methods. In this article, we study a new framework for data-heterogeneous FL in which local updates on clients are steered toward the global optimum, thereby accelerating FL. We propose a new approach called Slingshot. Slingshot's design goals are twofold: i) to retain the potential of local updates, and ii) to combine local and global trends. Experimental results show that Slingshot makes local updates more globally favorable and outperforms other popular methods under various FL settings. For example, on CIFAR-10, Slingshot achieves a 46.52% improvement in test accuracy and a 48.21× speedup for SqueezeNet, a lightweight neural network.
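To make the general idea of "globally favorable local updates" concrete, the sketch below simulates a FedAvg-style training loop in which each client blends its purely local descent direction with a server-tracked direction of recent global-model movement. This is only an illustration of the concept under assumed details; it is not the published Slingshot algorithm, whose exact update rule is not given in the abstract. The blend weight alpha, the global_trend variable, the toy quadratic client objectives, and all hyperparameter values are hypothetical choices for the example.

```python
# Minimal sketch (assumptions throughout): local updates blended with a
# global-trend direction in a FedAvg-style loop. Not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Toy heterogeneous clients: client k minimizes f_k(w) = 0.5 * ||w - c_k||^2,
# so local optima c_k differ across clients, mimicking non-IID data.
client_optima = [rng.normal(size=2) for _ in range(5)]

def local_grad(w, c_k):
    """Gradient of the toy local objective 0.5 * ||w - c_k||^2."""
    return w - c_k

w_global = np.zeros(2)       # server (global) model
global_trend = np.zeros(2)   # smoothed direction of recent global-model movement
alpha, lr, local_steps = 0.5, 0.1, 10   # blend weight, step size, local steps (assumed)

for rnd in range(50):
    client_models = []
    for c_k in client_optima:
        w = w_global.copy()
        for _ in range(local_steps):
            # Blend the purely local descent direction with the global trend,
            # so each client makes local progress while its updates stay
            # roughly aligned with where the global model has been heading.
            step = (1 - alpha) * local_grad(w, c_k) - alpha * global_trend
            w -= lr * step
        client_models.append(w)
    new_global = np.mean(client_models, axis=0)            # FedAvg-style aggregation
    global_trend = 0.9 * global_trend + 0.1 * (new_global - w_global)
    w_global = new_global

print("final global model:  ", w_global)
print("mean of local optima:", np.mean(client_optima, axis=0))
```

In this toy setting, the global optimum is the mean of the clients' local optima; the blended step keeps each client from drifting all the way to its own optimum during the many local steps between communication rounds, which is the failure mode the abstract attributes to data heterogeneity.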