{"title":"单面信任联邦学习的在线镜像下降保护隐私","authors":"O. Odeyomi, G. Záruba","doi":"10.1109/SSCI50451.2021.9659544","DOIUrl":null,"url":null,"abstract":"This paper discusses how clients in a federated learning system can collaborate with privacy guarantee in a fully decentralized setting without a central server. Most existing work includes a central server that aggregates the local updates from the clients and coordinates the training. Thus, the setting in this existing work is prone to communication and computational bottlenecks, especially when large number of clients are involved. Also, most existing federated learning algorithms do not cater for situations where the data distribution is time-varying such as in real-time traffic monitoring. To address these problems, this paper proposes a differentially-private online mirror descent algorithm. To provide additional privacy to the loss gradients of the clients, local differential privacy is introduced. Simulation results are based on a proposed differentially-private exponential gradient algorithm, which is a variant of differentially-private online mirror descent algorithm with entropic regularizer. The simulation shows that all the clients can converge to the global optimal vector over time. The regret bound of the proposed differentially-private exponential gradient algorithm is compared with the regret bounds of some state-of-the-art online federated learning algorithms found in the literature.","PeriodicalId":255763,"journal":{"name":"2021 IEEE Symposium Series on Computational Intelligence (SSCI)","volume":"7 7","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Privacy-Preserving Online Mirror Descent for Federated Learning with Single-Sided Trust\",\"authors\":\"O. Odeyomi, G. Záruba\",\"doi\":\"10.1109/SSCI50451.2021.9659544\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper discusses how clients in a federated learning system can collaborate with privacy guarantee in a fully decentralized setting without a central server. Most existing work includes a central server that aggregates the local updates from the clients and coordinates the training. Thus, the setting in this existing work is prone to communication and computational bottlenecks, especially when large number of clients are involved. Also, most existing federated learning algorithms do not cater for situations where the data distribution is time-varying such as in real-time traffic monitoring. To address these problems, this paper proposes a differentially-private online mirror descent algorithm. To provide additional privacy to the loss gradients of the clients, local differential privacy is introduced. Simulation results are based on a proposed differentially-private exponential gradient algorithm, which is a variant of differentially-private online mirror descent algorithm with entropic regularizer. The simulation shows that all the clients can converge to the global optimal vector over time. 
The regret bound of the proposed differentially-private exponential gradient algorithm is compared with the regret bounds of some state-of-the-art online federated learning algorithms found in the literature.\",\"PeriodicalId\":255763,\"journal\":{\"name\":\"2021 IEEE Symposium Series on Computational Intelligence (SSCI)\",\"volume\":\"7 7\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE Symposium Series on Computational Intelligence (SSCI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SSCI50451.2021.9659544\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Symposium Series on Computational Intelligence (SSCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SSCI50451.2021.9659544","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Privacy-Preserving Online Mirror Descent for Federated Learning with Single-Sided Trust
This paper discusses how clients in a federated learning system can collaborate with privacy guarantees in a fully decentralized setting without a central server. Most existing work relies on a central server that aggregates the clients' local updates and coordinates the training; such a setting is prone to communication and computational bottlenecks, especially when a large number of clients is involved. Moreover, most existing federated learning algorithms do not handle situations where the data distribution is time-varying, as in real-time traffic monitoring. To address these problems, this paper proposes a differentially-private online mirror descent algorithm. To provide additional privacy for the clients' loss gradients, local differential privacy is introduced. Simulation results are based on a proposed differentially-private exponential gradient algorithm, a variant of differentially-private online mirror descent with an entropic regularizer. The simulations show that all clients converge to the global optimal vector over time. The regret bound of the proposed differentially-private exponential gradient algorithm is compared with the regret bounds of state-of-the-art online federated learning algorithms from the literature.
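The abstract gives no pseudocode, but the two ingredients it names, an entropic-regularizer mirror descent (exponential gradient) update and local differential privacy applied to the loss gradients, compose as in the minimal single-client sketch below. Everything here (the function name, the quadratic toy loss, the step-size schedule, and the Laplace noise calibration) is an illustrative assumption, not the paper's actual algorithm, which additionally has clients exchange updates with neighbors in a decentralized topology.

```python
import numpy as np

def dp_exponential_gradient_step(w, grad, eta, epsilon, grad_bound, rng):
    """One differentially-private exponential gradient (entropic mirror
    descent) step on the probability simplex.

    The gradient is perturbed with Laplace noise *before* it is used or
    shared, which is the usual local-differential-privacy pattern. The
    noise scale below assumes each gradient coordinate lies in
    [-grad_bound, grad_bound]; the exact calibration of scale to the
    privacy budget in the paper is not reproduced here.
    """
    # Local differential privacy: Laplace noise on each coordinate,
    # scale chosen from the assumed per-coordinate sensitivity.
    noise = rng.laplace(loc=0.0, scale=2.0 * grad_bound / epsilon,
                        size=grad.shape)
    noisy_grad = grad + noise

    # Entropic mirror descent: multiplicative update, then renormalize
    # so the iterate stays on the probability simplex.
    w_new = w * np.exp(-eta * noisy_grad)
    return w_new / w_new.sum()

# Toy usage: one client tracking a fixed optimum under a quadratic loss.
rng = np.random.default_rng(0)
d = 5
w = np.full(d, 1.0 / d)                  # uniform start on the simplex
target = rng.dirichlet(np.ones(d))       # unknown optimal vector
for t in range(1, 201):
    grad = 2.0 * (w - target)            # gradient of ||w - target||^2
    w = dp_exponential_gradient_step(w, grad, eta=0.1 / np.sqrt(t),
                                     epsilon=1.0, grad_bound=2.0, rng=rng)
```

The multiplicative update followed by renormalization is exactly mirror descent with the negative-entropy regularizer, which is why the iterates never leave the simplex; injecting the noise before the update means any message derived from the gradient already carries the local privacy guarantee.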