Jiaxang Tang, Zeshan Fayyaz, Mohammad A. Salahuddin, Raouf Boutaba, Zhi-Li Zhang, Ali Anwar
HERL: Tiered Federated Learning with Adaptive Homomorphic Encryption using Reinforcement Learning
Federated Learning is a well-researched approach for collaboratively training
machine learning models across decentralized data while preserving privacy.
However, integrating Homomorphic Encryption to ensure data confidentiality
introduces significant computational and communication overheads, particularly
in heterogeneous environments where clients have varying computational
capacities and security needs. In this paper, we propose HERL, a Reinforcement
Learning-based approach that uses Q-Learning to dynamically optimize encryption
parameters, specifically the polynomial modulus degree, $N$, and the
coefficient modulus, $q$, across different client tiers. Our proposed method
first profiles and tiers clients according to the chosen clustering approach,
then uses an RL agent to dynamically select the most suitable encryption
parameters for each tier. Experimental results demonstrate that our
approach significantly reduces the computational overhead while maintaining
utility and a high level of security. Empirical results show that HERL improves
utility by 17%, reduces the convergence time by up to 24%, and increases
convergence efficiency by up to 30%, with minimal security loss.
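The abstract describes tabular Q-learning over a discrete space of encryption-parameter choices, with the client tier as the state. The paper's actual reward function and parameter grid are not given here, so the sketch below is purely illustrative: the `(N, q_bits)` pairs, the `reward` trade-off between compute cost and security margin, and the tier count are all assumptions, not HERL's implementation.

```python
import random

# Hypothetical CKKS-style parameter choices: (polynomial modulus degree N,
# coefficient modulus bit-length). Values are illustrative, not from the paper.
ACTIONS = [(2**12, 109), (2**13, 218), (2**14, 438), (2**15, 881)]
TIERS = 3  # client tiers assumed to come from a prior clustering/profiling step

def reward(tier, action_idx):
    """Toy reward: a larger N costs more compute (penalized more heavily for
    weaker tiers, i.e. higher tier index) but buys a larger security margin."""
    n, q_bits = ACTIONS[action_idx]
    compute_cost = (tier + 1) * (n / 2**15)   # weaker tiers pay more per op
    security_gain = q_bits / 881              # normalized to the largest q
    return security_gain - compute_cost

# Tabular Q-learning: one row per tier (state), one column per parameter choice.
Q = [[0.0] * len(ACTIONS) for _ in range(TIERS)]
alpha, eps = 0.1, 0.2
random.seed(0)

for episode in range(5000):
    tier = random.randrange(TIERS)            # state = the sampled client tier
    if random.random() < eps:                 # epsilon-greedy exploration
        a = random.randrange(len(ACTIONS))
    else:
        a = max(range(len(ACTIONS)), key=lambda i: Q[tier][i])
    # Bandit-style update: tiers are sampled independently each episode,
    # so there is no discounted next-state term.
    Q[tier][a] += alpha * (reward(tier, a) - Q[tier][a])

for tier in range(TIERS):
    best = max(range(len(ACTIONS)), key=lambda i: Q[tier][i])
    print(f"tier {tier}: N={ACTIONS[best][0]}, q_bits={ACTIONS[best][1]}")
```

Under this toy reward, the agent learns to assign the largest parameters to the most capable tier and the cheapest parameters to the weakest tiers, which mirrors the adaptive per-tier selection the abstract describes.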