SecureFL: Privacy Preserving Federated Learning with SGX and TrustZone
E. Kuznetsov, Yitao Chen, Ming Zhao
2021 IEEE/ACM Symposium on Edge Computing (SEC), pp. 55-67, December 2021
DOI: 10.1145/3453142.3491287
Citations: 12
Abstract
Federated learning allows a large group of edge workers to collaboratively train a shared model without revealing their local data, and it has become a powerful tool for deep learning in heterogeneous environments. User privacy is preserved by keeping the training data local to each device; however, workers must still share their model weights, which can leak private information during collaboration. This paper introduces SecureFL, a practical framework that provides end-to-end security for federated learning. SecureFL integrates widely available Trusted Execution Environments (TEEs) to protect against privacy leaks, and it uses carefully designed partitioning and aggregation techniques to ensure TEE efficiency on both the cloud and the edge workers. A thorough security analysis and performance evaluation show that SecureFL secures the end-to-end federated learning process with overhead that is reasonable given the substantial privacy benefits it provides.
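For context, the aggregation step the abstract refers to is typically a weighted average of the workers' model updates (as in FedAvg). The sketch below illustrates only that computation in plain NumPy; in a SecureFL-style design this step would run inside a server-side TEE (e.g., an SGX enclave) so individual worker weights are never exposed in plaintext. The function name and signature are hypothetical for illustration, not SecureFL's actual API.

```python
import numpy as np

def federated_average(worker_weights, num_examples):
    """Weighted federated averaging of worker model updates.

    worker_weights: list of NumPy arrays, one per worker (same shape).
    num_examples:   list of local dataset sizes, one per worker.

    Hypothetical illustration only -- in a TEE-protected design this
    aggregation would execute inside the enclave on decrypted updates.
    """
    total = sum(num_examples)
    # Each worker's contribution is weighted by its local dataset size.
    contributions = [w * (n / total) for w, n in zip(worker_weights, num_examples)]
    return np.sum(np.stack(contributions), axis=0)

# Example: three workers with toy two-parameter "models".
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
counts = [10, 10, 20]
avg = federated_average(weights, counts)  # -> array([3.5, 4.5])
```

The privacy risk the paper targets arises precisely here: an untrusted aggregator that sees the individual `worker_weights` in plaintext could attempt to reconstruct private training data from them, which is why the aggregation is moved inside a trusted enclave.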