ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks

Niousha Nazemi, Omid Tavallaie, Shuaijun Chen, Anna Maria Mandalario, Kanchana Thilakarathna, Ralph Holz, Hamed Haddadi, Albert Y. Zomaya
Federated Learning (FL) is a promising distributed learning framework designed for privacy-aware applications. FL trains models on client devices without sharing the clients' data and builds a global model on a server by aggregating the clients' model updates. Traditional FL approaches risk exposing sensitive client data because plain model updates are transmitted to the server, leaving them vulnerable to security threats such as model inversion attacks, in which the server infers a client's original training data by monitoring how the trained model changes across rounds.
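As a concrete illustration of this aggregation step, and of what a curious server gets to see, the following minimal Python sketch (illustrative, not the paper's implementation) performs FedAvg-style averaging over plaintext updates:

```python
# A minimal sketch of plain FedAvg-style aggregation, illustrating why
# transmitting unmasked updates exposes them to the server.
import numpy as np

def fedavg(updates, weights=None):
    """Weighted average of client model updates (a list of 1-D arrays)."""
    updates = np.stack(updates)
    if weights is None:
        weights = np.ones(len(updates))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return weights @ updates  # the server sees every individual update in the clear

# Three clients send plaintext updates; the server can inspect each one
# round after round, which is what enables model inversion attacks.
client_updates = [np.array([0.1, -0.2]), np.array([0.3, 0.0]), np.array([-0.1, 0.4])]
print(fedavg(client_updates))
```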
Google's Secure Aggregation (SecAgg) protocol addresses this threat with a double-masking technique, secret sharing, and cryptographic computations, and it remains secure in honest-but-curious and adversarial scenarios with client dropouts. However, even when no active adversary is present, SecAgg's computational and communication costs grow significantly with the number of clients.
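The scaling issue can be seen in a simplified sketch of SecAgg's core pairwise-masking idea. The real protocol's double masking, Shamir secret sharing, and dropout recovery are omitted here, and the mask derivation is a stand-in assumption, but the per-client work over all pairs is visible:

```python
# Simplified SecAgg-style pairwise masking: every pair of clients derives a
# shared mask that cancels in the server-side sum. (Assumptions: no dropouts,
# no double masking, no secret sharing.)
import numpy as np

DIM, N = 4, 5  # model size and number of clients

def pair_mask(i, j, dim):
    # Stand-in for a mask expanded from a pairwise shared secret (e.g., a PRG
    # seeded by a Diffie-Hellman key); the fixed seed keeps the demo deterministic.
    seed = hash((min(i, j), max(i, j))) % 2**32
    return np.random.default_rng(seed).normal(size=dim)

rng = np.random.default_rng(0)
updates = [rng.normal(size=DIM) for _ in range(N)]

masked = []
for i in range(N):
    y = updates[i].copy()
    for j in range(N):        # each client handles n-1 pairwise masks,
        if j == i:            # so per-client work grows with network size
            continue
        sign = 1.0 if i < j else -1.0
        y += sign * pair_mask(i, j, DIM)
    masked.append(y)

# Masks cancel: the server recovers the exact sum without seeing any update.
assert np.allclose(sum(masked), sum(updates))
```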
To address this issue, we propose ACCESS-FL, a communication- and computation-efficient secure aggregation method designed for honest-but-curious scenarios in stable FL networks with a limited client dropout rate. ACCESS-FL reduces the computation and communication cost to a constant level, independent of the network size, by generating shared secrets between only two clients, eliminating the need for double masking, secret sharing, and cryptographic computations.
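Reading only this description, the constant-cost idea might look roughly like the sketch below, in which each client masks its update with a secret shared with exactly one partner. The pairing scheme and mask derivation here are illustrative assumptions, not the actual ACCESS-FL design:

```python
# A loose sketch of the stated idea: each client shares a secret with only ONE
# other client, so masking work per client is constant regardless of network
# size. Pairing and mask derivation are hypothetical placeholders.
import numpy as np

DIM, N = 4, 6  # N assumed even so clients pair up cleanly

def shared_mask(i, j, dim):
    # Stand-in for a mask derived from a two-party shared secret.
    seed = hash((min(i, j), max(i, j))) % 2**32
    return np.random.default_rng(seed).normal(size=dim)

rng = np.random.default_rng(1)
updates = [rng.normal(size=DIM) for _ in range(N)]

masked = []
for i in range(N):
    partner = i ^ 1  # fixed pairing: (0,1), (2,3), (4,5) -- constant work per client
    sign = 1.0 if i < partner else -1.0
    masked.append(updates[i] + sign * shared_mask(i, partner, DIM))

# The two masks in each pair cancel, and the server still recovers the exact sum.
assert np.allclose(sum(masked), sum(updates))
```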
To evaluate the performance of ACCESS-FL, we conduct experiments on the MNIST, FMNIST, and CIFAR datasets. The results demonstrate that our method significantly reduces computation and communication overhead compared to the state-of-the-art methods SecAgg and SecAgg+.