ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks

Niousha Nazemi, Omid Tavallaie, Shuaijun Chen, Anna Maria Mandalario, Kanchana Thilakarathna, Ralph Holz, Hamed Haddadi, Albert Y. Zomaya
{"title":"ACCESS-FL:在稳定的联合学习网络中实现高效安全聚合的敏捷通信和计算","authors":"Niousha Nazemi, Omid Tavallaie, Shuaijun Chen, Anna Maria Mandalario, Kanchana Thilakarathna, Ralph Holz, Hamed Haddadi, Albert Y. Zomaya","doi":"arxiv-2409.01722","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL) is a promising distributed learning framework\ndesigned for privacy-aware applications. FL trains models on client devices\nwithout sharing the client's data and generates a global model on a server by\naggregating model updates. Traditional FL approaches risk exposing sensitive\nclient data when plain model updates are transmitted to the server, making them\nvulnerable to security threats such as model inversion attacks where the server\ncan infer the client's original training data from monitoring the changes of\nthe trained model in different rounds. Google's Secure Aggregation (SecAgg)\nprotocol addresses this threat by employing a double-masking technique, secret\nsharing, and cryptography computations in honest-but-curious and adversarial\nscenarios with client dropouts. However, in scenarios without the presence of\nan active adversary, the computational and communication cost of SecAgg\nsignificantly increases by growing the number of clients. To address this\nissue, in this paper, we propose ACCESS-FL, a\ncommunication-and-computation-efficient secure aggregation method designed for\nhonest-but-curious scenarios in stable FL networks with a limited rate of\nclient dropout. ACCESS-FL reduces the computation/communication cost to a\nconstant level (independent of the network size) by generating shared secrets\nbetween only two clients and eliminating the need for double masking, secret\nsharing, and cryptography computations. To evaluate the performance of\nACCESS-FL, we conduct experiments using the MNIST, FMNIST, and CIFAR datasets\nto verify the performance of our proposed method. The evaluation results\ndemonstrate that our proposed method significantly reduces computation and\ncommunication overhead compared to state-of-the-art methods, SecAgg and\nSecAgg+.","PeriodicalId":501422,"journal":{"name":"arXiv - CS - Distributed, Parallel, and Cluster Computing","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks\",\"authors\":\"Niousha Nazemi, Omid Tavallaie, Shuaijun Chen, Anna Maria Mandalario, Kanchana Thilakarathna, Ralph Holz, Hamed Haddadi, Albert Y. Zomaya\",\"doi\":\"arxiv-2409.01722\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated Learning (FL) is a promising distributed learning framework\\ndesigned for privacy-aware applications. FL trains models on client devices\\nwithout sharing the client's data and generates a global model on a server by\\naggregating model updates. Traditional FL approaches risk exposing sensitive\\nclient data when plain model updates are transmitted to the server, making them\\nvulnerable to security threats such as model inversion attacks where the server\\ncan infer the client's original training data from monitoring the changes of\\nthe trained model in different rounds. 
Google's Secure Aggregation (SecAgg)\\nprotocol addresses this threat by employing a double-masking technique, secret\\nsharing, and cryptography computations in honest-but-curious and adversarial\\nscenarios with client dropouts. However, in scenarios without the presence of\\nan active adversary, the computational and communication cost of SecAgg\\nsignificantly increases by growing the number of clients. To address this\\nissue, in this paper, we propose ACCESS-FL, a\\ncommunication-and-computation-efficient secure aggregation method designed for\\nhonest-but-curious scenarios in stable FL networks with a limited rate of\\nclient dropout. ACCESS-FL reduces the computation/communication cost to a\\nconstant level (independent of the network size) by generating shared secrets\\nbetween only two clients and eliminating the need for double masking, secret\\nsharing, and cryptography computations. To evaluate the performance of\\nACCESS-FL, we conduct experiments using the MNIST, FMNIST, and CIFAR datasets\\nto verify the performance of our proposed method. The evaluation results\\ndemonstrate that our proposed method significantly reduces computation and\\ncommunication overhead compared to state-of-the-art methods, SecAgg and\\nSecAgg+.\",\"PeriodicalId\":501422,\"journal\":{\"name\":\"arXiv - CS - Distributed, Parallel, and Cluster Computing\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Distributed, Parallel, and Cluster Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.01722\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Distributed, Parallel, and Cluster Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.01722","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Federated Learning (FL) is a promising distributed learning framework designed for privacy-aware applications. FL trains models on client devices without sharing the clients' data and builds a global model on a server by aggregating model updates. Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server, leaving them vulnerable to security threats such as model inversion attacks, in which the server infers a client's original training data by monitoring changes in the trained model across rounds. Google's Secure Aggregation (SecAgg) protocol addresses this threat with a double-masking technique, secret sharing, and cryptographic computations, covering both honest-but-curious and adversarial scenarios with client dropouts. However, even without an active adversary, the computational and communication cost of SecAgg increases significantly as the number of clients grows. To address this issue, we propose ACCESS-FL, a communication- and computation-efficient secure aggregation method designed for honest-but-curious scenarios in stable FL networks with a limited rate of client dropout. ACCESS-FL reduces the computation and communication cost to a constant level (independent of the network size) by generating shared secrets between only two clients, eliminating the need for double masking, secret sharing, and cryptographic computations. To evaluate ACCESS-FL, we conduct experiments on the MNIST, FMNIST, and CIFAR datasets. The results demonstrate that our method significantly reduces computation and communication overhead compared to the state-of-the-art methods SecAgg and SecAgg+.
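To make the masking argument concrete, below is a minimal Python sketch of pairwise mask cancellation, the mechanism that lets a server recover the sum of model updates without seeing any individual update. This is an illustration of the general idea only, not the paper's actual protocol; the names (`mask_from_seed`, the fixed `shared_seed`, the toy updates) are assumptions made for the example.

```python
# Minimal sketch of pairwise mask cancellation (illustrative, not the
# ACCESS-FL protocol itself). Two paired clients derive masks from one
# shared secret; the masks cancel in the server-side sum.
import numpy as np

def mask_from_seed(seed: int, shape) -> np.ndarray:
    """Expand a shared seed into a pseudorandom mask (stand-in for a PRG)."""
    return np.random.default_rng(seed).standard_normal(shape)

# Toy model updates for the two paired clients (hypothetical values).
shape = (4,)
update_a = np.array([0.1, -0.2, 0.3, 0.0])
update_b = np.array([0.05, 0.1, -0.1, 0.2])

# The pair agrees on one shared secret; one client adds the mask and
# the other subtracts it, so each transmitted update looks random alone.
shared_seed = 42
masked_a = update_a + mask_from_seed(shared_seed, shape)
masked_b = update_b - mask_from_seed(shared_seed, shape)

# The server only ever sees masked updates; their sum is the true sum.
aggregate = masked_a + masked_b
assert np.allclose(aggregate, update_a + update_b)
print(aggregate)
```

Because each client coordinates with only one partner to derive its mask, the per-client masking cost is independent of the network size, which matches the intuition behind ACCESS-FL's constant-overhead claim; tolerating the limited client dropout mentioned in the abstract is what the full protocol adds on top of this basic cancellation.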