LVSA: Lightweight and verifiable secure aggregation for federated learning

IF 5.5 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Gongli Li, Zhe Zhang, Ruiying Du
{"title":"LVSA:用于联邦学习的轻量级和可验证的安全聚合","authors":"Gongli Li ,&nbsp;Zhe Zhang ,&nbsp;Ruiying Du","doi":"10.1016/j.neucom.2025.130712","DOIUrl":null,"url":null,"abstract":"<div><div>Federated learning (FL) is a decentralized machine learning paradigm that facilitates collaborative training of global models through the exchange of local gradients while maintaining the confidentiality of raw data. However, recent studies have identified gradient leakage attacks and server-forged aggregation results as significant threats to user data privacy. This issue is especially pronounced in large-scale mobile devices (e.g., tablets, smartphones, and smartwatches), which store highly sensitive user data, making the protection of such data critical. In addition, it is essential to consider the limitations of mobile devices, such as potential power outages, disconnections, and their limited computational and communication resources. To address these challenges, LVSA, a lightweight and verifiable secure aggregation scheme is proposed. LVSA employs a non-interactive masking scheme to protect gradient privacy and allows any user to drop out at any stage. Moreover, a lightweight verification method based on the inner product is introduced, which eliminates complex computations and is more suitable for devices with limited computational resources. Security analysis shows that LVSA not only protects users’ original gradients from being leaked, but also verifies the correctness of the aggregation results. Experimental analysis shows that when the gradient dimension reaches <span><math><msup><mrow><mn>10</mn></mrow><mrow><mn>6</mn></mrow></msup></math></span>, the computation time in LVSA is two orders of magnitude faster than the most advanced existing schemes. In addition, the communication overhead for users is reduced by more than eight times compared to other schemes offering the same functionality.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"648 ","pages":"Article 130712"},"PeriodicalIF":5.5000,"publicationDate":"2025-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"LVSA: Lightweight and verifiable secure aggregation for federated learning\",\"authors\":\"Gongli Li ,&nbsp;Zhe Zhang ,&nbsp;Ruiying Du\",\"doi\":\"10.1016/j.neucom.2025.130712\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Federated learning (FL) is a decentralized machine learning paradigm that facilitates collaborative training of global models through the exchange of local gradients while maintaining the confidentiality of raw data. However, recent studies have identified gradient leakage attacks and server-forged aggregation results as significant threats to user data privacy. This issue is especially pronounced in large-scale mobile devices (e.g., tablets, smartphones, and smartwatches), which store highly sensitive user data, making the protection of such data critical. In addition, it is essential to consider the limitations of mobile devices, such as potential power outages, disconnections, and their limited computational and communication resources. To address these challenges, LVSA, a lightweight and verifiable secure aggregation scheme is proposed. LVSA employs a non-interactive masking scheme to protect gradient privacy and allows any user to drop out at any stage. 
Moreover, a lightweight verification method based on the inner product is introduced, which eliminates complex computations and is more suitable for devices with limited computational resources. Security analysis shows that LVSA not only protects users’ original gradients from being leaked, but also verifies the correctness of the aggregation results. Experimental analysis shows that when the gradient dimension reaches <span><math><msup><mrow><mn>10</mn></mrow><mrow><mn>6</mn></mrow></msup></math></span>, the computation time in LVSA is two orders of magnitude faster than the most advanced existing schemes. In addition, the communication overhead for users is reduced by more than eight times compared to other schemes offering the same functionality.</div></div>\",\"PeriodicalId\":19268,\"journal\":{\"name\":\"Neurocomputing\",\"volume\":\"648 \",\"pages\":\"Article 130712\"},\"PeriodicalIF\":5.5000,\"publicationDate\":\"2025-06-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neurocomputing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0925231225013840\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225013840","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Federated learning (FL) is a decentralized machine learning paradigm that facilitates collaborative training of global models through the exchange of local gradients while maintaining the confidentiality of raw data. However, recent studies have identified gradient leakage attacks and server-forged aggregation results as significant threats to user data privacy. This issue is especially pronounced in large-scale deployments of mobile devices (e.g., tablets, smartphones, and smartwatches), which store highly sensitive user data, making the protection of such data critical. In addition, it is essential to consider the limitations of mobile devices, such as potential power outages, disconnections, and their limited computational and communication resources. To address these challenges, LVSA, a lightweight and verifiable secure aggregation scheme, is proposed. LVSA employs a non-interactive masking scheme to protect gradient privacy and allows any user to drop out at any stage. Moreover, a lightweight verification method based on the inner product is introduced, which eliminates complex computations and is better suited to devices with limited computational resources. Security analysis shows that LVSA not only protects users' original gradients from being leaked, but also verifies the correctness of the aggregation results. Experimental analysis shows that when the gradient dimension reaches 10^6, the computation time of LVSA is two orders of magnitude lower than that of the most advanced existing schemes. In addition, the communication overhead for users is reduced by a factor of more than eight compared to other schemes offering the same functionality.
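The abstract only describes the two building blocks at a high level; the paper's concrete masking and verification constructions are not reproduced here. The minimal Python sketch below illustrates the generic ideas such schemes rely on: additive masks that cancel when all users' masked gradients are summed, and a cheap inner-product consistency check on the reported aggregate. The pairwise-mask construction, the challenge-vector check, and all names and parameters are illustrative assumptions, not LVSA's actual protocol.

# Illustrative sketch only: shows (a) pairwise additive masks that cancel
# when every user's masked gradient is summed, and (b) an inner-product
# consistency check on the aggregate. Not the LVSA construction itself.
import numpy as np

rng = np.random.default_rng(0)
DIM, USERS = 8, 4                      # toy gradient dimension and user count

# Each user holds a local gradient (random toy data here).
gradients = [rng.normal(size=DIM) for _ in range(USERS)]

# Pairwise masks: for a pair (i, j) with i < j, user i adds +m_ij and
# user j adds -m_ij, so every mask cancels in the sum over all users.
pair_masks = {(i, j): rng.normal(size=DIM)
              for i in range(USERS) for j in range(i + 1, USERS)}

def masked_update(i):
    """Return user i's gradient hidden under its pairwise masks."""
    masked = gradients[i].copy()
    for j in range(USERS):
        if j == i:
            continue
        m = pair_masks[(min(i, j), max(i, j))]
        masked += m if i < j else -m
    return masked

# The server sums the masked updates; the masks cancel out.
aggregate = sum(masked_update(i) for i in range(USERS))
assert np.allclose(aggregate, sum(gradients))

# Lightweight verification idea: compare the reported aggregate against an
# inner product with a shared random challenge vector, instead of running
# a heavy cryptographic proof over the full gradient vector.
challenge = rng.normal(size=DIM)
expected = sum(float(g @ challenge) for g in gradients)
assert np.isclose(float(aggregate @ challenge), expected)
print("aggregation and inner-product check passed")

In the actual scheme, the masks are generated non-interactively and remain removable when users drop out, and the verification material is produced so that a forged aggregate is detected without users exposing per-gradient information; the sketch above only shows why summation cancels the masks and why a single inner product gives an inexpensive correctness check.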
Source journal: Neurocomputing (Engineering & Technology, Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Annual articles: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.