A gradient inversion attack defense method based on data augmentation

IF 3.5 · CAS Zone 2 (Computer Science) · JCR Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
Yingge Li, Xianlin Wu, Yuwen Chen, Haiyang Yu, Zhen Yang
Applied Intelligence, vol. 55, no. 14. Published 2025-09-09. DOI: 10.1007/s10489-025-06533-y
https://link.springer.com/article/10.1007/s10489-025-06533-y
Citations: 0

Abstract


Gradient inversion attacks pose a significant threat to data privacy in federated learning, enabling malicious adversaries to reconstruct private training data from gradients. Among the various protection strategies, data augmentation-based approaches have emerged as particularly promising. These methods can be seamlessly incorporated into existing federated learning frameworks, offering both efficiency and minimal impact on model accuracy. In this paper, we propose a novel data protection technique that leverages data augmentation methods, specifically CutMix and SaliencyMix. These techniques work by mixing images, which allows for more efficient utilization of training pixels. This, in turn, aids the model in learning more robust and meaningful feature representations, thereby enhancing both the model performance and its resilience to adversarial attacks. To further strengthen data privacy, we integrate these data augmentation methods with data pruning techniques. Our empirical results demonstrate that the proposed approach not only improves the accuracy of federated learning models but also reduces the quality of reconstructed images, offering a higher level of data privacy protection.
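As background on the mixing step the abstract describes, the standard CutMix operation (Yun et al., 2019) pastes a random rectangular patch of one image onto another and interpolates the labels by patch area. The sketch below follows the original CutMix formulation, not necessarily this paper's implementation; the function name and signature are illustrative assumptions.

```python
import numpy as np

def cutmix(image_a, image_b, label_a, label_b, alpha=1.0, rng=None):
    """Sketch of standard CutMix: paste a random rectangular patch of
    image_b onto image_a and mix the labels by patch area.

    Images are (H, W, C) arrays; labels are one-hot vectors.
    """
    rng = rng or np.random.default_rng()
    h, w = image_a.shape[:2]
    lam = rng.beta(alpha, alpha)  # mixing ratio sampled from Beta(alpha, alpha)

    # Choose a box whose area is roughly (1 - lam) of the image,
    # centered at a uniformly random pixel and clipped to the image.
    cut_h = int(h * np.sqrt(1.0 - lam))
    cut_w = int(w * np.sqrt(1.0 - lam))
    cy, cx = int(rng.integers(0, h)), int(rng.integers(0, w))
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)

    mixed = image_a.copy()
    mixed[y1:y2, x1:x2] = image_b[y1:y2, x1:x2]

    # Recompute lambda from the actual (clipped) patch area,
    # then interpolate the one-hot labels accordingly.
    lam_adj = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)
    mixed_label = lam_adj * label_a + (1.0 - lam_adj) * label_b
    return mixed, mixed_label
```

SaliencyMix differs mainly in that the patch is taken from the most salient region of the source image rather than a uniformly random location, so the pasted pixels are more likely to carry class-relevant information.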

Source journal: Applied Intelligence (Engineering & Technology / Computer Science: Artificial Intelligence)
CiteScore: 6.60
Self-citation rate: 20.80%
Articles published: 1361
Review time: 5.9 months
About the journal: With a focus on research in artificial intelligence and neural networks, this journal addresses issues involving solutions of real-life manufacturing, defense, management, government and industrial problems which are too complex to be solved through conventional approaches and require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance. The journal presents new and original research and technological developments, addressing real and complex issues applicable to difficult problems. It provides a medium for exchanging scientific research and technological achievements accomplished by the international community.