Gradient whispering in decentralized federated learning: Covert channel through AI model update paths

IF 3.8 · CAS Tier 2 (Computer Science) · JCR Q2, COMPUTER SCIENCE, INFORMATION SYSTEMS
Chen Liang, Ziqi Wang, Xuan Sun, Thar Baker, Yuanzhang Li, Ning Shi
{"title":"Gradient whispering in decentralized federated learning: Covert channel through AI model update paths","authors":"Chen Liang ,&nbsp;Ziqi Wang ,&nbsp;Xuan Sun ,&nbsp;Thar Baker ,&nbsp;Yuanzhang Li ,&nbsp;Ning Shi","doi":"10.1016/j.jisa.2025.104118","DOIUrl":null,"url":null,"abstract":"<div><div>Federated learning faces significant data privacy challenges, with threats like inference attacks, model inversion attacks, and poisoning attacks. Existing methods struggle to balance privacy, security, and accuracy, resulting in suboptimal performance. Furthermore, many solutions extend training and communication time, increasing costs and reducing overall system efficiency and value. This paper proposes “gradient whispering” covert communication to address these issues. Adjusting gradients in federated learning changes the optimization path while maintaining model efficacy. “Gradient whispering” introduces two embedding schemes: gradient direction-based embedding and gradient magnitude-based embedding, designed to incorporate information during the iterative updates of AI models. These two schemes can be applied independently or in combination to enhance the flexibility of the embedding process. When used together, they further expand the embedding capacity, thereby maximizing the effectiveness of information embedding. MNIST and CIFAR-10 dataset trials demonstrate model accuracy stays stable post-embedding with fluctuations under 0.3%. Two-sample Kolmogorov–Smirnov tests and Kullback–Leibler divergence analysis show no statistical difference between pre- and post-embedding gradient distributions. Peak signal-to-noise ratio values of 40 to 50 indicate a strong similarity between the embedded and original gradients, hiding hidden information and guaranteeing model stability.</div></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"93 ","pages":"Article 104118"},"PeriodicalIF":3.8000,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Information Security and Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2214212625001553","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Federated learning faces significant data privacy challenges, including threats such as inference attacks, model inversion attacks, and poisoning attacks. Existing methods struggle to balance privacy, security, and accuracy, resulting in suboptimal performance. Furthermore, many solutions extend training and communication time, increasing costs and reducing overall system efficiency and value. This paper proposes a “gradient whispering” covert communication scheme to address these issues. Adjusting gradients in federated learning changes the optimization path while maintaining model efficacy. “Gradient whispering” introduces two embedding schemes, gradient direction-based embedding and gradient magnitude-based embedding, designed to incorporate information during the iterative updates of AI models. The two schemes can be applied independently or in combination, which increases the flexibility of the embedding process; when used together, they expand the embedding capacity and thereby maximize the effectiveness of information embedding. Experiments on the MNIST and CIFAR-10 datasets demonstrate that model accuracy remains stable after embedding, with fluctuations under 0.3%. Two-sample Kolmogorov–Smirnov tests and Kullback–Leibler divergence analysis show no statistically significant difference between the pre- and post-embedding gradient distributions. Peak signal-to-noise ratio values of 40 to 50 dB indicate strong similarity between the embedded and original gradients, concealing the hidden information while guaranteeing model stability.
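The abstract does not reproduce the embedding algorithms, so the following is only a minimal sketch of what direction-based and magnitude-based gradient embedding could look like: the sign of a selected gradient component carries one bit, or the component's magnitude is quantized so that its parity carries one bit. The component-selection rule, the quantization step, and the helper names (`embed_direction`, `embed_magnitude`, `step`) are illustrative assumptions, not the authors' method.

```python
import numpy as np


def embed_direction(grad, bits, idx):
    """Direction-based embedding (assumed rule): the sign of each selected
    gradient component encodes one bit (positive -> 1, negative -> 0)."""
    g = grad.copy()
    for bit, i in zip(bits, idx):
        g[i] = abs(g[i]) if bit == 1 else -abs(g[i])
    return g


def embed_magnitude(grad, bits, idx, step=1e-4):
    """Magnitude-based embedding (assumed rule): quantize the magnitude of each
    selected component so its parity in units of `step` encodes one bit."""
    g = grad.copy()
    for bit, i in zip(bits, idx):
        q = int(round(abs(g[i]) / step))
        if q % 2 != bit:
            q += 1                      # adjust parity to match the payload bit
        sign = 1.0 if g[i] >= 0 else -1.0
        g[i] = sign * q * step
    return g


# Toy usage on a dummy gradient vector (not real model gradients).
rng = np.random.default_rng(0)
grad = rng.normal(scale=0.01, size=1000)
idx = rng.choice(grad.size, size=8, replace=False)
payload = [1, 0, 1, 1, 0, 0, 1, 0]
stego_dir = embed_direction(grad, payload, idx)
stego_mag = embed_magnitude(grad, payload, idx)
```

A receiver observing the client's model updates would read the bits back from the same component indices; any such rule must keep the perturbation small enough that, as the abstract claims, the post-embedding gradient distribution stays statistically indistinguishable from the original one.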
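The detectability metrics named in the abstract (two-sample Kolmogorov–Smirnov test, Kullback–Leibler divergence, PSNR) can be reproduced with standard tooling. The snippet below is an illustrative check that assumes the gradient distributions are compared via histograms; it is not the paper's exact evaluation protocol, and the dummy data only stands in for real pre- and post-embedding gradients.

```python
import numpy as np
from scipy.stats import ks_2samp, entropy


def kl_divergence(p_samples, q_samples, bins=100):
    """KL divergence between histogram estimates of two gradient distributions."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p_hist, _ = np.histogram(p_samples, bins=bins, range=(lo, hi), density=True)
    q_hist, _ = np.histogram(q_samples, bins=bins, range=(lo, hi), density=True)
    eps = 1e-12                         # avoid log(0) on empty bins
    return entropy(p_hist + eps, q_hist + eps)


def psnr(original, embedded):
    """Peak signal-to-noise ratio between original and embedded gradients, in dB."""
    mse = np.mean((original - embedded) ** 2)
    if mse == 0:
        return float("inf")
    peak = np.max(np.abs(original))
    return 10.0 * np.log10(peak ** 2 / mse)


# Dummy data standing in for pre- and post-embedding gradients.
rng = np.random.default_rng(1)
clean = rng.normal(scale=0.01, size=10_000)
stego = clean.copy()
stego[:16] += rng.normal(scale=1e-4, size=16)   # tiny embedding perturbation

stat, p_value = ks_2samp(clean, stego)          # large p-value: distributions match
print(f"KS p-value={p_value:.3f}  KL={kl_divergence(clean, stego):.2e}  "
      f"PSNR={psnr(clean, stego):.1f} dB")
```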
Source journal

Journal of Information Security and Applications (Computer Science – Computer Networks and Communications)

CiteScore: 10.90
Self-citation rate: 5.40%
Articles per year: 206
Review time: 56 days

Journal introduction: Journal of Information Security and Applications (JISA) focuses on the original research and practice-driven applications with relevance to information security and applications. JISA provides a common linkage between a vibrant scientific and research community and industry professionals by offering a clear view on modern problems and challenges in information security, as well as identifying promising scientific and "best-practice" solutions. JISA issues offer a balance between original research work and innovative industrial approaches by internationally renowned information security experts and researchers.