Anti-traceable backdoor: Blaming malicious poisoning on innocents in non-IID federated learning

Impact Factor 3.7 · CAS Tier 2 (Computer Science) · JCR Q2 (Computer Science, Information Systems)
Bei Chen, Gaolei Li, Haochen Mei, Jianhua Li, Mingzhe Chen, Mérouane Debbah
DOI: 10.1016/j.jisa.2025.104240
Journal of Information Security and Applications, Volume 94, Article 104240
Published: 2025-09-26
Citations: 0

Abstract

Backdoor attacks pose a serious threat to federated learning (FL), in which victim models are made susceptible to specific triggers. To evade defenses, a smart attacker actively camouflages its behavior profile (e.g., through trigger invisibility and malicious collusion). However, in the more practical scenario where the label distribution on each client is heterogeneous, such camouflage is neither highly deceptive nor durable, and malicious clients can be precisely identified by a blanket benchmark comparison. In this paper, we introduce an attack vector that shifts the blame for malicious poisoning onto innocent clients during backdoor tracing, and we propose a novel Anti-Traceable Backdoor Attack (ATBA) framework. First, we devise a progressive generative adversarial data inference scheme that compensates for classes missing from malicious clients, progressively improving the quality of the inferred data through fictitious poisoning. Second, we present a trigger-enhanced specific backdoor learning mechanism that selectively targets vulnerable classes of benign clients to resist backdoor tracing and adaptively optimizes triggers to adjust the resulting backdoor behavior. Finally, we design a meta-detection-and-filtering defense strategy that aims to distinguish fictitiously poisoned updates. Extensive experiments on three benchmark datasets validate ATBA's attack effectiveness, anti-traceability, and robustness, as well as the feasibility of the corresponding defense method.
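To make the threat model concrete, the sketch below shows the basic trigger-poisoning step that backdoor attacks on FL (including ATBA-style attacks) build on: a malicious client stamps a small trigger patch onto a fraction of its local samples and relabels them to the attacker's target class. This is a minimal illustration, not the paper's method; all names (`stamp_trigger`, `poison_batch`) and parameters are hypothetical.

```python
def stamp_trigger(image, patch_value=1.0, patch_size=2):
    """Overwrite a patch_size x patch_size top-left corner of a 2-D image
    (list of rows) with the trigger pattern."""
    poisoned = [row[:] for row in image]  # copy so the clean sample survives
    for r in range(patch_size):
        for c in range(patch_size):
            poisoned[r][c] = patch_value
    return poisoned

def poison_batch(images, labels, target_class, poison_rate=0.5):
    """Return a batch where the first poison_rate fraction of samples carries
    the trigger and is relabeled to the backdoor target class."""
    n_poison = int(len(images) * poison_rate)
    out_images, out_labels = [], []
    for i, (img, lbl) in enumerate(zip(images, labels)):
        if i < n_poison:
            out_images.append(stamp_trigger(img))
            out_labels.append(target_class)  # flip label to the backdoor target
        else:
            out_images.append(img)
            out_labels.append(lbl)
    return out_images, out_labels

# Tiny demo: two 3x3 "images", poison half the batch.
batch = [[[0.0] * 3 for _ in range(3)] for _ in range(2)]
imgs, lbls = poison_batch(batch, labels=[3, 5], target_class=7, poison_rate=0.5)
print(lbls)        # → [7, 5]: first label flipped to the target class
print(imgs[0][0])  # → [1.0, 1.0, 0.0]: trigger patch visible in the first row
```

A model trained on such a mixture behaves normally on clean inputs but predicts the target class whenever the trigger patch is present; ATBA's contribution lies in making the source of such poisoning untraceable, which this sketch does not attempt to show.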
Source journal

Journal of Information Security and Applications (Computer Science: Computer Networks and Communications)
CiteScore: 10.90
Self-citation rate: 5.40%
Annual articles: 206
Review time: 56 days
Aims and scope: Journal of Information Security and Applications (JISA) focuses on original research and practice-driven applications relevant to information security and its applications. JISA links a vibrant scientific and research community with industry professionals by offering a clear view of modern problems and challenges in information security and by identifying promising scientific and best-practice solutions. JISA issues balance original research with innovative industrial approaches by internationally renowned information security experts and researchers.