Anti-traceable backdoor: Blaming malicious poisoning on innocents in non-IID federated learning
{"title":"反溯源后门:指责非iid联合学习中无辜者的恶意中毒","authors":"Bei Chen , Gaolei Li , Haochen Mei , Jianhua Li , Mingzhe Chen , Mérouane Debbah","doi":"10.1016/j.jisa.2025.104240","DOIUrl":null,"url":null,"abstract":"<div><div>Backdoor attacks pose an extremely serious threat to federated learning (FL), where victim models are susceptible to specific triggers. To counter the defense, a smart attacker will forcefully and actively camouflage its behavior profiles (i.e., trigger invisibility and malicious collusion). However, in a more practical scenario where the label distribution on each client is heterogeneous, such camouflage is not highly deceptive and durable, and also malicious clients can be precisely identified by a blanket benchmark comparison. In this paper, we introduce an attack vector that blames innocent clients for malicious poisoning in backdoor tracing and motivates a novel Anti-Traceable Backdoor Attack (ATBA) framework. First, we devise a <em>progressive generative adversarial data inference</em> scheme to compensate missing classes for malicious clients, progressively improving the quality of inferred data through fictitious poisoning. Subsequently, we present a <em>trigger-enhanced specific backdoor learning</em> mechanism, selectively specifying vulnerable classes from benign clients to resist backdoor tracing and adaptively optimizing triggers to adjust specific backdoor behaviors. Additionally, we also design a <em>meta-detection-and-filtering defense</em> strategy, which aims to distinguish fictitiously-poisoned updates. Extensive experiments over three benchmark datasets validate the proposed ATBA’s attack effectiveness, anti-traceability, robustness, and the feasibility of the corresponding defense method.</div></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"94 ","pages":"Article 104240"},"PeriodicalIF":3.7000,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Anti-traceable backdoor: Blaming malicious poisoning on innocents in non-IID federated learning\",\"authors\":\"Bei Chen , Gaolei Li , Haochen Mei , Jianhua Li , Mingzhe Chen , Mérouane Debbah\",\"doi\":\"10.1016/j.jisa.2025.104240\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Backdoor attacks pose an extremely serious threat to federated learning (FL), where victim models are susceptible to specific triggers. To counter the defense, a smart attacker will forcefully and actively camouflage its behavior profiles (i.e., trigger invisibility and malicious collusion). However, in a more practical scenario where the label distribution on each client is heterogeneous, such camouflage is not highly deceptive and durable, and also malicious clients can be precisely identified by a blanket benchmark comparison. In this paper, we introduce an attack vector that blames innocent clients for malicious poisoning in backdoor tracing and motivates a novel Anti-Traceable Backdoor Attack (ATBA) framework. First, we devise a <em>progressive generative adversarial data inference</em> scheme to compensate missing classes for malicious clients, progressively improving the quality of inferred data through fictitious poisoning. 
Subsequently, we present a <em>trigger-enhanced specific backdoor learning</em> mechanism, selectively specifying vulnerable classes from benign clients to resist backdoor tracing and adaptively optimizing triggers to adjust specific backdoor behaviors. Additionally, we also design a <em>meta-detection-and-filtering defense</em> strategy, which aims to distinguish fictitiously-poisoned updates. Extensive experiments over three benchmark datasets validate the proposed ATBA’s attack effectiveness, anti-traceability, robustness, and the feasibility of the corresponding defense method.</div></div>\",\"PeriodicalId\":48638,\"journal\":{\"name\":\"Journal of Information Security and Applications\",\"volume\":\"94 \",\"pages\":\"Article 104240\"},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2025-09-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Information Security and Applications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2214212625002777\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Information Security and Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2214212625002777","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Backdoor attacks pose an extremely serious threat to federated learning (FL), where victim models are susceptible to specific triggers. To counter defenses, a smart attacker will actively camouflage its behavior profile (i.e., through trigger invisibility and malicious collusion). However, in the more practical scenario where the label distribution on each client is heterogeneous, such camouflage is neither highly deceptive nor durable, and malicious clients can be precisely identified through a blanket benchmark comparison. In this paper, we introduce an attack vector that blames innocent clients for malicious poisoning during backdoor tracing, motivating a novel Anti-Traceable Backdoor Attack (ATBA) framework. First, we devise a progressive generative adversarial data inference scheme that compensates for the classes missing from malicious clients’ local data, progressively improving the quality of the inferred data through fictitious poisoning. Subsequently, we present a trigger-enhanced specific backdoor learning mechanism that selectively targets vulnerable classes on benign clients to resist backdoor tracing and adaptively optimizes triggers to adjust specific backdoor behaviors. Additionally, we design a meta-detection-and-filtering defense strategy that aims to distinguish fictitiously poisoned updates. Extensive experiments on three benchmark datasets validate the proposed ATBA’s attack effectiveness, anti-traceability, and robustness, as well as the feasibility of the corresponding defense method.
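To make the first component more concrete: a conditional generator is one common way for a client to synthesize samples of label classes it never observes locally. The sketch below is a minimal, hypothetical illustration of that general idea in PyTorch, not the paper's progressive inference scheme; the architecture, the 28x28 image size, and the class id are invented for the example.

```python
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Toy conditional generator: noise + class label -> 28x28 image."""
    def __init__(self, z_dim=64, n_classes=10, img_dim=28 * 28):
        super().__init__()
        self.embed = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(
            nn.Linear(z_dim * 2, 256),
            nn.ReLU(),
            nn.Linear(256, img_dim),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized images
        )

    def forward(self, z, labels):
        # Condition the noise on the desired (locally missing) class.
        h = torch.cat([z, self.embed(labels)], dim=1)
        return self.net(h).view(-1, 1, 28, 28)

# Synthesize 16 samples of a class this client never observed locally.
g = CondGenerator()
z = torch.randn(16, 64)
missing_class = torch.full((16,), 7, dtype=torch.long)  # hypothetical class id
fake_images = g(z, missing_class)  # shape (16, 1, 28, 28)
```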
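The second component adaptively optimizes triggers against a trained model. Below is a minimal sketch of generic adaptive trigger optimization, assuming a PyTorch image classifier and an additive bottom-right patch; both are assumptions for illustration, since the abstract does not specify the paper's trigger design.

```python
import torch
import torch.nn.functional as F

def optimize_trigger(model, loader, target_class, steps=50, lr=0.1, eps=0.2):
    """Learn a small additive corner patch that pushes stamped inputs
    toward `target_class`. Illustrative sketch only."""
    device = next(model.parameters()).device
    for p in model.parameters():
        p.requires_grad_(False)  # optimize the trigger, not the weights
    trigger = torch.zeros(1, 1, 8, 8, device=device, requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    model.eval()
    for _ in range(steps):
        for x, _ in loader:
            x = x.to(device)
            stamped = x.clone()
            # Stamp the trigger into the bottom-right 8x8 corner.
            stamped[:, :, -8:, -8:] = (stamped[:, :, -8:, -8:] + trigger).clamp(0, 1)
            target = torch.full((x.size(0),), target_class,
                                dtype=torch.long, device=device)
            loss = F.cross_entropy(model(stamped), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                trigger.clamp_(-eps, eps)  # keep the perturbation subtle
    return trigger.detach()
```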
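For the defense, the abstract only states the goal of distinguishing fictitiously poisoned updates. A standard server-side building block for this kind of filtering is to flag updates whose direction disagrees with the majority; the sketch below assumes flattened update vectors and a cosine-similarity criterion, neither of which is taken from the paper.

```python
import torch
import torch.nn.functional as F

def filter_updates(updates, threshold=0.0):
    """Keep client updates whose mean cosine similarity to the other
    clients' updates exceeds `threshold`. Illustrative sketch only."""
    stacked = torch.stack(updates)   # (n_clients, dim)
    normed = F.normalize(stacked, dim=1)
    sims = normed @ normed.t()       # pairwise cosine similarities
    n = sims.size(0)
    # Mean similarity to the *other* clients (drop the self-similarity of 1).
    mean_sims = (sims.sum(dim=1) - 1.0) / (n - 1)
    keep = mean_sims > threshold
    kept = [u for u, k in zip(updates, keep) if k]
    return kept, keep

# Example: five similar benign updates and one opposing outlier.
base = torch.randn(1000)
updates = [base + 0.1 * torch.randn(1000) for _ in range(5)]
updates.append(-base)  # hypothetical anomalous update
kept, mask = filter_updates(updates)  # the anomalous update is filtered out
```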
Journal Introduction:
Journal of Information Security and Applications (JISA) focuses on original research and practice-driven applications with relevance to information security and applications. JISA provides a common linkage between a vibrant scientific and research community and industry professionals by offering a clear view of modern problems and challenges in information security, as well as identifying promising scientific and "best-practice" solutions. JISA issues offer a balance between original research work and innovative industrial approaches by internationally renowned information security experts and researchers.