CapsuleBD: A Backdoor Attack Method Against Federated Learning Under Heterogeneous Models
Yuying Liao; Xuechen Zhao; Bin Zhou; Yanyi Huang
DOI: 10.1109/TIFS.2025.3556346
IEEE Transactions on Information Forensics and Security, vol. 20, pp. 4071-4086, published 2025-04-02.
https://ieeexplore.ieee.org/document/10947558/
Citations: 0
Abstract
Federated learning under heterogeneous models is an innovative approach that relaxes vanilla federated learning's requirement of a consistent model architecture, better accommodating heterogeneous data distributions and hardware resource constraints in mobile computing scenarios. While significant attention has been given to backdoor risks in federated learning, their impact under heterogeneous models, where devices contribute models with varying structures, remains insufficiently investigated. Because the adversary can manipulate fewer benign local-model neurons through the global model, the attack surface shrinks. To address this issue, we propose a white-box multi-target backdoor attack method, CapsuleBD, against heterogeneous federated learning. Specifically, we design a model decoupling method that separates the benign and malicious task training pipelines through weight reassignment. The model responsible for the benign tasks is structurally larger than the malicious one, resembling a capsule that encapsulates a harmful substance and impacts multiple heterogeneous models. Our comprehensive experiments demonstrate the effectiveness of CapsuleBD in seamlessly embedding triggers into heterogeneous local models, sustaining a remarkable 99.5% average attack success rate against all benign users even with a 50% reduction in the attack space.
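The weight-reassignment idea described above can be illustrated with a minimal sketch. This is not the authors' implementation; the partitioning scheme (a fixed top-left sub-block of size k, and the helper name `split_weights`) is a hypothetical choice used only to show how a smaller "malicious" subnetwork could be carved out of a larger benign weight matrix so the two tasks are trained through separate pipelines.

```python
import numpy as np

def split_weights(W, k):
    """Partition a benign layer W into a k x k 'malicious' sub-block
    (hypothetical placement) and a mask over the remaining benign-only
    weights, so each part can be updated by its own training pipeline."""
    malicious = W[:k, :k]                 # sub-block reserved for the backdoor task
    benign_mask = np.ones_like(W, dtype=bool)
    benign_mask[:k, :k] = False           # remaining positions serve the benign task
    return malicious, benign_mask

W = np.arange(16, dtype=float).reshape(4, 4)
mal, mask = split_weights(W, 2)
print(mal.shape)   # (2, 2)
print(mask.sum())  # 12 benign-only positions
```

In this toy setting, the larger benign matrix plays the role of the "capsule": gradients for the backdoor objective would be applied only inside the sub-block, while benign updates touch only the masked-out remainder.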
Journal description:
The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, and surveillance, as well as systems applications that incorporate these features.