A New Data-Free Backdoor Removal Method via Adversarial Self-Knowledge Distillation

IF 8.9 · CAS Tier 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, INFORMATION SYSTEMS
Xuexiang Li;Yafei Gao;Minglin Liu;Xu Zhou;Xianfu Chen;Celimuge Wu;Jie Li
DOI: 10.1109/JIOT.2024.3520642
Journal: IEEE Internet of Things Journal, vol. 12, no. 9, pp. 12267-12277
Published: 2024-12-19 (Journal Article)
Publisher page: https://ieeexplore.ieee.org/document/10810368/
Code: https://github.com/gaoyafeiyoo/ADBR
Citations: 0

Abstract

In the context of Internet of Things edge devices, pretrained models are often sourced directly from cloud computing platforms due to the unavailability of training data. This lack of access during the training phase makes these models susceptible to backdoor attacks. To address this challenge, we introduce a novel data-free backdoor removal method that operates effectively even when only the poisoned model is accessible. Our innovative approach employs two end-to-end generators with identical architectures to create both clean and poisoned samples. These samples are crucial for transferring knowledge from the teacher model—the fixed poisoned model—to the student model, which is initialized with the poisoned model. Our method utilizes a channel shuffling technique during the distillation process to disrupt and eliminate the backdoor knowledge embedded in the teacher model. This process involves iterative updates of the generators and meticulous distillation of the student model, leading to efficient backdoor removal. We conducted extensive experiments on five sophisticated backdoor attacks across two benchmark datasets. The results demonstrate that our method not only significantly bolsters the model’s resistance to backdoor attacks but also maintains high recognition accuracy for clean samples, thereby outperforming existing methods. Additionally, the code for our method is available at https://github.com/gaoyafeiyoo/ADBR.
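The abstract names two generic ingredients: a channel-shuffling operation applied during distillation to disrupt backdoor-specific features, and knowledge transfer from a teacher to a student via soft outputs. The paper and its released code define the actual method; the NumPy sketch below only illustrates these two standard operations in isolation, and every function name here is illustrative rather than taken from the authors' implementation.

```python
import numpy as np

def channel_shuffle(features: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly permute the channel axis of a (C, H, W) feature map.

    Permuting channels perturbs channel-specific activation patterns, which is
    the general intuition behind disrupting backdoor-related features while
    leaving the overall activation statistics intact.
    """
    perm = rng.permutation(features.shape[0])
    return features[perm]

def _softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits: np.ndarray,
            teacher_logits: np.ndarray,
            temperature: float = 4.0) -> float:
    """KL divergence between temperature-softened teacher and student outputs,
    the standard knowledge-distillation objective (Hinton et al. style)."""
    p = _softmax(teacher_logits / temperature)  # teacher soft targets
    q = _softmax(student_logits / temperature)  # student predictions
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
```

Channel shuffling preserves the multiset of channel activations (only their order changes), and the distillation loss is zero exactly when student and teacher produce identical logits; the paper's contribution lies in how these pieces are combined with the two adversarially updated generators.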
Source Journal: IEEE Internet of Things Journal (Computer Science - Information Systems)
CiteScore: 17.60
Self-citation rate: 13.20%
Articles published: 1982
Journal introduction: The IEEE Internet of Things (IoT) Journal publishes articles and review articles covering various aspects of IoT, including IoT system architecture, IoT enabling technologies, IoT communication and networking protocols such as network coding, and IoT services and applications. Topics encompass IoT's impacts on sensor technologies, big data management, and future internet design for applications like smart cities and smart homes. Fields of interest include IoT architecture such as things-centric, data-centric, and service-oriented IoT architecture; IoT enabling technologies and systematic integration such as sensor technologies, big sensor data management, and future Internet design for IoT; IoT services, applications, and test-beds such as IoT service middleware, IoT application programming interfaces (APIs), IoT application design, and IoT trials/experiments; and IoT standardization activities and technology development in different standards development organizations (SDOs) such as IEEE, IETF, ITU, 3GPP, and ETSI.