Invisible Poison: A Blackbox Clean Label Backdoor Attack to Deep Neural Networks

R. Ning, Jiang Li, Chunsheng Xin, Hongyi Wu
{"title":"Invisible Poison: A Blackbox Clean Label Backdoor Attack to Deep Neural Networks","authors":"R. Ning, Jiang Li, Chunsheng Xin, Hongyi Wu","doi":"10.1109/INFOCOM42981.2021.9488902","DOIUrl":null,"url":null,"abstract":"This paper reports a new clean-label data poisoning backdoor attack, named Invisible Poison, which stealthily and aggressively plants a backdoor in neural networks. It converts a regular trigger to a noised trigger that can be easily concealed inside images for training NN, with the objective to plant a backdoor that can be later activated by the trigger. Compared with existing data poisoning backdoor attacks, this newfound attack has the following distinct properties. First, it is a blackbox attack, requiring zero-knowledge of the target model. Second, this attack utilizes \"invisible poison\" to achieve stealthiness where the trigger is disguised as ‘noise’, and thus can easily evade human inspection. On the other hand, this noised trigger remains effective in the feature space to poison training data. Third, the attack is practical and aggressive. A backdoor can be effectively planted with a small amount of poisoned data and is robust to most data augmentation methods during training. The attack is fully tested on multiple benchmark datasets including MNIST, Cifar10, and ImageNet10, as well as application specific data sets such as Yahoo Adblocker and GTSRB. Two countermeasures, namely Supervised and Unsupervised Poison Sample Detection, are introduced to defend the attack.","PeriodicalId":293079,"journal":{"name":"IEEE INFOCOM 2021 - IEEE Conference on Computer Communications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"17","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE INFOCOM 2021 - IEEE Conference on Computer Communications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFOCOM42981.2021.9488902","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 17

Abstract

This paper reports a new clean-label data poisoning backdoor attack, named Invisible Poison, which stealthily and aggressively plants a backdoor in neural networks. It converts a regular trigger into a noised trigger that can be easily concealed inside images used to train the neural network, with the objective of planting a backdoor that can later be activated by the trigger. Compared with existing data poisoning backdoor attacks, this new attack has the following distinct properties. First, it is a blackbox attack, requiring zero knowledge of the target model. Second, the attack uses "invisible poison" to achieve stealthiness: the trigger is disguised as "noise" and thus easily evades human inspection, yet the noised trigger remains effective in the feature space for poisoning training data. Third, the attack is practical and aggressive: a backdoor can be planted with a small amount of poisoned data and is robust to most data augmentation methods used during training. The attack is fully tested on multiple benchmark datasets, including MNIST, Cifar10, and ImageNet10, as well as application-specific datasets such as Yahoo Adblocker and GTSRB. Two countermeasures, namely Supervised and Unsupervised Poison Sample Detection, are introduced to defend against the attack.
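To make the clean-label mechanism concrete, below is a minimal sketch of how such an attack could be staged. The abstract does not specify how the trigger is converted into a noise-like pattern, so the pixel-permutation transform, the blending weight `alpha`, and the poisoning `rate` here are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def make_noised_trigger(trigger: np.ndarray, seed: int = 0) -> np.ndarray:
    """Turn a visible trigger into a noise-like pattern.

    The paper's transform keeps the trigger salient in feature space
    while making it look like noise to humans; the exact method is not
    given in the abstract. A fixed random pixel permutation is used
    here purely as a stand-in (assumption).
    """
    rng = np.random.default_rng(seed)
    flat = trigger.reshape(-1)
    perm = rng.permutation(flat.size)  # fixed, deterministic scrambling
    return flat[perm].reshape(trigger.shape)

def poison_clean_label(images: np.ndarray, labels: np.ndarray,
                       noised_trigger: np.ndarray, target_class: int,
                       rate: float = 0.05, alpha: float = 0.1):
    """Blend the noised trigger into a small fraction of target-class
    images. Labels are left untouched (clean-label): each poisoned
    sample still carries its correct label, so label auditing alone
    will not flag it. The trigger must match the image shape.
    """
    idx = np.where(labels == target_class)[0]
    n_poison = max(1, int(rate * idx.size))
    chosen = np.random.choice(idx, n_poison, replace=False)
    poisoned = images.astype(np.float32).copy()
    poisoned[chosen] = np.clip(
        (1 - alpha) * poisoned[chosen] + alpha * noised_trigger, 0, 255)
    return poisoned, chosen
```

At inference time, the attacker would overlay the same noised trigger on any input to steer the backdoored model toward the target class; the victim trains normally on the poisoned set, consistent with the blackbox, zero-knowledge setting described above.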