{"title":"Backdoor Attacks to Deep Learning Models and Countermeasures: A Survey","authors":"Yudong Li;Shigeng Zhang;Weiping Wang;Hong Song","doi":"10.1109/OJCS.2023.3267221","DOIUrl":null,"url":null,"abstract":"Backdoor attacks have severely threatened deep neural network (DNN) models in the past several years. In backdoor attacks, the attackers try to plant hidden backdoors into DNN models, either in the training or inference stage, to mislead the output of the model when the input contains some specified triggers without affecting the prediction of normal inputs not containing the triggers. As a rapidly developing topic, numerous works on designing various backdoor attacks and developing techniques to defend against such attacks have been proposed in recent years. However, a comprehensive and holistic overview of backdoor attacks and countermeasures is still missing. In this paper, we provide a systematic overview of the design of backdoor attacks and the defense strategies to defend against backdoor attacks, covering the latest published works. We review representative backdoor attacks and defense strategies in both the computer vision domain and other domains, discuss their pros and cons, and make comparisons among them. We outline key challenges to be addressed and potential research directions in the future.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"4 ","pages":"134-146"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8782664/10016900/10102775.pdf","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of the Computer Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10102775/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Backdoor attacks have severely threatened deep neural network (DNN) models in the past several years. In a backdoor attack, the attacker plants a hidden backdoor into a DNN model, either at the training stage or at the inference stage, so that the model is misled whenever the input contains a specified trigger, while its predictions on normal inputs without the trigger remain unaffected. As a rapidly developing topic, backdoor learning has attracted numerous works in recent years on designing various attacks and on techniques to defend against them. However, a comprehensive and holistic overview of backdoor attacks and countermeasures is still missing. In this paper, we provide a systematic overview of the design of backdoor attacks and of the defense strategies against them, covering the latest published works. We review representative backdoor attacks and defenses in the computer vision domain as well as other domains, discuss their pros and cons, and compare them. Finally, we outline key challenges to be addressed and potential future research directions.
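To make the attack mechanism described above concrete, the sketch below (not taken from the paper) shows a minimal BadNets-style training-stage poisoning step in Python/NumPy: a small trigger patch is stamped onto a fraction of the training images and their labels are flipped to an attacker-chosen target class. The function name, trigger size and position, and poison rate are illustrative assumptions, not values from the survey.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.1, seed=0):
    """Illustrative BadNets-style training-stage poisoning (assumed setup).

    Stamps a 4x4 white square (the 'trigger') onto a random fraction of the
    training images and relabels them as `target_label`. A model trained on
    the poisoned set typically behaves normally on clean inputs but predicts
    `target_label` whenever the trigger is present.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    for i in idx:
        images[i, -4:, -4:] = 1.0   # stamp the trigger patch in the corner
        labels[i] = target_label    # flip the label to the target class
    return images, labels, idx

# Usage on a toy batch of 28x28 grayscale images with pixel values in [0, 1].
x = np.random.rand(100, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=100)
x_poisoned, y_poisoned, poisoned_idx = poison_dataset(x, y, target_label=7)
```

At inference time, the corresponding attack simply applies the same patch to a clean test input; defenses surveyed in the paper aim either to detect such poisoned samples and triggered inputs or to remove the implanted backdoor from the trained model.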