{"title":"利用注意力和隐写术进行隐形后门攻击","authors":"Wenmin Chen, Xiaowei Xu, Xiaodong Wang, Huasong Zhou, Zewen Li, Yangming Chen","doi":"10.1016/j.cviu.2024.104208","DOIUrl":null,"url":null,"abstract":"<div><div>Recently, with the development and widespread application of deep neural networks (DNNs), backdoor attacks have posed new security threats to the training process of DNNs. Backdoor attacks on neural networks undermine the security and trustworthiness of DNNs by implanting hidden, unauthorized triggers, leading to benign behavior on clean samples while exhibiting malicious behavior on samples containing backdoor triggers. Existing backdoor attacks typically employ triggers that are sample-agnostic and identical for each sample, resulting in poisoned images that lack naturalness and are ineffective against existing backdoor defenses. To address these issues, this paper proposes a novel stealthy backdoor attack, where the backdoor trigger is dynamic and specific to each sample. Specifically, we leverage spatial attention on images and pre-trained models to obtain dynamic triggers, which are then injected using an encoder–decoder network. The design of the injection network benefits from recent advances in steganography research. To demonstrate the effectiveness of the proposed steganographic network, we design two backdoor attack modes named ASBA and ATBA, where ASBA utilizes the steganographic network for attack, while ATBA is a backdoor attack without steganography. Subsequently, we conducted attacks on Deep Neural Networks (DNNs) using four standard datasets. Our extensive experiments show that ASBA surpasses ATBA in terms of stealthiness and resilience against current defensive measures. Furthermore, both ASBA and ATBA demonstrate superior attack efficiency.</div></div>","PeriodicalId":50633,"journal":{"name":"Computer Vision and Image Understanding","volume":"249 ","pages":"Article 104208"},"PeriodicalIF":4.3000,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Invisible backdoor attack with attention and steganography\",\"authors\":\"Wenmin Chen, Xiaowei Xu, Xiaodong Wang, Huasong Zhou, Zewen Li, Yangming Chen\",\"doi\":\"10.1016/j.cviu.2024.104208\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Recently, with the development and widespread application of deep neural networks (DNNs), backdoor attacks have posed new security threats to the training process of DNNs. Backdoor attacks on neural networks undermine the security and trustworthiness of DNNs by implanting hidden, unauthorized triggers, leading to benign behavior on clean samples while exhibiting malicious behavior on samples containing backdoor triggers. Existing backdoor attacks typically employ triggers that are sample-agnostic and identical for each sample, resulting in poisoned images that lack naturalness and are ineffective against existing backdoor defenses. To address these issues, this paper proposes a novel stealthy backdoor attack, where the backdoor trigger is dynamic and specific to each sample. Specifically, we leverage spatial attention on images and pre-trained models to obtain dynamic triggers, which are then injected using an encoder–decoder network. The design of the injection network benefits from recent advances in steganography research. 
To demonstrate the effectiveness of the proposed steganographic network, we design two backdoor attack modes named ASBA and ATBA, where ASBA utilizes the steganographic network for attack, while ATBA is a backdoor attack without steganography. Subsequently, we conducted attacks on Deep Neural Networks (DNNs) using four standard datasets. Our extensive experiments show that ASBA surpasses ATBA in terms of stealthiness and resilience against current defensive measures. Furthermore, both ASBA and ATBA demonstrate superior attack efficiency.</div></div>\",\"PeriodicalId\":50633,\"journal\":{\"name\":\"Computer Vision and Image Understanding\",\"volume\":\"249 \",\"pages\":\"Article 104208\"},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2024-10-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Vision and Image Understanding\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1077314224002893\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Vision and Image Understanding","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1077314224002893","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Invisible backdoor attack with attention and steganography
With the development and widespread application of deep neural networks (DNNs), backdoor attacks pose new security threats to the DNN training process. A backdoor attack implants hidden, unauthorized triggers into a network so that the model behaves benignly on clean samples but maliciously on samples containing the trigger, undermining the security and trustworthiness of DNNs. Existing backdoor attacks typically employ sample-agnostic triggers that are identical for every sample, producing poisoned images that look unnatural and are easily defeated by existing backdoor defenses. To address these issues, this paper proposes a novel stealthy backdoor attack in which the trigger is dynamic and specific to each sample. Specifically, we leverage spatial attention over images and pre-trained models to obtain dynamic triggers, which are then injected using an encoder–decoder network whose design draws on recent advances in steganography research. To demonstrate the effectiveness of the proposed steganographic network, we design two backdoor attack modes, named ASBA and ATBA: ASBA uses the steganographic network to inject the trigger, while ATBA performs the backdoor attack without steganography. We then attack DNNs trained on four standard datasets. Extensive experiments show that ASBA surpasses ATBA in stealthiness and in resilience against current defensive measures, and that both modes achieve high attack efficiency.
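The abstract describes a two-stage pipeline: a pre-trained model supplies a per-sample spatial attention map, and an encoder–decoder steganographic network injects the attention-modulated trigger into the image. The sketch below illustrates one plausible reading of that flow; the `SteganographicEncoder` architecture, the ResNet-18 backbone, the attention pooling, the residual-blending scheme, and all hyperparameters are hypothetical stand-ins for illustration, not the authors' released implementation.

```python
# Minimal, hypothetical sketch of an attention-driven steganographic
# backdoor (ASBA-style), assuming PyTorch and a torchvision ResNet-18.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class SteganographicEncoder(nn.Module):
    """Toy encoder that hides a trigger pattern inside a cover image."""
    def __init__(self):
        super().__init__()
        # Input: cover image (3 ch) concatenated with trigger (3 ch).
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, cover, trigger):
        residual = self.net(torch.cat([cover, trigger], dim=1))
        # A small residual keeps the poisoned image visually close
        # to the clean cover image (the "invisible" part).
        return (cover + 0.05 * residual).clamp(0, 1)

def spatial_attention(model, images):
    """Sample-specific attention map from a pre-trained feature extractor."""
    feats = model(images)                         # (B, C, h, w) feature maps
    attn = feats.abs().mean(dim=1, keepdim=True)  # channel-wise pooling
    attn = F.interpolate(attn, size=images.shape[-2:], mode="bilinear",
                         align_corners=False)
    # Normalize each map to [0, 1] so it can modulate the trigger.
    b = attn.size(0)
    flat = attn.view(b, -1)
    lo = flat.min(1, keepdim=True).values
    hi = flat.max(1, keepdim=True).values
    flat = (flat - lo) / (hi - lo + 1e-8)
    return flat.view(b, 1, *images.shape[-2:])

# Pre-trained backbone truncated to its convolutional trunk.
backbone = resnet18(weights="IMAGENET1K_V1")
feature_extractor = nn.Sequential(*list(backbone.children())[:-2]).eval()

images = torch.rand(4, 3, 224, 224)              # stand-in clean batch
with torch.no_grad():
    attn = spatial_attention(feature_extractor, images)

# Dynamic trigger: a fixed base pattern modulated by each sample's
# attention map, so every poisoned image carries a different trigger.
base_pattern = torch.rand(1, 3, 224, 224)
dynamic_trigger = attn * base_pattern

encoder = SteganographicEncoder()
poisoned = encoder(images, dynamic_trigger)      # steganographic injection
print(poisoned.shape)                            # torch.Size([4, 3, 224, 224])
```

Under this reading, an ATBA-style attack would skip the encoder and add the attention-modulated trigger to the image directly, which is simpler but leaves a more detectable perturbation.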
Journal Introduction:
The central focus of this journal is the computer analysis of pictorial information. Computer Vision and Image Understanding publishes papers covering all aspects of image analysis from the low-level, iconic processes of early vision to the high-level, symbolic processes of recognition and interpretation. A wide range of topics in the image understanding area is covered, including papers offering insights that differ from predominant views.
Research Areas Include:
• Theory
• Early vision
• Data structures and representations
• Shape
• Range
• Motion
• Matching and recognition
• Architecture and languages
• Vision systems