{"title":"隐藏在众目睽睽之下通过样式转移对图像边界进行对抗性攻击","authors":"Haiyan Zhang;Xinghua Li;Jiawei Tang;Chunlei Peng;Yunwei Wang;Ning Zhang;Yingbin Miao;Ximeng Liu;Kim-Kwang Raymond Choo","doi":"10.1109/TC.2024.3416761","DOIUrl":null,"url":null,"abstract":"Deep Convolution Neural Networks (CNNs) have become the cornerstone of image classification, but the emergence of adversarial image attacks brings serious security risks to CNN-based applications. As a local perturbation attack, the border attack can achieve high success rates by only modifying the pixels around the border of an image, which is a novel attack perspective. However, existing border attacks have shortcomings in stealthiness and are easily detected. In this article, we propose a novel stealthy border attack method based on deep feature alignment. Specifically, we propose a deep feature alignment algorithm based on style transfer to guarantee the stealthiness of adversarial borders. The algorithm takes the deep feature difference between the adversarial and the original borders as the stealthiness loss and thus ensures good stealthiness of the generated adversarial images. To ensure high attack success rates simultaneously, we apply cross entropy to design the targeted attack loss and use margin loss as well as Leaky ReLU to design the untargeted attack loss. Experiments show that the structural similarity between the generated adversarial images and the original images is 8.8% higher than the state-of-art border attack method, indicating that our proposed adversarial images have better stealthiness. At the same time, the success rate of our attack in the face of defense methods is much higher, which is about four times that of the state-of-art border attack under the adversarial training defense.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"73 10","pages":"2405-2419"},"PeriodicalIF":3.6000,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Hiding in Plain Sight: Adversarial Attack via Style Transfer on Image Borders\",\"authors\":\"Haiyan Zhang;Xinghua Li;Jiawei Tang;Chunlei Peng;Yunwei Wang;Ning Zhang;Yingbin Miao;Ximeng Liu;Kim-Kwang Raymond Choo\",\"doi\":\"10.1109/TC.2024.3416761\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep Convolution Neural Networks (CNNs) have become the cornerstone of image classification, but the emergence of adversarial image attacks brings serious security risks to CNN-based applications. As a local perturbation attack, the border attack can achieve high success rates by only modifying the pixels around the border of an image, which is a novel attack perspective. However, existing border attacks have shortcomings in stealthiness and are easily detected. In this article, we propose a novel stealthy border attack method based on deep feature alignment. Specifically, we propose a deep feature alignment algorithm based on style transfer to guarantee the stealthiness of adversarial borders. The algorithm takes the deep feature difference between the adversarial and the original borders as the stealthiness loss and thus ensures good stealthiness of the generated adversarial images. To ensure high attack success rates simultaneously, we apply cross entropy to design the targeted attack loss and use margin loss as well as Leaky ReLU to design the untargeted attack loss. 
Experiments show that the structural similarity between the generated adversarial images and the original images is 8.8% higher than the state-of-art border attack method, indicating that our proposed adversarial images have better stealthiness. At the same time, the success rate of our attack in the face of defense methods is much higher, which is about four times that of the state-of-art border attack under the adversarial training defense.\",\"PeriodicalId\":13087,\"journal\":{\"name\":\"IEEE Transactions on Computers\",\"volume\":\"73 10\",\"pages\":\"2405-2419\"},\"PeriodicalIF\":3.6000,\"publicationDate\":\"2024-06-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Computers\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10565292/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Computers","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10565292/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Hiding in Plain Sight: Adversarial Attack via Style Transfer on Image Borders
Deep Convolutional Neural Networks (CNNs) have become the cornerstone of image classification, but the emergence of adversarial image attacks poses serious security risks to CNN-based applications. As a local perturbation attack, the border attack offers a novel attack perspective: it can achieve high success rates by modifying only the pixels around the border of an image. However, existing border attacks fall short in stealthiness and are easily detected. In this article, we propose a novel stealthy border attack method based on deep feature alignment. Specifically, we propose a deep feature alignment algorithm based on style transfer to guarantee the stealthiness of adversarial borders. The algorithm takes the deep feature difference between the adversarial and original borders as the stealthiness loss, thereby ensuring good stealthiness of the generated adversarial images. To simultaneously ensure high attack success rates, we apply cross entropy to design the targeted attack loss, and use margin loss together with Leaky ReLU to design the untargeted attack loss. Experiments show that the structural similarity between the generated adversarial images and the original images is 8.8% higher than that of the state-of-the-art border attack method, indicating that our adversarial images have better stealthiness. At the same time, the success rate of our attack against defense methods is much higher: under the adversarial training defense, it is about four times that of the state-of-the-art border attack.
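The abstract names three loss components: a style-transfer-based stealthiness loss over deep border features, a cross-entropy loss for targeted attacks, and a margin loss with Leaky ReLU for untargeted attacks. The following minimal PyTorch sketch shows one plausible way such losses could be composed. Every function name, hyperparameter, and the plain L2 feature distance standing in for the paper's style-transfer alignment is an assumption for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
import torchvision

def border_mask(h, w, width, device="cpu"):
    """Binary mask that is 1 on a `width`-pixel frame around the image border
    and 0 in the interior, confining the perturbation to the border."""
    mask = torch.ones(1, 1, h, w, device=device)
    mask[..., width:h - width, width:w - width] = 0
    return mask

def stealthiness_loss(feat_extractor, x_adv, x_orig):
    """Deep feature alignment: distance between deep features of the
    adversarial and original images (plain L2 here, as a stand-in for the
    paper's style-transfer-based alignment of border features)."""
    return F.mse_loss(feat_extractor(x_adv), feat_extractor(x_orig))

def targeted_loss(logits, target):
    """Targeted attack loss: cross entropy toward the attacker's target class."""
    return F.cross_entropy(logits, target)

def untargeted_loss(logits, label, margin=0.1, slope=0.01):
    """Untargeted attack loss: margin between the true-class logit and the
    best other logit, passed through Leaky ReLU so the gradient does not
    vanish once the margin is satisfied (hyperparameters are assumptions)."""
    true = logits.gather(1, label.unsqueeze(1)).squeeze(1)
    other = logits.scatter(1, label.unsqueeze(1), float("-inf")).max(dim=1).values
    return F.leaky_relu(true - other + margin, negative_slope=slope).mean()

# Illustrative single optimization step on a border-only perturbation.
model = torchvision.models.resnet18(weights=None).eval()   # classifier under attack
feat = torch.nn.Sequential(*list(model.children())[:-2])   # deep-feature extractor

x = torch.rand(1, 3, 224, 224)    # stand-in input batch in [0, 1]
y = torch.tensor([0])             # true label(s)
mask = border_mask(224, 224, width=4)
delta = torch.zeros_like(x, requires_grad=True)

x_adv = (x + delta * mask).clamp(0, 1)
loss = untargeted_loss(model(x_adv), y) + 10.0 * stealthiness_loss(feat, x_adv, x)
loss.backward()                   # gradient w.r.t. delta drives the attack step
```

Because `x_adv` differs from `x` only on the masked border, minimizing the combined loss pushes the true-class logit below the runner-up while keeping the deep features of the perturbed image close to the original, which is the trade-off the abstract describes.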
About the Journal:
The IEEE Transactions on Computers is a monthly publication with a wide distribution to researchers, developers, technical managers, and educators in the computer field. It publishes papers on research in areas of current interest to the readers. These areas include, but are not limited to, the following: a) computer organizations and architectures; b) operating systems, software systems, and communication protocols; c) real-time systems and embedded systems; d) digital devices, computer components, and interconnection networks; e) specification, design, prototyping, and testing methods and tools; f) performance, fault tolerance, reliability, security, and testability; g) case studies and experimental and theoretical evaluations; and h) new and important applications and trends.