{"title":"基于目标特征关注和扰动提取的目标检测对抗攻击","authors":"Wei Xue;Xiaoyan Xia;Pengcheng Wan;Ping Zhong;Xiao Zheng","doi":"10.26599/TST.2024.9010029","DOIUrl":null,"url":null,"abstract":"Deep neural networks are commonly used in computer vision tasks, but they are vulnerable to adversarial samples, resulting in poor recognition accuracy. Although traditional algorithms that craft adversarial samples have been effective in attacking classification models, the attacking performance degrades when facing object detection models with more complex structures. To address this issue better, in this paper we first analyze the mechanism of multi-scale feature extraction of object detection models, and then by constructing the object feature-wise attention module and the perturbation extraction module, a novel adversarial sample generation algorithm for attacking detection models is proposed. Specifically, in the first module, based on the multi-scale feature map, we reduce the range of perturbation and improve the stealthiness of adversarial samples by computing the noise distribution in the object region. Then in the second module, we feed the noise distribution into the generative adversarial networks to generate adversarial perturbation with strong attack transferability. By doing so, the proposed approach possesses the ability to better confuse the judgment of detection models. Experiments carried out on the DroneVehicle dataset show that our method is computationally efficient and works well in attacking detection models measured by qualitative analysis and quantitative analysis.","PeriodicalId":48690,"journal":{"name":"Tsinghua Science and Technology","volume":"30 3","pages":"1174-1189"},"PeriodicalIF":6.6000,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10817718","citationCount":"0","resultStr":"{\"title\":\"Adversarial Attack on Object Detection via Object Feature-Wise Attention and Perturbation Extraction\",\"authors\":\"Wei Xue;Xiaoyan Xia;Pengcheng Wan;Ping Zhong;Xiao Zheng\",\"doi\":\"10.26599/TST.2024.9010029\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep neural networks are commonly used in computer vision tasks, but they are vulnerable to adversarial samples, resulting in poor recognition accuracy. Although traditional algorithms that craft adversarial samples have been effective in attacking classification models, the attacking performance degrades when facing object detection models with more complex structures. To address this issue better, in this paper we first analyze the mechanism of multi-scale feature extraction of object detection models, and then by constructing the object feature-wise attention module and the perturbation extraction module, a novel adversarial sample generation algorithm for attacking detection models is proposed. Specifically, in the first module, based on the multi-scale feature map, we reduce the range of perturbation and improve the stealthiness of adversarial samples by computing the noise distribution in the object region. Then in the second module, we feed the noise distribution into the generative adversarial networks to generate adversarial perturbation with strong attack transferability. By doing so, the proposed approach possesses the ability to better confuse the judgment of detection models. 
Experiments carried out on the DroneVehicle dataset show that our method is computationally efficient and works well in attacking detection models measured by qualitative analysis and quantitative analysis.\",\"PeriodicalId\":48690,\"journal\":{\"name\":\"Tsinghua Science and Technology\",\"volume\":\"30 3\",\"pages\":\"1174-1189\"},\"PeriodicalIF\":6.6000,\"publicationDate\":\"2024-12-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10817718\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Tsinghua Science and Technology\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10817718/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Multidisciplinary\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Tsinghua Science and Technology","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10817718/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Multidisciplinary","Score":null,"Total":0}
Abstract:
Deep neural networks are widely used in computer vision tasks, but they are vulnerable to adversarial samples, which sharply degrade recognition accuracy. Although traditional algorithms for crafting adversarial samples attack classification models effectively, their performance degrades against object detection models, whose structures are more complex. To address this issue, in this paper we first analyze the multi-scale feature extraction mechanism of object detection models, and then propose a novel adversarial sample generation algorithm for attacking detection models by constructing an object feature-wise attention module and a perturbation extraction module. Specifically, in the first module, based on the multi-scale feature map, we narrow the perturbed region and improve the stealthiness of adversarial samples by computing the noise distribution in the object region. In the second module, we feed this noise distribution into a generative adversarial network to generate adversarial perturbation with strong attack transferability. As a result, the proposed approach better confuses the judgment of detection models. Experiments on the DroneVehicle dataset show that our method is computationally efficient and attacks detection models effectively, as measured by both qualitative and quantitative analysis.
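To make the two-module pipeline concrete, here is a minimal PyTorch sketch of how an object-region attention map could gate a noise distribution that a GAN-style generator then turns into a bounded perturbation. All class names, layer choices, the shared channel count across scales, and the L-infinity budget `eps` are illustrative assumptions, not the paper's actual implementation; the adversarial training loop against the detector is also omitted.

```python
# A minimal sketch of the pipeline described in the abstract; all details
# below are assumptions for illustration, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ObjectFeatureWiseAttention(nn.Module):
    """Assumed form of the object feature-wise attention module: fuses
    multi-scale feature maps (assumed to share a channel count) into a
    spatial weight map that concentrates noise in object regions."""
    def __init__(self, in_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, multi_scale_feats, out_size):
        # Resize every scale to a common resolution and sum them.
        fused = sum(F.interpolate(f, size=out_size, mode="bilinear",
                                  align_corners=False)
                    for f in multi_scale_feats)
        # 1x1 conv + sigmoid -> per-pixel weight in [0, 1].
        return torch.sigmoid(self.conv(fused))

class PerturbationGenerator(nn.Module):
    """Assumed GAN-style generator: maps the object-weighted noise
    distribution to a bounded adversarial perturbation."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, noise_dist, eps=8 / 255):
        # Scale to an assumed L_inf budget so the change stays imperceptible.
        return eps * self.net(noise_dist)

def craft_adversarial_sample(image, multi_scale_feats, attention, generator):
    """End-to-end: restrict noise to object regions, generate the
    perturbation, and add it to the clean image."""
    attn = attention(multi_scale_feats, out_size=image.shape[-2:])  # (B,1,H,W)
    noise_dist = attn * torch.randn_like(image)   # object-region noise
    perturbation = generator(noise_dist)
    return torch.clamp(image + perturbation, 0.0, 1.0)

# Example with dummy data (two feature scales, 64 channels each):
img = torch.rand(1, 3, 128, 128)
feats = [torch.rand(1, 64, 32, 32), torch.rand(1, 64, 16, 16)]
adv = craft_adversarial_sample(img, feats,
                               ObjectFeatureWiseAttention(64),
                               PerturbationGenerator())
```

Gating the noise with the attention map before generation is what keeps the perturbation localized to object regions, which is the stealthiness property the first module is credited with.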
Journal introduction:
Tsinghua Science and Technology (Tsinghua Sci Technol) started publication in 1996. It is an international academic journal sponsored by Tsinghua University and published bimonthly. The journal presents up-to-date scientific achievements in computer science, electronic engineering, and other IT fields. Contributions from all over the world are welcome.