Adversarial Attack on Object Detection via Object Feature-Wise Attention and Perturbation Extraction

IF 6.6 · CAS Tier 1 (Computer Science) · JCR Q1 (Multidisciplinary)
Wei Xue;Xiaoyan Xia;Pengcheng Wan;Ping Zhong;Xiao Zheng
{"title":"基于目标特征关注和扰动提取的目标检测对抗攻击","authors":"Wei Xue;Xiaoyan Xia;Pengcheng Wan;Ping Zhong;Xiao Zheng","doi":"10.26599/TST.2024.9010029","DOIUrl":null,"url":null,"abstract":"Deep neural networks are commonly used in computer vision tasks, but they are vulnerable to adversarial samples, resulting in poor recognition accuracy. Although traditional algorithms that craft adversarial samples have been effective in attacking classification models, the attacking performance degrades when facing object detection models with more complex structures. To address this issue better, in this paper we first analyze the mechanism of multi-scale feature extraction of object detection models, and then by constructing the object feature-wise attention module and the perturbation extraction module, a novel adversarial sample generation algorithm for attacking detection models is proposed. Specifically, in the first module, based on the multi-scale feature map, we reduce the range of perturbation and improve the stealthiness of adversarial samples by computing the noise distribution in the object region. Then in the second module, we feed the noise distribution into the generative adversarial networks to generate adversarial perturbation with strong attack transferability. By doing so, the proposed approach possesses the ability to better confuse the judgment of detection models. Experiments carried out on the DroneVehicle dataset show that our method is computationally efficient and works well in attacking detection models measured by qualitative analysis and quantitative analysis.","PeriodicalId":48690,"journal":{"name":"Tsinghua Science and Technology","volume":"30 3","pages":"1174-1189"},"PeriodicalIF":6.6000,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10817718","citationCount":"0","resultStr":"{\"title\":\"Adversarial Attack on Object Detection via Object Feature-Wise Attention and Perturbation Extraction\",\"authors\":\"Wei Xue;Xiaoyan Xia;Pengcheng Wan;Ping Zhong;Xiao Zheng\",\"doi\":\"10.26599/TST.2024.9010029\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep neural networks are commonly used in computer vision tasks, but they are vulnerable to adversarial samples, resulting in poor recognition accuracy. Although traditional algorithms that craft adversarial samples have been effective in attacking classification models, the attacking performance degrades when facing object detection models with more complex structures. To address this issue better, in this paper we first analyze the mechanism of multi-scale feature extraction of object detection models, and then by constructing the object feature-wise attention module and the perturbation extraction module, a novel adversarial sample generation algorithm for attacking detection models is proposed. Specifically, in the first module, based on the multi-scale feature map, we reduce the range of perturbation and improve the stealthiness of adversarial samples by computing the noise distribution in the object region. Then in the second module, we feed the noise distribution into the generative adversarial networks to generate adversarial perturbation with strong attack transferability. By doing so, the proposed approach possesses the ability to better confuse the judgment of detection models. 
Experiments carried out on the DroneVehicle dataset show that our method is computationally efficient and works well in attacking detection models measured by qualitative analysis and quantitative analysis.\",\"PeriodicalId\":48690,\"journal\":{\"name\":\"Tsinghua Science and Technology\",\"volume\":\"30 3\",\"pages\":\"1174-1189\"},\"PeriodicalIF\":6.6000,\"publicationDate\":\"2024-12-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10817718\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Tsinghua Science and Technology\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10817718/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Multidisciplinary\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Tsinghua Science and Technology","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10817718/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Multidisciplinary","Score":null,"Total":0}
Citations: 0

Abstract

Deep neural networks are widely used in computer vision tasks, but they are vulnerable to adversarial samples, which sharply degrade recognition accuracy. Although traditional algorithms for crafting adversarial samples attack classification models effectively, their performance degrades against object detection models with more complex structures. To better address this issue, in this paper we first analyze the multi-scale feature extraction mechanism of object detection models, and then propose a novel adversarial sample generation algorithm for attacking detection models by constructing an object feature-wise attention module and a perturbation extraction module. Specifically, in the first module, based on the multi-scale feature map, we narrow the range of the perturbation and improve the stealthiness of adversarial samples by computing the noise distribution in the object region. In the second module, we feed this noise distribution into a generative adversarial network to generate adversarial perturbations with strong attack transferability. As a result, the proposed approach can better confuse the judgment of detection models. Experiments on the DroneVehicle dataset show that our method is computationally efficient and attacks detection models effectively, as measured by both qualitative and quantitative analysis.
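
The abstract outlines a two-module pipeline: an attention module that derives a noise distribution confined to object regions of the multi-scale feature map, and an extraction module that feeds that distribution into a GAN generator to produce a transferable image-space perturbation. Below is a minimal PyTorch-style sketch of such a pipeline; all class names, layer choices, tensor shapes, and the L-infinity bound eps are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the two-module pipeline described in the abstract.
# Module names, layer sizes, and the eps bound are illustrative
# assumptions, not the authors' published code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ObjectFeatureWiseAttention(nn.Module):
    """Maps one level of the detector's feature pyramid plus an object
    mask to a noise distribution concentrated on the object region."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feat: torch.Tensor, obj_mask: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) multi-scale feature map from the detector
        # obj_mask: (B, 1, H, W) binary mask of detected object boxes
        attn = torch.sigmoid(self.score(feat))  # per-location saliency
        return attn * obj_mask                  # keep noise inside objects

class PerturbationGenerator(nn.Module):
    """GAN-style generator turning the masked noise distribution into an
    image-space perturbation bounded in L-infinity norm by eps."""
    def __init__(self, eps: float = 8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, noise_dist: torch.Tensor, image_size: tuple) -> torch.Tensor:
        # Upsample the feature-level noise distribution to image resolution.
        noise = F.interpolate(noise_dist, size=image_size,
                              mode="bilinear", align_corners=False)
        # tanh keeps the raw output in (-1, 1); scaling by eps bounds it.
        return torch.tanh(self.net(noise)) * self.eps

# Toy usage on random tensors (shapes are arbitrary):
attn = ObjectFeatureWiseAttention(in_channels=256)
gen = PerturbationGenerator(eps=8 / 255)
image = torch.rand(1, 3, 256, 256)           # clean input in [0, 1]
feat = torch.randn(1, 256, 32, 32)           # one pyramid level
mask = torch.zeros(1, 1, 32, 32)
mask[..., 8:24, 8:24] = 1.0                  # hypothetical object box
delta = gen(attn(feat, mask), image_size=(256, 256))
adv_image = (image + delta).clamp(0.0, 1.0)  # adversarial sample
```

In the paper's setting the generator would be trained adversarially against a frozen detector, with losses that suppress the detector's predictions while keeping the perturbation imperceptible; that training loop is omitted from this sketch.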
Source Journal
Tsinghua Science and Technology
Categories: Computer Science, Information Systems; Computer Science, Software Engineering
CiteScore: 10.20
Self-citation rate: 10.60%
Articles published: 2340
About the journal: Tsinghua Science and Technology (Tsinghua Sci Technol) started publication in 1996. It is an international academic journal sponsored by Tsinghua University and published bimonthly. The journal presents up-to-date scientific achievements in computer science, electronic engineering, and other IT fields. Contributions from all over the world are welcome.