Research on a Method of Defense Adversarial Samples for Target Detection Model of Driverless Cars

Ruzhi Xu, Min Li, Xin Yang, Dexin Liu, Dawei Chen
{"title":"一种针对无人驾驶汽车目标检测模型的防御对抗样本方法研究","authors":"Ruzhi Xu, Min Li, Xin Yang, Dexin Liu, Dawei Chen","doi":"10.34028/iajit/20/5/6","DOIUrl":null,"url":null,"abstract":"The adversarial examples make the object detection model make a wrong judgment, which threatens the security of driverless cars. In this paper, by improving the Momentum Iterative Fast Gradient Sign Method (MI-FGSM), based on ensemble learning, combined with L∞ perturbation and spatial transformation, a strong transferable black-box adversarial attack algorithm for object detection model of driverless cars is proposed. Through a large number of experiments on the nuScenes driverless dataset, it is proved that the adversarial attack algorithm proposed in this paper have strong transferability, and successfully make the mainstream object detection models such as FasterRcnn, SSD, YOLOv3 make wrong judgments. Based on the adversarial attack algorithm proposed in this paper, the parametric noise injection with adversarial training is performed to generate a defense model with strong robustness. The defense model proposed in this paper significantly improves the robustness of the object detection model. It can effectively alleviate various adversarial attacks against the object detection model of driverless cars, and does not affect the accuracy of clean samples. This is of great significance for studying the application of object detection model of driverless cars in the real physical world.","PeriodicalId":161392,"journal":{"name":"The International Arab Journal of Information Technology","volume":"44 4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Research on a Method of Defense Adversarial Samples for Target Detection Model of Driverless Cars\",\"authors\":\"Ruzhi Xu, Min Li, Xin Yang, Dexin Liu, Dawei Chen\",\"doi\":\"10.34028/iajit/20/5/6\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The adversarial examples make the object detection model make a wrong judgment, which threatens the security of driverless cars. In this paper, by improving the Momentum Iterative Fast Gradient Sign Method (MI-FGSM), based on ensemble learning, combined with L∞ perturbation and spatial transformation, a strong transferable black-box adversarial attack algorithm for object detection model of driverless cars is proposed. Through a large number of experiments on the nuScenes driverless dataset, it is proved that the adversarial attack algorithm proposed in this paper have strong transferability, and successfully make the mainstream object detection models such as FasterRcnn, SSD, YOLOv3 make wrong judgments. Based on the adversarial attack algorithm proposed in this paper, the parametric noise injection with adversarial training is performed to generate a defense model with strong robustness. The defense model proposed in this paper significantly improves the robustness of the object detection model. It can effectively alleviate various adversarial attacks against the object detection model of driverless cars, and does not affect the accuracy of clean samples. 
This is of great significance for studying the application of object detection model of driverless cars in the real physical world.\",\"PeriodicalId\":161392,\"journal\":{\"name\":\"The International Arab Journal of Information Technology\",\"volume\":\"44 4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The International Arab Journal of Information Technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.34028/iajit/20/5/6\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The International Arab Journal of Information Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.34028/iajit/20/5/6","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Adversarial examples can cause an object detection model to make wrong judgments, which threatens the safety of driverless cars. In this paper, by improving the Momentum Iterative Fast Gradient Sign Method (MI-FGSM) on the basis of ensemble learning, combined with an L∞ perturbation and a spatial transformation, a black-box adversarial attack algorithm with strong transferability is proposed for the object detection models of driverless cars. Extensive experiments on the nuScenes driverless dataset show that the proposed adversarial attack algorithm has strong transferability and successfully causes mainstream object detection models such as Faster R-CNN, SSD, and YOLOv3 to make wrong judgments. Based on the proposed attack algorithm, parametric noise injection combined with adversarial training is performed to generate a defense model with strong robustness. The proposed defense model significantly improves the robustness of the object detection model; it effectively mitigates various adversarial attacks against the object detection models of driverless cars without affecting accuracy on clean samples. This is of great significance for studying the application of object detection models of driverless cars in the real physical world.
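The attack described above builds on MI-FGSM, which accumulates a momentum term on the L1-normalized input gradient and takes sign steps inside an L∞ ball, while the ensemble-learning component can be read as attacking the average loss of several surrogate models to improve black-box transferability. The sketch below is a minimal, classification-style illustration of that standard update only; the function name ensemble_mi_fgsm, the hyperparameters, and the generic loss are assumptions, and the paper's spatial-transformation component and detection-specific losses are not reproduced here.

```python
import torch

def ensemble_mi_fgsm(models, x, y, loss_fn, eps=8 / 255, steps=10, mu=1.0):
    """Minimal MI-FGSM sketch over an ensemble of surrogate models.

    models  : list of differentiable surrogate models (kept fixed during the attack)
    x, y    : clean images in [0, 1] (NCHW) and their labels/targets
    eps     : L-infinity budget; alpha = eps / steps is the per-step size
    mu      : momentum decay factor from the original MI-FGSM
    """
    alpha = eps / steps
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)                      # accumulated momentum
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # ensemble learning: average the loss over all surrogate models
        loss = sum(loss_fn(m(x_adv), y) for m in models) / len(models)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # normalize the gradient per example before momentum accumulation
        grad = grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        g = mu * g + grad
        # sign step along the momentum, then project back into the eps-ball
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = x.detach() + (x_adv - x.detach()).clamp(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

In the black-box setting summarized in the abstract, an adversarial example generated on such a surrogate ensemble would then be transferred to an unseen detector such as Faster R-CNN, SSD, or YOLOv3.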
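For the defense side, parametric noise injection is commonly implemented as Gaussian noise added to layer weights (or activations), scaled by a learnable coefficient and trained jointly with adversarial examples. The module and training step below are a minimal sketch of that general idea under assumed details (noise injected into convolution weights, a 50/50 mix of clean and adversarial loss); they are not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PNIConv2d(nn.Module):
    """Wraps a Conv2d and injects learnable-scaled Gaussian noise into its
    weights during training (parametric noise injection)."""
    def __init__(self, conv: nn.Conv2d):
        super().__init__()
        self.conv = conv
        self.alpha = nn.Parameter(torch.tensor(0.25))   # learnable noise scale

    def forward(self, x):
        w = self.conv.weight
        if self.training:
            # noise magnitude follows the weight statistics, scaled by alpha
            w = w + self.alpha * torch.randn_like(w) * w.detach().std()
        return F.conv2d(x, w, self.conv.bias, self.conv.stride,
                        self.conv.padding, self.conv.dilation, self.conv.groups)

def adversarial_training_step(model, attack, x, y, loss_fn, optimizer, w_adv=0.5):
    """One adversarial-training step mixing clean and adversarial loss
    (the 50/50 weighting is an illustrative assumption)."""
    model.train()
    x_adv = attack([model], x, y, loss_fn)       # e.g., the MI-FGSM sketch above
    loss = (1 - w_adv) * loss_fn(model(x), y) + w_adv * loss_fn(model(x_adv), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```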