Adversarial Attacks on Multi-task Visual Perception for Autonomous Driving

Ibrahim Sobh, Ahmed Hamed, V. Kumar, S. Yogamani
{"title":"自动驾驶多任务视觉感知的对抗性攻击","authors":"Ibrahim Sobh, Ahmed Hamed, V. Kumar, S. Yogamani","doi":"10.2352/j.imagingsci.technol.2021.65.6.060408","DOIUrl":null,"url":null,"abstract":"\n In recent years, deep neural networks (DNNs) have accomplished impressive success in various applications, including autonomous driving perception tasks. However, current deep neural networks are easily deceived by adversarial attacks. This vulnerability raises significant concerns, particularly in safety-critical applications. As a result, research into attacking and defending DNNs has gained much coverage. In this work, detailed adversarial attacks are applied on a diverse multi-task visual perception deep network across distance estimation, semantic segmentation, motion detection, and object detection. The experiments consider both white and black box attacks for targeted and un-targeted cases, while attacking a task and inspecting the effect on all others, in addition to inspecting the effect of applying a simple defense method. We conclude this paper by comparing and discussing the experimental results, proposing insights and future work. The visualizations of the attacks are available at https://youtu.be/6AixN90budY.\n","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"98 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"Adversarial Attacks on Multi-task Visual Perception for Autonomous Driving\",\"authors\":\"Ibrahim Sobh, Ahmed Hamed, V. Kumar, S. Yogamani\",\"doi\":\"10.2352/j.imagingsci.technol.2021.65.6.060408\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n In recent years, deep neural networks (DNNs) have accomplished impressive success in various applications, including autonomous driving perception tasks. However, current deep neural networks are easily deceived by adversarial attacks. This vulnerability raises significant concerns, particularly in safety-critical applications. As a result, research into attacking and defending DNNs has gained much coverage. In this work, detailed adversarial attacks are applied on a diverse multi-task visual perception deep network across distance estimation, semantic segmentation, motion detection, and object detection. The experiments consider both white and black box attacks for targeted and un-targeted cases, while attacking a task and inspecting the effect on all others, in addition to inspecting the effect of applying a simple defense method. We conclude this paper by comparing and discussing the experimental results, proposing insights and future work. 
The visualizations of the attacks are available at https://youtu.be/6AixN90budY.\\n\",\"PeriodicalId\":177462,\"journal\":{\"name\":\"Autonomous Vehicles and Machines\",\"volume\":\"98 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Autonomous Vehicles and Machines\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2352/j.imagingsci.technol.2021.65.6.060408\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Autonomous Vehicles and Machines","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2352/j.imagingsci.technol.2021.65.6.060408","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10

Abstract

In recent years, deep neural networks (DNNs) have achieved impressive success in various applications, including autonomous driving perception tasks. However, current deep neural networks are easily deceived by adversarial attacks. This vulnerability raises significant concerns, particularly in safety-critical applications, and as a result, research into attacking and defending DNNs has attracted much attention. In this work, detailed adversarial attacks are applied to a diverse multi-task visual perception deep network spanning distance estimation, semantic segmentation, motion detection, and object detection. The experiments consider both white-box and black-box attacks in targeted and untargeted settings, attacking one task while inspecting the effect on all the others, and additionally inspecting the effect of applying a simple defense method. We conclude the paper by comparing and discussing the experimental results and proposing insights and future work. Visualizations of the attacks are available at https://youtu.be/6AixN90budY.
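The abstract describes the attack setup only at a high level; the full experimental details are in the paper itself. As a minimal illustration of the kind of white-box, untargeted attack evaluated in this line of work, the PyTorch sketch below implements one-step FGSM against a hypothetical multi-task model. The model interface, task names, and loss weights ("segmentation", "depth", task_weights) are assumptions for illustration, not the authors' actual network or loss.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, targets, epsilon=0.01, task_weights=None):
    """One-step FGSM: perturb the input along the sign of the loss gradient.

    `model` is assumed to return a dict of per-task outputs, e.g.
    {"segmentation": logits, "depth": depth_map}; the weighted multi-task
    loss below is a placeholder for whatever loss the network is trained with.
    """
    image = image.clone().detach().requires_grad_(True)
    preds = model(image)

    # Hypothetical multi-task loss: weighted sum of per-task losses.
    # Zeroing one task's weight corresponds to attacking a single task
    # and then inspecting the side effects on the other tasks.
    weights = task_weights or {"segmentation": 1.0, "depth": 1.0}
    loss = (
        weights["segmentation"]
        * F.cross_entropy(preds["segmentation"], targets["segmentation"])
        + weights["depth"] * F.l1_loss(preds["depth"], targets["depth"])
    )

    model.zero_grad()
    loss.backward()

    # Untargeted attack: step *up* the loss gradient, then clamp the
    # perturbed image back into the valid pixel range.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()
```

A targeted variant would instead compute the loss against attacker-chosen targets and step down the gradient (image - epsilon * image.grad.sign()), and iterative attacks such as PGD repeat the step with a smaller step size and a projection back onto the epsilon ball.

The abstract also mentions "a simple defense method" without naming it here. A common simple defense in this literature is an input transformation that low-pass filters the frame before inference; the sketch below uses Gaussian blurring purely as an illustrative stand-in, not as the authors' specific method.

```python
import torchvision.transforms.functional as TF

def blur_defense(image, kernel_size=5, sigma=1.0):
    """Input-transformation defense (illustrative): blur the possibly
    adversarial frame before it reaches the perception network,
    attenuating high-frequency perturbations at some cost in clean
    accuracy."""
    return TF.gaussian_blur(
        image, kernel_size=[kernel_size, kernel_size], sigma=[sigma, sigma]
    )
```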