{"title":"自动驾驶多任务视觉感知的对抗性攻击","authors":"Ibrahim Sobh, Ahmed Hamed, V. Kumar, S. Yogamani","doi":"10.2352/j.imagingsci.technol.2021.65.6.060408","DOIUrl":null,"url":null,"abstract":"\n In recent years, deep neural networks (DNNs) have accomplished impressive success in various applications, including autonomous driving perception tasks. However, current deep neural networks are easily deceived by adversarial attacks. This vulnerability raises significant concerns, particularly in safety-critical applications. As a result, research into attacking and defending DNNs has gained much coverage. In this work, detailed adversarial attacks are applied on a diverse multi-task visual perception deep network across distance estimation, semantic segmentation, motion detection, and object detection. The experiments consider both white and black box attacks for targeted and un-targeted cases, while attacking a task and inspecting the effect on all others, in addition to inspecting the effect of applying a simple defense method. We conclude this paper by comparing and discussing the experimental results, proposing insights and future work. The visualizations of the attacks are available at https://youtu.be/6AixN90budY.\n","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"98 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"Adversarial Attacks on Multi-task Visual Perception for Autonomous Driving\",\"authors\":\"Ibrahim Sobh, Ahmed Hamed, V. Kumar, S. Yogamani\",\"doi\":\"10.2352/j.imagingsci.technol.2021.65.6.060408\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n In recent years, deep neural networks (DNNs) have accomplished impressive success in various applications, including autonomous driving perception tasks. However, current deep neural networks are easily deceived by adversarial attacks. This vulnerability raises significant concerns, particularly in safety-critical applications. As a result, research into attacking and defending DNNs has gained much coverage. In this work, detailed adversarial attacks are applied on a diverse multi-task visual perception deep network across distance estimation, semantic segmentation, motion detection, and object detection. The experiments consider both white and black box attacks for targeted and un-targeted cases, while attacking a task and inspecting the effect on all others, in addition to inspecting the effect of applying a simple defense method. We conclude this paper by comparing and discussing the experimental results, proposing insights and future work. 
The visualizations of the attacks are available at https://youtu.be/6AixN90budY.\\n\",\"PeriodicalId\":177462,\"journal\":{\"name\":\"Autonomous Vehicles and Machines\",\"volume\":\"98 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Autonomous Vehicles and Machines\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2352/j.imagingsci.technol.2021.65.6.060408\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Autonomous Vehicles and Machines","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2352/j.imagingsci.technol.2021.65.6.060408","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Adversarial Attacks on Multi-task Visual Perception for Autonomous Driving
In recent years, deep neural networks (DNNs) have achieved impressive success in a variety of applications, including autonomous driving perception tasks. However, current DNNs are easily deceived by adversarial attacks. This vulnerability raises significant concerns, particularly in safety-critical applications, and as a result research on attacking and defending DNNs has received considerable attention. In this work, detailed adversarial attacks are applied to a multi-task visual perception deep network that jointly performs distance estimation, semantic segmentation, motion detection, and object detection. The experiments consider both white-box and black-box attacks in targeted and untargeted settings, attacking one task while inspecting the effect on all the others, and additionally examine the effect of applying a simple defense method. We conclude by comparing and discussing the experimental results and proposing insights and future work. Visualizations of the attacks are available at https://youtu.be/6AixN90budY.
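The abstract does not name the attack algorithm, the network architecture, or the defense, so the sketch below is only a hedged illustration of the protocol it describes: craft an untargeted white-box (FGSM-style) perturbation against a single task head, inspect its effect on every other head of the shared-encoder network, and pre-filter the input as one plausible "simple defense". All identifiers (the model's output keys, the loss choice, epsilon, the blur kernel size) are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch only: the paper's code is not included in this record, and
# every name below (output dict keys, losses, epsilon) is an assumption.
import torch
import torch.nn.functional as F

def fgsm_attack_one_task(model, image, seg_target, eps=4 / 255):
    """Untargeted white-box FGSM-style attack on the segmentation head only."""
    image = image.clone().detach().requires_grad_(True)
    outputs = model(image)  # assumed: {'seg': ..., 'depth': ..., 'motion': ..., 'det': ...}
    loss = F.cross_entropy(outputs['seg'], seg_target)  # attacked task's loss
    loss.backward()
    # Ascend the sign of the input gradient to maximally increase the loss,
    # bounded per pixel by eps (an L-infinity perturbation budget).
    adv = image + eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def cross_task_effect(model, clean, adv):
    """Mean absolute change in each head's raw output. Because the tasks share
    an encoder, a perturbation crafted for one task can degrade the others."""
    with torch.no_grad():
        c, a = model(clean), model(adv)
    return {task: (c[task] - a[task]).abs().mean().item() for task in c}

def blur_defense(image, kernel_size=5):
    """One plausible 'simple defense' (an assumption, not the paper's named
    method): a box blur that low-pass filters high-frequency noise from the
    input before it reaches the network."""
    w = torch.full((3, 1, kernel_size, kernel_size), 1.0 / kernel_size ** 2)
    return F.conv2d(image, w, padding=kernel_size // 2, groups=3)
```

In this sketch the defense is applied at inference time, before the (possibly perturbed) input reaches the model. Black-box and targeted variants of the attack would replace the gradient step with query-based gradient estimation, or with a descent step on the loss toward a chosen target output, respectively.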