Adversarially Robust Edge-Based Object Detection for Assuredly Autonomous Systems

Robert Canady, Xingyu Zhou, Yogesh D. Barve, D. Balasubramanian, A. Gokhale
{"title":"Adversarially Robust Edge-Based Object Detection for Assuredly Autonomous Systems","authors":"Robert Canady, Xingyu Zhou, Yogesh D. Barve, D. Balasubramanian, A. Gokhale","doi":"10.1109/ICAA52185.2022.00021","DOIUrl":null,"url":null,"abstract":"Edge-based and autonomous, deep learning computer vision applications, such as those used in surveillance or traffic management, must be assuredly correct and performant. However, realizing these applications in practice incurs a number of challenges. First, the constraints on edge resources precludes the use of large-sized, deep learning computer vision models. Second, the heterogeneity in edge resource types causes different execution speeds and energy consumption during model inference. Third, deep learning models are known to be vulnerable to adversarial perturbations, which can make them ineffective or lead to incorrect inferences. Although some research that addresses the first two challenges exists, defending against adversarial attacks at the edge remains mostly an unresolved problem. To that end, this paper presents techniques to realize robust and edge-based deep learning computer vision applications thereby providing a level of assured autonomy. We utilize state-of-the-art (SOTA) object detection attacks from the TOG (adversarial objectness gradient attacks) suite to design a generalized adversarial robustness evaluation procedure. It enables fast robustness evaluations on popular object detection architectures of YOLOv3, YOLOv3-tiny, and Faster R-CNN with different image classification backbones to test the robustness of these object detection models. We explore two variations of adversarial training. The first variant augments the training data with multiple types of attacks. The second variant exchanges a clean image in the training set for a randomly chosen adversarial image. Our solutions are then evaluated using the PASCAL VOC dataset. Using the first variant, we are able to improve the robustness of YOLOv3-tiny models by 1–2% mean average precision (mAP) and YOLOv3 realized an improvement of up to 17% mAP on attacked data. The second variant saw even better results in some cases with improvements in robustness of over 25% for YOLOv3. The Faster RCNN models also saw improvement, however, less substantially at around 10–15%. Yet, their mAP was improved on clean data as well.","PeriodicalId":206047,"journal":{"name":"2022 IEEE International Conference on Assured Autonomy (ICAA)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Assured Autonomy (ICAA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAA52185.2022.00021","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Edge-based, autonomous deep learning computer vision applications, such as those used in surveillance or traffic management, must be assuredly correct and performant. However, realizing these applications in practice incurs a number of challenges. First, the constraints on edge resources preclude the use of large, deep learning computer vision models. Second, the heterogeneity of edge resource types causes different execution speeds and energy consumption during model inference. Third, deep learning models are known to be vulnerable to adversarial perturbations, which can render them ineffective or lead to incorrect inferences. Although some research addressing the first two challenges exists, defending against adversarial attacks at the edge remains mostly an unresolved problem. To that end, this paper presents techniques to realize robust, edge-based deep learning computer vision applications, thereby providing a level of assured autonomy. We utilize state-of-the-art (SOTA) object detection attacks from the TOG (adversarial objectness gradient attacks) suite to design a generalized adversarial robustness evaluation procedure. It enables fast robustness evaluations of popular object detection architectures, YOLOv3, YOLOv3-tiny, and Faster R-CNN, with different image classification backbones. We explore two variations of adversarial training. The first variant augments the training data with multiple types of attacks. The second variant exchanges a clean image in the training set for a randomly chosen adversarial image. Our solutions are then evaluated using the PASCAL VOC dataset. Using the first variant, we improve the robustness of YOLOv3-tiny models by 1–2% mean average precision (mAP), and YOLOv3 realizes an improvement of up to 17% mAP on attacked data. The second variant yields even better results in some cases, with robustness improvements of over 25% for YOLOv3. The Faster R-CNN models also improve, though less substantially, at around 10–15%. Yet their mAP improves on clean data as well.
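To make the second adversarial training variant more concrete, the sketch below (not the authors' code) shows one way a batch-preparation step could randomly exchange clean training images for adversarially perturbed ones. The attack functions here are placeholders standing in for TOG attack implementations, and the names `adversarial_swap_batch`, `AttackFn`, and `swap_prob` are illustrative assumptions rather than anything defined in the paper.

```python
import random
from typing import Callable, Dict, List, Tuple

import torch

# Placeholder type for an attack: takes a clean image tensor and returns an
# adversarially perturbed copy. In the paper these would be TOG attacks
# (e.g. object-vanishing or fabrication); here they are supplied by the caller.
AttackFn = Callable[[torch.Tensor], torch.Tensor]


def adversarial_swap_batch(
    images: List[torch.Tensor],
    attacks: Dict[str, AttackFn],
    swap_prob: float = 0.5,
) -> Tuple[List[torch.Tensor], List[str]]:
    """Variant-2 sketch: randomly exchange clean images for adversarial ones.

    For each image, with probability `swap_prob` a randomly chosen attack is
    applied and the clean image is replaced by its adversarial counterpart;
    otherwise the clean image is kept. Returns the (possibly perturbed) batch
    and a per-image tag recording which attack, if any, was used.
    """
    out_images, tags = [], []
    attack_names = list(attacks.keys())
    for img in images:
        if attack_names and random.random() < swap_prob:
            name = random.choice(attack_names)
            out_images.append(attacks[name](img))
            tags.append(name)
        else:
            out_images.append(img)
            tags.append("clean")
    return out_images, tags
```

In a full training loop, a step like this would sit between data loading and the detector's forward pass; since TOG perturbations operate in image space, the ground-truth boxes and labels would be left unchanged.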