A Reliable Approach for Generating Realistic Adversarial Attack via Trust Region-Based Optimization

IF 2.9 · CAS Quartile 4 (Multidisciplinary) · JCR Q1 (Multidisciplinary)
Lovi Dhamija, Urvashi Bansal
{"title":"A Reliable Approach for Generating Realistic Adversarial Attack via Trust Region-Based Optimization","authors":"Lovi Dhamija, Urvashi Bansal","doi":"10.1007/s13369-024-09293-y","DOIUrl":null,"url":null,"abstract":"<p>Adversarial attacks involve introducing minimal perturbations into the original input to manipulate deep learning models into making incorrect network predictions. Despite substantial interest, there remains insufficient research investigating the impact of adversarial attacks in real-world scenarios. Moreover, adversarial attacks have been extensively examined within the digital domain, but adapting them to realistic scenarios brings new challenges and opportunities. Existing physical world adversarial attacks often look perceptible and attention-grabbing, failing to imitate real-world scenarios credibly when tested on object detectors. This research attempts to craft a physical world adversarial attack that deceives object recognition systems and human observers to address the mentioned issues. The devised attacking approach tried to simulate the realistic appearance of stains left by rain particles on traffic signs, making the adversarial examples blend seamlessly into their environment. This work proposed a region reflection algorithm to localize the optimal perturbation points that reflected the trusted regions by employing the trust region optimization with a multi-quadratic function. The experimental evaluation reveals that the proposed work achieved an average attack success rate (ASR) of 94.18%. Experimentation underscores its applicability in a dynamic range of real-world settings through experiments involving distance and angle variations in physical world settings. However, the performance evaluation across various detection models reveals its generalizable and transferable nature. The outcomes of this study help to understand the vulnerabilities of object detectors and inspire AI (artificial intelligence) researchers to develop more robust and resilient defensive mechanisms.</p>","PeriodicalId":8109,"journal":{"name":"Arabian Journal for Science and Engineering","volume":"20 1","pages":""},"PeriodicalIF":2.9000,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Arabian Journal for Science and Engineering","FirstCategoryId":"103","ListUrlMain":"https://doi.org/10.1007/s13369-024-09293-y","RegionNum":4,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Multidisciplinary","Score":null,"Total":0}
Citations: 0

Abstract

Adversarial attacks introduce minimal perturbations into the original input to manipulate deep learning models into making incorrect predictions. Despite substantial interest, research investigating the impact of adversarial attacks in real-world scenarios remains insufficient. Adversarial attacks have been examined extensively in the digital domain, but adapting them to realistic scenarios brings new challenges and opportunities. Existing physical-world adversarial attacks are often perceptible and attention-grabbing, and fail to imitate real-world scenarios credibly when tested against object detectors. To address these issues, this research crafts a physical-world adversarial attack that deceives both object recognition systems and human observers. The devised attack simulates the realistic appearance of stains left by rain particles on traffic signs, so that the adversarial examples blend seamlessly into their environment. The work proposes a region reflection algorithm that localizes the optimal perturbation points reflecting the trusted regions, employing trust region optimization with a multi-quadratic function. Experimental evaluation shows that the proposed attack achieves an average attack success rate (ASR) of 94.18%. Experiments involving distance and angle variations in physical settings underscore its applicability across a wide range of real-world conditions, and performance evaluation across various detection models demonstrates its generalizable and transferable nature. The outcomes of this study help to explain the vulnerabilities of object detectors and can inspire AI (artificial intelligence) researchers to develop more robust and resilient defensive mechanisms.
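
The abstract only sketches the attack at a high level, but the core idea of locating stain positions via trust-region optimization over a multi-quadratic (multiquadric) surrogate can be illustrated. The sketch below is a minimal, hypothetical reconstruction, not the authors' implementation: the probe data, the RBFInterpolator surrogate, and the confidence-drop values are all illustrative stand-ins.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Probe measurements: (x, y) locations on the sign, normalised to [0, 1]^2,
# and the confidence drop a detector showed when a small stain was placed there.
# (Random placeholders here -- real values would come from querying a detector.)
probe_xy = rng.uniform(0.0, 1.0, size=(30, 2))
probe_drop = rng.uniform(0.0, 1.0, size=30)

# Multiquadric ("multi-quadratic") radial-basis surrogate of the
# confidence-drop landscape over the sign surface.
surrogate = RBFInterpolator(probe_xy, probe_drop,
                            kernel="multiquadric", epsilon=1.0)

def neg_drop(p):
    # Negative predicted drop, so minimising it maximises the predicted drop.
    return -float(surrogate(p.reshape(1, -1))[0])

# Trust-region search for the most damaging stain location,
# constrained to stay on the sign.
res = minimize(neg_drop,
               x0=np.array([0.5, 0.5]),
               method="trust-constr",
               bounds=[(0.0, 1.0), (0.0, 1.0)])
print("candidate perturbation point:", res.x)
print("predicted confidence drop:   ", -res.fun)
```

In the paper's setting the objective would presumably come from the detector's response to the stained sign image rather than a random surrogate; the snippet merely shows how a trust-region solver keeps each update inside a bounded, trusted neighbourhood while searching for the perturbation point with the largest predicted effect.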

Source Journal

Arabian Journal for Science and Engineering (Multidisciplinary)
CiteScore: 5.20
Self-citation rate: 3.40%
Articles published: 0
Review time: 4.3 months
Journal description: King Fahd University of Petroleum & Minerals (KFUPM) partnered with Springer to publish the Arabian Journal for Science and Engineering (AJSE). AJSE, which has been published by KFUPM since 1975, is a recognized national, regional and international journal that provides a great opportunity for the dissemination of research advances from the Kingdom of Saudi Arabia, MENA and the world.