{"title":"A Reliable Approach for Generating Realistic Adversarial Attack via Trust Region-Based Optimization","authors":"Lovi Dhamija, Urvashi Bansal","doi":"10.1007/s13369-024-09293-y","DOIUrl":null,"url":null,"abstract":"<p>Adversarial attacks involve introducing minimal perturbations into the original input to manipulate deep learning models into making incorrect network predictions. Despite substantial interest, there remains insufficient research investigating the impact of adversarial attacks in real-world scenarios. Moreover, adversarial attacks have been extensively examined within the digital domain, but adapting them to realistic scenarios brings new challenges and opportunities. Existing physical world adversarial attacks often look perceptible and attention-grabbing, failing to imitate real-world scenarios credibly when tested on object detectors. This research attempts to craft a physical world adversarial attack that deceives object recognition systems and human observers to address the mentioned issues. The devised attacking approach tried to simulate the realistic appearance of stains left by rain particles on traffic signs, making the adversarial examples blend seamlessly into their environment. This work proposed a region reflection algorithm to localize the optimal perturbation points that reflected the trusted regions by employing the trust region optimization with a multi-quadratic function. The experimental evaluation reveals that the proposed work achieved an average attack success rate (ASR) of 94.18%. Experimentation underscores its applicability in a dynamic range of real-world settings through experiments involving distance and angle variations in physical world settings. However, the performance evaluation across various detection models reveals its generalizable and transferable nature. 
The outcomes of this study help to understand the vulnerabilities of object detectors and inspire AI (artificial intelligence) researchers to develop more robust and resilient defensive mechanisms.</p>","PeriodicalId":8109,"journal":{"name":"Arabian Journal for Science and Engineering","volume":"20 1","pages":""},"PeriodicalIF":2.9000,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Arabian Journal for Science and Engineering","FirstCategoryId":"103","ListUrlMain":"https://doi.org/10.1007/s13369-024-09293-y","RegionNum":4,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Multidisciplinary","Score":null,"Total":0}
Citations: 0
Abstract
Adversarial attacks introduce minimal perturbations into the original input to manipulate deep learning models into making incorrect predictions. Despite substantial interest, there remains insufficient research investigating the impact of adversarial attacks in real-world scenarios. Moreover, adversarial attacks have been extensively examined within the digital domain, but adapting them to realistic scenarios brings new challenges and opportunities. Existing physical-world adversarial attacks are often perceptible and attention-grabbing, failing to imitate real-world scenarios credibly when tested against object detectors. To address these issues, this research crafts a physical-world adversarial attack that deceives both object recognition systems and human observers. The devised attack simulates the realistic appearance of stains left by rain particles on traffic signs, making the adversarial examples blend seamlessly into their environment. This work proposes a region reflection algorithm that localizes the optimal perturbation points reflecting the trusted regions, employing trust region optimization with a multi-quadratic function. The experimental evaluation reveals that the proposed approach achieves an average attack success rate (ASR) of 94.18%. Experiments involving distance and angle variations in physical-world settings underscore its applicability across a dynamic range of real-world conditions, and performance evaluation across various detection models reveals its generalizable and transferable nature. The outcomes of this study help to understand the vulnerabilities of object detectors and inspire AI (artificial intelligence) researchers to develop more robust and resilient defensive mechanisms.
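The abstract's headline metric, attack success rate (ASR), is conventionally computed as the fraction of inputs that the model classifies correctly before the attack but misclassifies after the perturbation is applied. The paper does not spell out its exact formula, so the following is a minimal illustrative sketch of that conventional definition; the function name and the toy label lists are assumptions, not taken from the paper.

```python
def attack_success_rate(clean_preds, adv_preds, labels):
    """Conventional ASR: among inputs the model classified correctly
    on clean data, the fraction whose prediction flips under attack.
    Note: this is a generic textbook definition, not necessarily the
    exact protocol used in the paper."""
    # Count inputs that were correct on clean data but wrong after attack
    flipped = sum(1 for c, a, y in zip(clean_preds, adv_preds, labels)
                  if c == y and a != y)
    # Denominator: inputs the model got right before the attack
    correct = sum(1 for c, y in zip(clean_preds, labels) if c == y)
    return flipped / correct if correct else 0.0

# Toy example (hypothetical predictions): 4 correct clean predictions,
# 2 of which flip under the adversarial perturbation -> ASR = 0.5
print(attack_success_rate([0, 1, 2, 1], [0, 2, 2, 0], [0, 1, 2, 1]))
```

On this toy input the function returns 0.5; the paper's reported 94.18% would correspond to this ratio averaged over its traffic-sign test set and detection models.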
About the journal:
King Fahd University of Petroleum & Minerals (KFUPM) partnered with Springer to publish the Arabian Journal for Science and Engineering (AJSE).
AJSE, which has been published by KFUPM since 1975, is a recognized national, regional, and international journal that provides an avenue for the dissemination of research advances from the Kingdom of Saudi Arabia, the MENA region, and the world.