APDL: an adaptive step size method for white-box adversarial attacks
Jiale Hu, Xiang Li, Changzheng Liu, Ronghua Zhang, Junwei Tang, Yi Sun, Yuedong Wang
Complex & Intelligent Systems, published 2025-01-02. DOI: 10.1007/s40747-024-01748-x
Citations: 0
Abstract
Recent research has shown that deep learning models are vulnerable to adversarial attacks, including gradient attacks, which can lead to incorrect outputs. Existing gradient attack methods typically rely on repetitive multistep strategies to improve their attack success rates, resulting in longer training times and severe overfitting. To address these issues, we propose an adaptive perturbation-based gradient attack method with dual-loss optimization (APDL). APDL adaptively adjusts the single-step perturbation magnitude based on an exponential distance function, thereby accelerating convergence: it converges in fewer than 10 iterations, outperforming traditional nonadaptive methods and achieving a high attack success rate with fewer iterations. Furthermore, to increase the transferability of gradient attacks such as APDL across different models and to reduce overfitting to the training model, we introduce a triple-differential logit fusion (TDLF) method grounded in knowledge-distillation principles. TDLF mitigates the edge effects associated with gradient attacks by adjusting the hardness and softness of labels. Experiments on ImageNet-compatible datasets demonstrate that APDL is significantly faster than commonly used nonadaptive methods, while TDLF exhibits strong transferability.
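The abstract does not give APDL's exact update rule or TDLF's fusion formula, but the two ideas it names have standard building blocks: an iterative sign-gradient attack whose per-step magnitude shrinks as the perturbation nears the ε-ball boundary, and temperature-softened logits in the spirit of knowledge distillation. The sketch below illustrates both on a toy linear softmax classifier; the exponential step schedule `eta_max * (1 - exp(-k * d))` and the averaging fusion are plausible illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def adaptive_step_attack(x, y, W, b, eps=0.3, eta_max=0.1, k=5.0, max_iter=10):
    """Sketch of an adaptive-step-size gradient attack on a linear softmax
    classifier W @ x + b. The step size decays exponentially with the
    remaining distance d to the eps-ball boundary -- one plausible reading
    of an "exponential distance function"; the real APDL schedule may differ."""
    x_adv = x.astype(float).copy()
    n_classes = len(b)
    for _ in range(max_iter):
        p = softmax(W @ x_adv + b)
        # gradient of the cross-entropy loss w.r.t. the input
        grad = W.T @ (p - np.eye(n_classes)[y])
        d = eps - np.abs(x_adv - x).max()          # room left in the eps-ball
        eta = eta_max * (1.0 - np.exp(-k * d))     # adaptive single-step size
        x_adv = x_adv + eta * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project back into the ball
        if (W @ x_adv + b).argmax() != y:          # early stop once misclassified
            return x_adv
    return x_adv

def fused_soft_labels(logits_list, T=3.0):
    """Hypothetical logit-fusion sketch: average temperature-softened
    probabilities from several models. Temperature T controls label
    "hardness/softness" as in knowledge distillation; TDLF's actual
    triple-differential fusion rule is not specified in the abstract."""
    return np.mean([softmax(l / T) for l in logits_list], axis=0)
```

On the toy classifier `W = [[1, 0], [-1, 0]]`, `b = 0`, the attack flips the prediction of `x = [0.1, 0]` away from class 0 in two or three steps while staying inside the ε-ball, matching the abstract's point that an adaptive step converges in far fewer iterations than a fixed small step would need.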
About the journal
Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.