{"title":"High-transferability black-box attack of binary image segmentation via adversarial example augmentation","authors":"Xuebiao Zhu, Wu Chen, Qiuping Jiang","doi":"10.1016/j.displa.2024.102957","DOIUrl":null,"url":null,"abstract":"<div><div>The application of deep neural networks (DNNs) has significantly advanced the binary image segmentation (BIS) task. However, DNNs have been found to be susceptible to adversarial attacks involving subtle perturbations. The existing black-box attack methods usually generate one single adversarial example for different target models, leading to poor transferability. To address this issue, this paper proposes a novel adversarial example augmentation (AEA) framework to improve the transferability of black-box attacks. Our method dedicates to generating an adversarial example set (AES) which contains a set of distinct adversarial examples. Specifically, we first employ an existing model as the surrogate model which is attacked to optimize the adversarial perturbation via maximizing the Binary Cross-Entropy (BCE) loss between the prediction of the surrogate model and the pseudo label, thus producing a sequence of adversarial examples. During the optimization process, besides the BCE loss, we additionally introduce deep feature losses among different adversarial examples to fully distinguish the generated adversarial examples. In this way, we can obtain an AES that contains different adversarial examples with diverse deep features to achieve the augmentation of adversarial examples. Given the diversity of the generated adversarial examples in the AES of the surrogate model, the optimal adversarial example for a certain target model is likely contained in our generated AES. Thus, the generated AES is expected to have high-transferability. In order to find the optimal adversarial example of a specific target model in the AES, we use the query method to achieve this goal. Experimental results showcase the superiority of the proposed AEA framework for black-box attack in two representative BIS tasks including salient object detection and camouflage object detection.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"87 ","pages":"Article 102957"},"PeriodicalIF":3.7000,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Displays","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0141938224003214","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Abstract
The application of deep neural networks (DNNs) has significantly advanced the binary image segmentation (BIS) task. However, DNNs have been found to be susceptible to adversarial attacks involving subtle perturbations. Existing black-box attack methods usually generate a single adversarial example and apply it to different target models, which leads to poor transferability. To address this issue, this paper proposes a novel adversarial example augmentation (AEA) framework to improve the transferability of black-box attacks. Our method is dedicated to generating an adversarial example set (AES) containing multiple distinct adversarial examples. Specifically, we first employ an existing model as the surrogate model and attack it to optimize the adversarial perturbation by maximizing the Binary Cross-Entropy (BCE) loss between the surrogate model's prediction and the pseudo label, thus producing a sequence of adversarial examples. During the optimization process, besides the BCE loss, we additionally introduce deep-feature losses among the different adversarial examples to keep the generated examples well separated. In this way, we obtain an AES containing adversarial examples with diverse deep features, thereby achieving the augmentation of adversarial examples. Given this diversity, the optimal adversarial example for a given target model is likely to be contained in the generated AES, so the AES is expected to exhibit high transferability. To find the optimal adversarial example for a specific target model within the AES, we adopt a query-based selection strategy. Experimental results showcase the superiority of the proposed AEA framework for black-box attacks on two representative BIS tasks: salient object detection and camouflaged object detection.
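The AES generation step described above can be sketched in PyTorch roughly as follows. This is an illustrative reconstruction from the abstract alone, not the authors' released code: the surrogate interface (returning both segmentation logits and a deep-feature tensor), the diversity weight `lam`, the L-infinity budget, and the step counts are all assumptions.

```python
import torch
import torch.nn.functional as F

def generate_aes(surrogate, image, pseudo_label, set_size=5, steps=40,
                 epsilon=8 / 255, alpha=2 / 255, lam=0.1):
    """Build an adversarial example set (AES) against a surrogate
    binary-segmentation model by iterative gradient ascent.

    surrogate    -- assumed to return (logits, deep_features) for an input
    image        -- clean input of shape (1, 3, H, W), values in [0, 1]
    pseudo_label -- e.g. the surrogate's thresholded prediction on the
                    clean image, same shape as the logits
    lam          -- weight of the deep-feature diversity term (assumed)
    """
    aes, aes_feats = [], []
    for _ in range(set_size):
        # Random start so each member explores a different region.
        delta = torch.empty_like(image).uniform_(-epsilon, epsilon)
        delta.requires_grad_(True)
        for _ in range(steps):
            logits, feats = surrogate(image + delta)
            # (1) BCE between prediction and pseudo label, to be maximized.
            bce = F.binary_cross_entropy_with_logits(logits, pseudo_label)
            # (2) Deep-feature distance to examples already in the set,
            #     also maximized, so the new example stays distinct.
            div = sum(F.mse_loss(feats, f) for f in aes_feats)
            objective = bce + lam * div
            objective.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()  # gradient ascent step
                delta.clamp_(-epsilon, epsilon)     # keep L_inf budget
            delta.grad = None
        adv = (image + delta).detach().clamp(0, 1)
        with torch.no_grad():
            _, f = surrogate(adv)  # cache features for the diversity term
        aes.append(adv)
        aes_feats.append(f)
    return aes
```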
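The final query step then needs only forward access to the target model. A minimal sketch, again under the assumption that the target returns a soft mask (logits); in a stricter black-box setting, any mismatch metric computed on the returned mask would serve the same purpose:

```python
import torch
import torch.nn.functional as F

def query_best(target, aes, pseudo_label):
    """Query the black-box target once per AES member and keep the
    candidate whose prediction deviates most from the pseudo label."""
    best_adv, best_score = None, float("-inf")
    for adv in aes:
        with torch.no_grad():
            logits = target(adv)  # only forward (query) access is used
        score = F.binary_cross_entropy_with_logits(logits, pseudo_label).item()
        if score > best_score:
            best_adv, best_score = adv, score
    return best_adv
```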
About the journal:
Displays is the international journal covering the research and development of display technology, the effective presentation and perception of information, and applications and systems including the display-human interface.
Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technologists and human factors engineers new to the field, will also occasionally be featured.