{"title":"仅使用预测标签的多标签黑盒对抗性攻击","authors":"Linghao Kong;Wenjian Luo;Zipeng Ye;Qi Zhou;Yan Jia","doi":"10.1109/TAI.2024.3522869","DOIUrl":null,"url":null,"abstract":"Multilabel adversarial examples have become a threat to deep neural network models (DNNs). Most of the current work on multilabel adversarial examples are focused on white-box environments. In this article, we focus on a black-box environment where the available information is extremely limited: a label-only black-box environment. Under the label-only black-box environment, the attacker can only obtain the predicted labels, and cannot obtain any other information such as the model's internal structure, parameters, the training dataset, and the output prediction confidence. We propose a label-only black-box attack framework, and through this framework to implement two black-box adversarial attacks: multi-label boundary-based attack (ML-BA) and multilabel label-only black-box attack (ML-LBA). The ML-BA is developed by transplanting the boundary-based attack in the multiclass domain to the multilabel domain, and the ML-LBA is based on differential evolution. Experimental results show that both the proposed algorithms can achieve the hiding single label attack in label-only black-box environments. Besides, ML-LBA requires fewer queries and its perturbations are significantly less. 
This demonstrates the effectiveness of the proposed label-only black-box attack framework and the advantageous of differential evolution in optimizing high-dimensional problems.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 5","pages":"1284-1297"},"PeriodicalIF":0.0000,"publicationDate":"2024-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multilabel Black-Box Adversarial Attacks Only With Predicted Labels\",\"authors\":\"Linghao Kong;Wenjian Luo;Zipeng Ye;Qi Zhou;Yan Jia\",\"doi\":\"10.1109/TAI.2024.3522869\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Multilabel adversarial examples have become a threat to deep neural network models (DNNs). Most of the current work on multilabel adversarial examples are focused on white-box environments. In this article, we focus on a black-box environment where the available information is extremely limited: a label-only black-box environment. Under the label-only black-box environment, the attacker can only obtain the predicted labels, and cannot obtain any other information such as the model's internal structure, parameters, the training dataset, and the output prediction confidence. We propose a label-only black-box attack framework, and through this framework to implement two black-box adversarial attacks: multi-label boundary-based attack (ML-BA) and multilabel label-only black-box attack (ML-LBA). The ML-BA is developed by transplanting the boundary-based attack in the multiclass domain to the multilabel domain, and the ML-LBA is based on differential evolution. Experimental results show that both the proposed algorithms can achieve the hiding single label attack in label-only black-box environments. Besides, ML-LBA requires fewer queries and its perturbations are significantly less. 
This demonstrates the effectiveness of the proposed label-only black-box attack framework and the advantageous of differential evolution in optimizing high-dimensional problems.\",\"PeriodicalId\":73305,\"journal\":{\"name\":\"IEEE transactions on artificial intelligence\",\"volume\":\"6 5\",\"pages\":\"1284-1297\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-12-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on artificial intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10816724/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10816724/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Multilabel Black-Box Adversarial Attacks Only With Predicted Labels
Multilabel adversarial examples have become a threat to deep neural network (DNN) models. Most current work on multilabel adversarial examples focuses on white-box environments. In this article, we focus on a black-box environment in which the available information is extremely limited: a label-only black-box environment. In a label-only black-box environment, the attacker can obtain only the predicted labels and cannot access any other information, such as the model's internal structure, its parameters, the training dataset, or the output prediction confidence. We propose a label-only black-box attack framework and use it to implement two black-box adversarial attacks: the multilabel boundary-based attack (ML-BA) and the multilabel label-only black-box attack (ML-LBA). ML-BA transplants the boundary-based attack from the multiclass domain to the multilabel domain, while ML-LBA is based on differential evolution. Experimental results show that both proposed algorithms can achieve the hiding-single-label attack in label-only black-box environments. Moreover, ML-LBA requires fewer queries and produces significantly smaller perturbations. This demonstrates the effectiveness of the proposed label-only black-box attack framework and the advantage of differential evolution in optimizing high-dimensional problems.
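To illustrate the core idea of a differential-evolution-based label-only attack, the sketch below evolves a population of candidate perturbations, using only binary label predictions from the model as feedback. This is a minimal toy illustration, not the authors' ML-LBA algorithm: the `predict_labels` oracle, the fitness penalty, and all hyperparameters are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_labels(x):
    # Hypothetical stand-in for the label-only black-box model: it returns
    # only a binary label vector. A real attack would query the target DNN.
    return (x[:3] > 0.5).astype(int)

def attack_success(labels, target_idx=0):
    # Success means the target label is hidden (predicted as 0).
    return labels[target_idx] == 0

def fitness(x_orig, delta):
    # Heavily penalize candidates that fail to hide the target label;
    # among successful candidates, prefer smaller perturbations.
    labels = predict_labels(np.clip(x_orig + delta, 0.0, 1.0))
    penalty = 0.0 if attack_success(labels) else 1e3
    return np.linalg.norm(delta) + penalty

def de_attack(x_orig, dim, pop_size=20, gens=50, F=0.5, CR=0.9):
    # Classic DE/rand/1/bin over candidate perturbation vectors.
    pop = rng.uniform(-0.5, 0.5, size=(pop_size, dim))
    fit = np.array([fitness(x_orig, d) for d in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = a + F * (b - c)              # mutation
            mask = rng.random(dim) < CR           # binomial crossover
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            f_trial = fitness(x_orig, trial)
            if f_trial <= fit[i]:                 # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]

x = np.full(8, 0.6)  # toy "input" whose target label is initially predicted as 1
delta, score = de_attack(x, dim=8)
print(score, np.linalg.norm(delta))
```

Because the fitness function needs only predicted labels from the oracle, the loop works under exactly the information constraint the abstract describes; the query count equals the number of `fitness` evaluations, which is the budget ML-LBA is reported to economize.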