Black-box adversarial attacks against image quality assessment models

Yu Ran, Ao-Xiang Zhang, Mingjie Li, Weixuan Tang, Yuan-Gen Wang

Expert Systems with Applications · DOI: 10.1016/j.eswa.2024.125415 · Published 2024-09-24
Full text: https://www.sciencedirect.com/science/article/pii/S0957417424022826
The problem of No-Reference Image Quality Assessment (NR-IQA) is to predict the perceptual quality of an image in line with its subjective evaluation by human viewers. However, the vulnerability of NR-IQA models to adversarial attacks has not been thoroughly studied for model refinement. This paper investigates potential loopholes of NR-IQA models via black-box adversarial attacks. Specifically, we first formulate the attack problem as maximizing the deviation between the estimated quality scores of the original and perturbed images, while restricting the distortion of the perturbed image to preserve visual quality. Under this formulation, we then design a Bi-directional loss function to mislead the estimated quality scores of adversarial examples in the opposite direction with maximum deviation. On this basis, we finally develop an efficient and effective black-box attack method for NR-IQA models based on a random-search paradigm. Comprehensive experiments on three benchmark datasets show that all evaluated NR-IQA models are significantly vulnerable to the proposed attack method. After being attacked, the victim models' average change rates in terms of two well-known IQA performance metrics reach 97% and 101%, respectively. In addition, our attack method outperforms a recently introduced black-box attack approach on IQA models. We also observe that the generated perturbations are not transferable, which points to a new research direction for the NR-IQA community. The source code is available at https://github.com/GZHU-DVL/AttackIQA.
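To make the stated formulation concrete, here is one plausible reading in notation of our own (the symbols f, x, delta, and epsilon are not from the paper): let f be the victim NR-IQA model, x the original image, and delta the perturbation. The attack then seeks

\[
\max_{\delta}\; \bigl| f(x+\delta) - f(x) \bigr|
\quad \text{s.t.} \quad \lVert \delta \rVert_\infty \le \epsilon,
\]

where the constraint caps the perturbation magnitude to preserve visual quality; the abstract does not specify the distortion measure, so the \(\ell_\infty\) ball here is an assumed instantiation. The Bi-directional loss can then be read as fixing the sign of the deviation in advance: push \(f(x+\delta)\) downward when \(f(x)\) is high and upward when \(f(x)\) is low, so the score always moves toward the opposite end of the quality scale.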
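The random-search paradigm mentioned above can be sketched in a few lines of Python. The sketch below is hypothetical and not the authors' implementation: the score_fn interface, the [0,1] score normalization, the 0.5 midpoint threshold, the query budget, and the square-patch proposal distribution are all our assumptions.

import numpy as np

def random_search_attack(score_fn, x, eps=8/255, iters=1000, patch=0.05, rng=None):
    """Hypothetical sketch of a score-based random-search attack on an
    NR-IQA model. score_fn maps an image in [0, 1] to a scalar quality
    score; only its outputs are queried (black-box)."""
    rng = np.random.default_rng() if rng is None else rng
    base = score_fn(x)                   # score of the clean image
    # Bi-directional idea: drive the score toward the opposite end of the
    # scale -- down if the clean score is high, up if it is low.
    sign = -1.0 if base >= 0.5 else 1.0  # assumes scores normalized to [0, 1]
    delta = np.zeros_like(x)
    best = sign * base                   # deviation achieved so far (zero)
    h, w = x.shape[:2]
    side = max(1, int(patch * min(h, w)))
    for _ in range(iters):
        # Propose setting a random square patch of the perturbation to
        # +/- eps, in the style of square-search random attacks.
        cand = delta.copy()
        r = rng.integers(0, h - side + 1)
        c = rng.integers(0, w - side + 1)
        cand[r:r + side, c:c + side] = rng.choice([-eps, eps])
        x_adv = np.clip(x + cand, 0.0, 1.0)
        val = sign * score_fn(x_adv)
        if val > best:                   # keep proposals that grow the deviation
            best, delta = val, cand
    return np.clip(x + delta, 0.0, 1.0)

Note that the procedure needs only score outputs from the model, never gradients, which is what makes it a black-box attack; the total cost is one query per iteration.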