Siyu Zhai, Zhibo He, Xiaofeng Cong, Junming Hou, Jie Gui, Jian Wei You, Xin Gong, James Tin-Yau Kwok, Yuan Yan Tang
{"title":"未揭示的威胁:水下图像增强模型的对抗鲁棒性综合研究","authors":"Siyu Zhai, Zhibo He, Xiaofeng Cong, Junming Hou, Jie Gui, Jian Wei You, Xin Gong, James Tin-Yau Kwok, Yuan Yan Tang","doi":"arxiv-2409.06420","DOIUrl":null,"url":null,"abstract":"Learning-based methods for underwater image enhancement (UWIE) have undergone\nextensive exploration. However, learning-based models are usually vulnerable to\nadversarial examples so as the UWIE models. To the best of our knowledge, there\nis no comprehensive study on the adversarial robustness of UWIE models, which\nindicates that UWIE models are potentially under the threat of adversarial\nattacks. In this paper, we propose a general adversarial attack protocol. We\nmake a first attempt to conduct adversarial attacks on five well-designed UWIE\nmodels on three common underwater image benchmark datasets. Considering the\nscattering and absorption of light in the underwater environment, there exists\na strong correlation between color correction and underwater image enhancement.\nOn the basis of that, we also design two effective UWIE-oriented adversarial\nattack methods Pixel Attack and Color Shift Attack targeting different color\nspaces. The results show that five models exhibit varying degrees of\nvulnerability to adversarial attacks and well-designed small perturbations on\ndegraded images are capable of preventing UWIE models from generating enhanced\nresults. Further, we conduct adversarial training on these models and\nsuccessfully mitigated the effectiveness of adversarial attacks. 
In summary, we\nreveal the adversarial vulnerability of UWIE models and propose a new\nevaluation dimension of UWIE models.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":"42 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Unrevealed Threats: A Comprehensive Study of the Adversarial Robustness of Underwater Image Enhancement Models\",\"authors\":\"Siyu Zhai, Zhibo He, Xiaofeng Cong, Junming Hou, Jie Gui, Jian Wei You, Xin Gong, James Tin-Yau Kwok, Yuan Yan Tang\",\"doi\":\"arxiv-2409.06420\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Learning-based methods for underwater image enhancement (UWIE) have undergone\\nextensive exploration. However, learning-based models are usually vulnerable to\\nadversarial examples so as the UWIE models. To the best of our knowledge, there\\nis no comprehensive study on the adversarial robustness of UWIE models, which\\nindicates that UWIE models are potentially under the threat of adversarial\\nattacks. In this paper, we propose a general adversarial attack protocol. We\\nmake a first attempt to conduct adversarial attacks on five well-designed UWIE\\nmodels on three common underwater image benchmark datasets. Considering the\\nscattering and absorption of light in the underwater environment, there exists\\na strong correlation between color correction and underwater image enhancement.\\nOn the basis of that, we also design two effective UWIE-oriented adversarial\\nattack methods Pixel Attack and Color Shift Attack targeting different color\\nspaces. The results show that five models exhibit varying degrees of\\nvulnerability to adversarial attacks and well-designed small perturbations on\\ndegraded images are capable of preventing UWIE models from generating enhanced\\nresults. 
Further, we conduct adversarial training on these models and\\nsuccessfully mitigated the effectiveness of adversarial attacks. In summary, we\\nreveal the adversarial vulnerability of UWIE models and propose a new\\nevaluation dimension of UWIE models.\",\"PeriodicalId\":501289,\"journal\":{\"name\":\"arXiv - EE - Image and Video Processing\",\"volume\":\"42 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - EE - Image and Video Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.06420\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Image and Video Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06420","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Unrevealed Threats: A Comprehensive Study of the Adversarial Robustness of Underwater Image Enhancement Models
Learning-based methods for underwater image enhancement (UWIE) have been
explored extensively. However, learning-based models are usually vulnerable to
adversarial examples, and UWIE models are no exception. To the best of our
knowledge, there has been no comprehensive study of the adversarial robustness
of UWIE models, which suggests that UWIE models are potentially under threat
from adversarial attacks. In this paper, we propose a general adversarial
attack protocol and make a first attempt to conduct adversarial attacks on five
well-designed UWIE models on three common underwater image benchmark datasets.
Because of the scattering and absorption of light in the underwater
environment, there is a strong correlation between color correction and
underwater image enhancement.
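To make the color-space idea concrete: a perturbation can be applied in a chroma-separating color space rather than directly in RGB, so that it changes color while barely touching luminance. The paper's own Color Shift Attack formulation is not reproduced here; this is only a minimal illustration using the standard BT.601 YCbCr transform, with the shift values chosen arbitrarily.

```python
import numpy as np

# Full-range RGB <-> YCbCr transform (ITU-R BT.601 coefficients).
_RGB2YCBCR = np.array([[ 0.299,     0.587,     0.114],
                       [-0.168736, -0.331264,  0.5],
                       [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr(rgb):
    """rgb: float array in [0, 1], shape (..., 3)."""
    ycbcr = rgb @ _RGB2YCBCR.T
    ycbcr[..., 1:] += 0.5          # center chroma channels at 0.5
    return ycbcr

def ycbcr_to_rgb(ycbcr):
    out = ycbcr.copy()
    out[..., 1:] -= 0.5
    return out @ np.linalg.inv(_RGB2YCBCR).T

def color_shift(rgb, delta=(0.0, 0.02, -0.02)):
    """Hypothetical color-only perturbation: a small uniform shift in
    (Y, Cb, Cr); luminance (Y) is left untouched."""
    ycbcr = rgb_to_ycbcr(rgb) + np.asarray(delta)
    return np.clip(ycbcr_to_rgb(ycbcr), 0.0, 1.0)

img = np.random.default_rng(0).random((4, 4, 3))
adv = color_shift(img)
# The resulting RGB perturbation is small but spatially uniform and
# color-structured, unlike an unconstrained per-pixel perturbation.
```

Because the shift is applied to chroma only, it mimics exactly the kind of color cast that underwater scattering introduces, which is what makes such perturbations plausible inputs for a UWIE model.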
On that basis, we also design two effective UWIE-oriented adversarial attack
methods, Pixel Attack and Color Shift Attack, targeting different color
spaces. The results show that all five models exhibit varying degrees of
vulnerability to adversarial attacks, and that well-designed small
perturbations on degraded images can prevent UWIE models from generating
enhanced results. We further conduct adversarial training on these models and
successfully mitigate the effectiveness of the attacks. In summary, we reveal
the adversarial vulnerability of UWIE models and propose a new evaluation
dimension for UWIE models.
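The general attack recipe the abstract describes, i.e. finding a small input perturbation that degrades an enhancement model's output, is commonly implemented as L-infinity-bounded projected gradient descent (PGD). The sketch below is not the paper's protocol; it stands in a trained UWIE network with a toy differentiable "enhancement" (a fixed 3x3 color-correction matrix) so the gradient can be written analytically, and it maximizes the distance between the adversarial and clean enhanced outputs.

```python
import numpy as np

# Toy stand-in for a trained UWIE network: a fixed 3x3 color-correction
# matrix applied per pixel (illustration only, chosen arbitrarily).
W = np.array([[1.2,  0.0, 0.1],
              [0.0,  1.1, 0.0],
              [0.05, 0.0, 1.3]])

def enhance(x):
    return x @ W.T

def pgd_attack(x, eps=8 / 255, alpha=2 / 255, steps=10, seed=1):
    """L_inf PGD pushing the enhanced output away from the clean result."""
    rng = np.random.default_rng(seed)
    target = enhance(x)
    # Random start inside the eps-ball (a zero start gives a zero gradient here).
    delta = rng.uniform(-eps, eps, x.shape)
    for _ in range(steps):
        diff = enhance(x + delta) - target
        grad = 2.0 * diff @ W               # gradient of ||diff||^2 w.r.t. delta
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    # Clipping to [0, 1] can only shrink |x_adv - x|, so the budget holds.
    return np.clip(x + delta, 0.0, 1.0)

x = np.random.default_rng(1).random((8, 8, 3))
x_adv = pgd_attack(x)
```

Adversarial training, which the paper uses as the defense, would then wrap this inner loop: generate `x_adv` for each training batch and train the enhancement model on the perturbed inputs with the clean enhanced images as targets.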