Akshay Jain;Shiv Ram Dubey;Satish Kumar Singh;KC Santosh;Bidyut Baran Chaudhuri
{"title":"欺骗卷积神经网络的非均匀光照攻击","authors":"Akshay Jain;Shiv Ram Dubey;Satish Kumar Singh;KC Santosh;Bidyut Baran Chaudhuri","doi":"10.1109/TAI.2025.3549396","DOIUrl":null,"url":null,"abstract":"Convolutional neural networks (CNNs) have made remarkable strides; however, they remain susceptible to vulnerabilities, particularly to image perturbations that humans can easily recognize. This weakness, often termed as “attacks,” underscores the limited robustness of CNNs and the need for research into fortifying their resistance against such manipulations. This study introduces a novel nonuniform illumination (NUI) attack technique, where images are subtly altered using varying NUI masks. Extensive experiments are conducted on widely accepted datasets including CIFAR10, TinyImageNet, CalTech256, and NWPU-RESISC45 focusing on image classification with 12 different NUI masks. The resilience of VGG, ResNet, MobilenetV3-small, InceptionV3, and EfficientNet_b0 models against NUI attacks are evaluated. Our results show a substantial decline in the CNN models’ classification accuracy when subjected to NUI attacks, due to changes in the image pixel value distribution, indicating their vulnerability under NUI. To mitigate this, a defense strategy is proposed, including NUI-attacked images, generated through the new NUI transformation, into the training set. The results demonstrate a significant enhancement in CNN model performance when confronted with perturbed images affected by NUI attacks. This strategy seeks to bolster CNN models’ resilience against NUI attacks. A comparative study with other attack techniques shows the effectiveness of the NUI attack and defense technique.<xref><sup>1</sup></xref><fn><p><sup>1</sup>The code is available at <uri>https://github.com/Akshayjain97/Non-Uniform_Illumination</uri></p></fn>","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 9","pages":"2476-2485"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Non-uniform Illumination Attack for Fooling Convolutional Neural Networks\",\"authors\":\"Akshay Jain;Shiv Ram Dubey;Satish Kumar Singh;KC Santosh;Bidyut Baran Chaudhuri\",\"doi\":\"10.1109/TAI.2025.3549396\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Convolutional neural networks (CNNs) have made remarkable strides; however, they remain susceptible to vulnerabilities, particularly to image perturbations that humans can easily recognize. This weakness, often termed as “attacks,” underscores the limited robustness of CNNs and the need for research into fortifying their resistance against such manipulations. This study introduces a novel nonuniform illumination (NUI) attack technique, where images are subtly altered using varying NUI masks. Extensive experiments are conducted on widely accepted datasets including CIFAR10, TinyImageNet, CalTech256, and NWPU-RESISC45 focusing on image classification with 12 different NUI masks. The resilience of VGG, ResNet, MobilenetV3-small, InceptionV3, and EfficientNet_b0 models against NUI attacks are evaluated. Our results show a substantial decline in the CNN models’ classification accuracy when subjected to NUI attacks, due to changes in the image pixel value distribution, indicating their vulnerability under NUI. 
To mitigate this, a defense strategy is proposed, including NUI-attacked images, generated through the new NUI transformation, into the training set. The results demonstrate a significant enhancement in CNN model performance when confronted with perturbed images affected by NUI attacks. This strategy seeks to bolster CNN models’ resilience against NUI attacks. A comparative study with other attack techniques shows the effectiveness of the NUI attack and defense technique.<xref><sup>1</sup></xref><fn><p><sup>1</sup>The code is available at <uri>https://github.com/Akshayjain97/Non-Uniform_Illumination</uri></p></fn>\",\"PeriodicalId\":73305,\"journal\":{\"name\":\"IEEE transactions on artificial intelligence\",\"volume\":\"6 9\",\"pages\":\"2476-2485\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-03-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on artificial intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10916770/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10916770/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Non-uniform Illumination Attack for Fooling Convolutional Neural Networks
Convolutional neural networks (CNNs) have made remarkable strides; however, they remain vulnerable, particularly to image perturbations that humans can easily recognize. This weakness, commonly exploited through "attacks," underscores the limited robustness of CNNs and the need for research into fortifying their resistance against such manipulations. This study introduces a novel nonuniform illumination (NUI) attack technique, in which images are subtly altered using varying NUI masks. Extensive experiments are conducted on widely accepted datasets, including CIFAR10, TinyImageNet, CalTech256, and NWPU-RESISC45, focusing on image classification with 12 different NUI masks. The resilience of VGG, ResNet, MobilenetV3-small, InceptionV3, and EfficientNet_b0 models against NUI attacks is evaluated. Our results show a substantial decline in the CNN models' classification accuracy when subjected to NUI attacks, due to changes in the image pixel value distribution, indicating their vulnerability under NUI. To mitigate this, a defense strategy is proposed that includes NUI-attacked images, generated through the new NUI transformation, in the training set. The results demonstrate a significant improvement in CNN model performance on images perturbed by NUI attacks. This strategy seeks to bolster CNN models' resilience against NUI attacks. A comparative study with other attack techniques shows the effectiveness of the NUI attack and defense technique.¹
¹ The code is available at https://github.com/Akshayjain97/Non-Uniform_Illumination
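The abstract describes perturbing images with spatially varying illumination masks and defending by adding such perturbed images to the training set. The paper's 12 NUI masks are defined in the linked repository; purely as an illustrative sketch of the general idea (not the authors' exact transformation), a planar multiplicative brightness mask could be applied as below. The function names, gradient parameterization, and value ranges here are assumptions for illustration.

```python
import numpy as np

def planar_nui_mask(h, w, gx=0.3, gy=0.3, bias=0.0):
    """Hypothetical planar non-uniform illumination mask.

    Brightness varies linearly across the image plane, so different
    regions are brightened or darkened by different amounts.
    gx and gy control the horizontal and vertical gradient strength.
    """
    ys = np.linspace(-1.0, 1.0, h).reshape(h, 1)
    xs = np.linspace(-1.0, 1.0, w).reshape(1, w)
    # Multiplicative mask of shape (h, w), centered around 1.0.
    return 1.0 + bias + gx * xs + gy * ys

def apply_nui(image, mask):
    """Apply a multiplicative illumination mask to an RGB image in [0, 255]."""
    attacked = image.astype(np.float32) * mask[..., None]  # broadcast over channels
    return np.clip(attacked, 0, 255).astype(np.uint8)

# Example: perturb a random 32x32 RGB image (CIFAR10-sized).
img = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
mask = planar_nui_mask(32, 32, gx=0.25, gy=-0.25)
img_attacked = apply_nui(img, mask)
```

In a defense along the lines described in the abstract, such NUI-transformed copies would simply be mixed into the training set as additional samples so that the model sees illumination-perturbed images during training.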