{"title":"Building a Robust and Efficient Defensive System Using Hybrid Adversarial Attack","authors":"Rachel Selva Dhanaraj;M. Sridevi","doi":"10.1109/TAI.2024.3384337","DOIUrl":null,"url":null,"abstract":"Adversarial attack is a method used to deceive machine learning models, which offers a technique to test the robustness of the given model, and it is vital to balance robustness with accuracy. Artificial intelligence (AI) researchers are constantly trying to find a better balance to develop new techniques and approaches to minimize loss of accuracy and increase robustness. To address these gaps, this article proposes a hybrid adversarial attack strategy by utilizing the Fast Gradient Sign Method and Projected Gradient Descent effectively to compute the perturbations that deceive deep neural networks, thus quantifying robustness without compromising its accuracy. Three distinct datasets—CelebA, CIFAR-10, and MNIST—were used in the extensive experiment, and six analyses were carried out to assess how well the suggested technique performed against attacks and defense mechanisms. The proposed model yielded confidence values of 99.99% for the MNIST dataset, 99.93% for the CelebA dataset, and 99.99% for the CIFAR-10 dataset. Defense study revealed that the proposed model outperformed previous models with a robust accuracy of 75.33% for the CelebA dataset, 55.4% for the CIFAR-10 dataset, and 98.65% for the MNIST dataset. The results of the experiment demonstrate that the proposed model is better than the other existing methods in computing the adversarial test and improvising the robustness of the system, thereby minimizing the accuracy loss.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"5 9","pages":"4470-4478"},"PeriodicalIF":0.0000,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10488755/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
An adversarial attack is a method of deceiving machine learning models that also provides a way to test a model's robustness, and balancing robustness with accuracy is vital. Artificial intelligence (AI) researchers continually seek a better balance, developing new techniques and approaches that minimize the loss of accuracy while increasing robustness. To address these gaps, this article proposes a hybrid adversarial attack strategy that combines the Fast Gradient Sign Method and Projected Gradient Descent to compute perturbations that deceive deep neural networks, thereby quantifying robustness without compromising accuracy. Three distinct datasets (CelebA, CIFAR-10, and MNIST) were used in the extensive experiments, and six analyses were carried out to assess how well the proposed technique performed against attacks and defense mechanisms. The proposed model yielded confidence values of 99.99% for the MNIST dataset, 99.93% for the CelebA dataset, and 99.99% for the CIFAR-10 dataset. The defense study revealed that the proposed model outperformed previous models, with a robust accuracy of 75.33% for the CelebA dataset, 55.4% for the CIFAR-10 dataset, and 98.65% for the MNIST dataset. The experimental results demonstrate that the proposed model outperforms existing methods in computing adversarial tests and improving the robustness of the system, thereby minimizing the loss of accuracy.
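Since the abstract describes combining the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) to compute perturbations, the following is a minimal PyTorch sketch of one plausible hybrid: an FGSM step followed by PGD refinement. The combination order and the `eps`, `alpha`, and `steps` values are illustrative assumptions, not the paper's exact algorithm or configuration.

```python
# Hypothetical sketch of a hybrid FGSM + PGD attack: FGSM supplies an initial
# perturbation, which PGD then refines iteratively. Hyperparameters are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn.functional as F


def hybrid_fgsm_pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Return adversarial examples from an FGSM-initialized PGD attack."""
    model.eval()
    x = x.clone().detach()

    # --- FGSM step: single perturbation along the sign of the input gradient ---
    x_fgsm = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_fgsm), y)
    grad = torch.autograd.grad(loss, x_fgsm)[0]
    x_adv = torch.clamp(x + eps * grad.sign(), 0, 1).detach()

    # --- PGD refinement: iterative steps projected back into the eps-ball ---
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project onto the L-infinity ball of radius eps around the clean input,
        # then clamp to the valid [0, 1] pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = torch.clamp(x_adv, 0, 1).detach()

    return x_adv
```

Robust accuracy of the kind reported in the abstract could then be estimated by classifying `hybrid_fgsm_pgd(model, x, y)` over a test set and measuring how often the model still predicts the correct label.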