A Comparative Study on Adversarial Attacks and Defense Mechanisms
Bhavana Kumbar, Ankita Mane, Varsha Chalageri, Shashidhara B. Vyakaranal, S. Meena, Sunil V. Gurlahosur, Uday Kulkarni
2022 2nd International Conference on Intelligent Technologies (CONIT), 2022-06-24
DOI: 10.1109/CONIT55038.2022.9848088
Abstract
Deep Neural Networks (DNNs) have demonstrated exceptional success in solving complicated tasks that were difficult to solve with conventional machine learning methods, and deep learning has become an integral part of many present-day applications. However, recent work has shown that DNNs are vulnerable to adversarial attacks: adding imperceptible perturbations to the inputs causes the networks to fail and predict incorrect outputs. In practice, adversarial attacks pose a significant challenge to the success of deep learning, as they aim to degrade classifier performance by fooling the learning algorithms. This paper provides a comprehensive comparative study of common adversarial attacks and the countermeasures against them, and analyzes their behavior on standard datasets such as MNIST and CIFAR10, as well as on a custom dataset of over 1000 images spanning 5 classes. To mitigate adversarial effects on deep learning models, we provide defenses against conventional adversarial attacks that reduce accuracy by as much as 70%, making the models more resilient against adversaries.
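To make the "imperceptible perturbation" idea concrete, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the conventional attacks papers in this area typically compare. The paper does not publish its code, so the PyTorch framing, the `fgsm_attack` name, and the `epsilon=0.1` budget are illustrative assumptions, not the authors' implementation.

```python
# Illustrative FGSM sketch (assumed PyTorch; not the paper's own code).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    """Craft adversarial examples by stepping x in the direction of the
    sign of the loss gradient (Goodfellow et al., 2015)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # The "imperceptible perturbation": an epsilon-scaled sign of the
    # input gradient, small enough to be invisible yet enough to flip
    # the classifier's prediction.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
```

A small epsilon keeps the perturbed image visually identical to the original while often dropping classifier accuracy dramatically, which is the failure mode the abstract describes.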
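On the defense side, the standard countermeasure against such attacks is adversarial training: augmenting each training batch with attacked copies of its inputs. The sketch below (reusing the hypothetical `fgsm_attack` above) shows one such training step under that assumption; the paper's exact defense procedure may differ.

```python
# Illustrative adversarial-training step (assumed setup, not the
# paper's exact procedure).
def adversarial_training_step(model, optimizer, x, y, epsilon=0.1):
    """One optimization step on a 50/50 mix of clean and FGSM-perturbed
    examples, so the classifier learns to resist the perturbations."""
    x_adv = fgsm_attack(model, x, y, epsilon)  # craft attacked batch
    optimizer.zero_grad()                      # clear grads from the attack
    loss = 0.5 * (F.cross_entropy(model(x), y)
                  + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Weighting the clean and adversarial losses equally is a common default; in practice the mix ratio and epsilon are tuned per dataset (e.g., MNIST tolerates larger epsilon than CIFAR10 before perturbations become visible).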