{"title":"Adversarial Attacks against Neural Networks using Projected Gradient Descent with Line Search Algorithm","authors":"Lourdu Mahimai Doss P, M. Gunasekaran","doi":"10.1109/ViTECoN58111.2023.10157254","DOIUrl":null,"url":null,"abstract":"The aim of the research is to investigate the security challenges posed by the deployment of neural networks, with a focus on evasion attacks. The research uses the MNIST dataset as a representative example and employs the projected gradient descent with the line search (PGDLS) algorithm to craft adversarial examples that can deceive the network into making incorrect predictions. In this research, we demonstrate that neural networks are vulnerable to such attacks and investigate PGDLS ability to craft adversarial examples. The research also aims to provide insights into the security challenges posed by evasion attacks and to contribute to the ongoing research into the security of neural networks. The ultimate goal of the research is to raise awareness of the security risks associated with the deployment of neural networks and to provide valuable information for the development of robust and secure models. 
In order to prevent neural networks from malfunctioning or being misused in real-world scenarios, ongoing research is essential to enhance their security.","PeriodicalId":407488,"journal":{"name":"2023 2nd International Conference on Vision Towards Emerging Trends in Communication and Networking Technologies (ViTECoN)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 2nd International Conference on Vision Towards Emerging Trends in Communication and Networking Technologies (ViTECoN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ViTECoN58111.2023.10157254","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
This research investigates the security challenges posed by the deployment of neural networks, with a focus on evasion attacks. Using the MNIST dataset as a representative example, it employs the projected gradient descent with line search (PGDLS) algorithm to craft adversarial examples that deceive the network into making incorrect predictions. We demonstrate that neural networks are vulnerable to such attacks and evaluate the ability of PGDLS to craft effective adversarial examples. The work also provides insights into the security challenges posed by evasion attacks and contributes to ongoing research on the security of neural networks. Its ultimate goal is to raise awareness of the security risks associated with deploying neural networks and to provide valuable information for the development of robust and secure models. To prevent neural networks from malfunctioning or being misused in real-world scenarios, continued research into enhancing their security is essential.
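The abstract describes crafting adversarial examples by projected gradient descent combined with a line search over the step size. A minimal sketch of that idea is shown below, assuming an L-infinity perturbation budget, a backtracking line search, and a toy linear classifier standing in for the trained network; the model, loss, and parameter names are illustrative, and the paper's exact PGDLS formulation is not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))  # toy stand-in "network": linear logits over a flattened 28x28 image

def loss_and_grad(x, y):
    """Cross-entropy loss of the toy model and its gradient w.r.t. the input x."""
    z = W @ x
    z -= z.max()                         # numerical stability for softmax
    p = np.exp(z) / np.exp(z).sum()
    g = W.T @ (p - np.eye(10)[y])        # d(loss)/dx for cross-entropy over softmax
    return -np.log(p[y] + 1e-12), g

def pgd_line_search(x0, y, eps=0.3, steps=20):
    """Ascend the loss inside an L-inf ball of radius eps around x0.

    Each iteration takes the gradient-sign direction, then backtracks on
    the step size until the (projected) step actually increases the loss.
    """
    x = x0.copy()
    for _ in range(steps):
        loss, g = loss_and_grad(x, y)
        d = np.sign(g)                   # ascent direction
        alpha = eps                      # start the line search at the full budget
        while alpha > 1e-4:
            x_try = np.clip(x + alpha * d, x0 - eps, x0 + eps)  # project onto the eps-ball
            x_try = np.clip(x_try, 0.0, 1.0)                    # keep pixels in valid range
            if loss_and_grad(x_try, y)[0] > loss:
                x = x_try                # accept the first improving step size
                break
            alpha /= 2.0                 # otherwise halve and retry
    return x

x = rng.uniform(0, 1, size=784)          # stand-in for an MNIST image
y = int(np.argmax(W @ x))                # label the model currently predicts
x_adv = pgd_line_search(x, y)
print("prediction before:", y, "after:", int(np.argmax(W @ x_adv)))
```

By construction the line search only accepts steps that increase the loss, so the attack monotonically degrades the model's confidence in the original prediction while the projection keeps the perturbation imperceptibly bounded.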