Adversarial Attacks against Neural Networks using Projected Gradient Descent with Line Search Algorithm

Lourdu Mahimai Doss P, M. Gunasekaran
DOI: 10.1109/ViTECoN58111.2023.10157254
Published in: 2023 2nd International Conference on Vision Towards Emerging Trends in Communication and Networking Technologies (ViTECoN)
Publication date: 2023-05-05
Citations: 0

Abstract

This research investigates the security challenges posed by the deployment of neural networks, with a focus on evasion attacks. Using the MNIST dataset as a representative example, it employs the projected gradient descent with line search (PGDLS) algorithm to craft adversarial examples that deceive the network into making incorrect predictions. We demonstrate that neural networks are vulnerable to such attacks and evaluate PGDLS's ability to craft adversarial examples. The work also provides insight into the security challenges posed by evasion attacks and contributes to ongoing research on the security of neural networks. Its ultimate goal is to raise awareness of the security risks associated with deploying neural networks and to inform the development of robust and secure models. To prevent neural networks from malfunctioning or being misused in real-world scenarios, continued research into strengthening their security is essential.
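The paper itself does not include code, but the attack family it describes can be sketched as follows: projected gradient descent that maximizes the model's loss inside an L-infinity ball around the input, where each step size is chosen by a backtracking line search rather than fixed in advance. Everything below is an illustrative assumption, not the paper's implementation — the function names, the hyperparameter values, and the toy logistic-regression "classifier" standing in for a trained MNIST network are all invented for the sketch.

```python
import numpy as np

def pgd_line_search_attack(x, y, loss_fn, grad_fn, eps=0.3, steps=10,
                           alpha0=0.1, shrink=0.5, ls_tries=5):
    """Maximize loss_fn(x_adv, y) subject to ||x_adv - x||_inf <= eps
    and x_adv in [0, 1] (the MNIST pixel range).

    Instead of a fixed step size, each iteration backtracks: it tries
    alpha0, then alpha0*shrink, alpha0*shrink**2, ..., and accepts the
    first candidate that strictly increases the loss.
    """
    x_adv = x.copy()
    for _ in range(steps):
        g = np.sign(grad_fn(x_adv, y))       # ascent direction (L_inf geometry)
        base = loss_fn(x_adv, y)
        alpha = alpha0
        for _ in range(ls_tries):            # backtracking line search
            cand = np.clip(x_adv + alpha * g, x - eps, x + eps)  # project to eps-ball
            cand = np.clip(cand, 0.0, 1.0)   # project to valid pixel range
            if loss_fn(cand, y) > base:      # accept the first improving step
                x_adv = cand
                break
            alpha *= shrink                  # otherwise shrink the step and retry
    return x_adv

# Toy stand-in for a trained classifier: logistic regression on 4 "pixels".
w, b = np.array([2.0, -1.0, 0.5, 1.5]), -0.5

def loss_fn(x, y):                           # binary cross-entropy, y in {0, 1}
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def grad_fn(x, y):                           # analytic d(loss)/d(x) for this model
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return (p - y) * w

x = np.array([0.8, 0.2, 0.5, 0.9])           # clean input, confidently class 1
x_adv = pgd_line_search_attack(x, 1.0, loss_fn, grad_fn, eps=0.3)
```

The line search is what distinguishes this sketch from plain PGD: when a fixed step would overshoot and fail to raise the loss, the backtracking loop shrinks it until progress is made, at the cost of extra loss evaluations per iteration. In a real attack the toy model would be replaced by the network's cross-entropy loss and its input gradient, e.g. from automatic differentiation.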