Adversarial attack defense analysis: An empirical approach in cybersecurity perspective

IF 1.3 Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING
Kousik Barik, Sanjay Misra
{"title":"Adversarial attack defense analysis: An empirical approach in cybersecurity perspective","authors":"Kousik Barik ,&nbsp;Sanjay Misra","doi":"10.1016/j.simpa.2024.100681","DOIUrl":null,"url":null,"abstract":"<div><p>Advancements in artificial intelligence in the cybersecurity domain introduce significant security challenges. A critical concern is the exposure of deep learning techniques to adversarial attacks. Adversary users intentionally attempt to mislead the techniques by infiltrating adversarial samples to mislead the prediction of security devices. The study presents extensive experimentation of defense methods using Python-based open-source code with two benchmark datasets, and the outcomes are demonstrated using evaluation metrics. This code library can be easily utilized and reproduced for cybersecurity research on countering adversarial attacks. Exploring strategies for protecting against adversarial attacks is significant in enhancing the resilience of deep learning techniques.</p></div>","PeriodicalId":29771,"journal":{"name":"Software Impacts","volume":null,"pages":null},"PeriodicalIF":1.3000,"publicationDate":"2024-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2665963824000691/pdfft?md5=21bed32ce73b54cc3d2a33e51bf65798&pid=1-s2.0-S2665963824000691-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Software Impacts","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2665963824000691","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
引用次数: 0

Abstract

Advancements in artificial intelligence in the cybersecurity domain introduce significant security challenges. A critical concern is the exposure of deep learning techniques to adversarial attacks. Adversarial users intentionally inject adversarial samples to mislead the predictions of security devices built on these techniques. The study presents extensive experiments with defense methods using Python-based open-source code on two benchmark datasets, and the outcomes are reported using evaluation metrics. The code library can be readily reused and reproduced for cybersecurity research on countering adversarial attacks. Exploring strategies for protecting against adversarial attacks is significant for enhancing the resilience of deep learning techniques.
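The released code library is not reproduced here; as a minimal sketch of the kind of workflow the abstract describes (craft adversarial samples, apply a defense, report an evaluation metric), the snippet below uses PyTorch with FGSM adversarial samples, adversarial training as the defense, and robust accuracy as the metric. The framework choice, the helper names (fgsm_attack, adversarial_training_step, robust_accuracy), and the epsilon value are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only (assumes PyTorch; not the authors' released library):
# FGSM attack, adversarial training as a defense, and robust accuracy as a metric.
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, epsilon=0.1):
    """Craft adversarial samples with the Fast Gradient Sign Method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb the input in the direction that increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.detach().clamp(0.0, 1.0)


def adversarial_training_step(model, optimizer, x, y, epsilon=0.1):
    """One defense step: train on a mix of clean and adversarial samples."""
    model.train()
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y) +
                  F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()


def robust_accuracy(model, loader, epsilon=0.1, device="cpu"):
    """Evaluation metric: accuracy on FGSM-perturbed test samples."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = fgsm_attack(model, x, y, epsilon)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return correct / total
```

In this kind of setup, comparing accuracy on clean versus FGSM-perturbed test data before and after adversarial training is one way the resilience gain of a defense method can be quantified.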

Source journal: Software Impacts
CiteScore: 2.70
Self-citation rate: 9.50%
Articles published: 0
Review time: 16 days