A Game Theoretical vulnerability analysis of Adversarial Attack

Khondker Fariha Hossain, A. Tavakkoli, S. Sengupta
{"title":"A Game Theoretical vulnerability analysis of Adversarial Attack","authors":"Khondker Fariha Hossain, A. Tavakkoli, S. Sengupta","doi":"10.48550/arXiv.2210.06670","DOIUrl":null,"url":null,"abstract":"In recent times deep learning has been widely used for automating various security tasks in Cyber Domains. However, adversaries manipulate data in many situations and diminish the deployed deep learning model's accuracy. One notable example is fooling CAPTCHA data to access the CAPTCHA-based Classifier leading to the critical system being vulnerable to cybersecurity attacks. To alleviate this, we propose a computational framework of game theory to analyze the CAPTCHA-based Classifier's vulnerability, strategy, and outcomes by forming a simultaneous two-player game. We apply the Fast Gradient Symbol Method (FGSM) and One Pixel Attack on CAPTCHA Data to imitate real-life scenarios of possible cyber-attack. Subsequently, to interpret this scenario from a Game theoretical perspective, we represent the interaction in the Stackelberg Game in Kuhn tree to study players' possible behaviors and actions by applying our Classifier's actual predicted values. Thus, we interpret potential attacks in deep learning applications while representing viable defense strategies in the game theory prospect.","PeriodicalId":91444,"journal":{"name":"Advances in visual computing : ... international symposium, ISVC ... : proceedings. International Symposium on Visual Computing","volume":"3 1","pages":"369-380"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advances in visual computing : ... international symposium, ISVC ... : proceedings. 
International Symposium on Visual Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2210.06670","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In recent times, deep learning has been widely used to automate various security tasks in cyber domains. However, adversaries can manipulate data in many situations, diminishing a deployed deep learning model's accuracy. One notable example is fooling CAPTCHA data to bypass a CAPTCHA-based classifier, leaving the critical system behind it vulnerable to cybersecurity attacks. To alleviate this, we propose a computational game-theoretic framework to analyze the CAPTCHA-based classifier's vulnerabilities, strategies, and outcomes by forming a simultaneous two-player game. We apply the Fast Gradient Sign Method (FGSM) and the One Pixel Attack to CAPTCHA data to imitate real-life scenarios of possible cyber-attacks. Subsequently, to interpret this scenario from a game-theoretic perspective, we represent the interaction as a Stackelberg game in a Kuhn tree, using our classifier's actual predicted values to study the players' possible behaviors and actions. Thus, we characterize potential attacks on deep learning applications while presenting viable defense strategies from a game-theoretic standpoint.
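For readers unfamiliar with FGSM, the attack perturbs an input in the direction of the sign of the loss gradient with respect to that input. The following is a minimal sketch using a toy linear classifier with a hand-derived gradient; the weights, input, and epsilon are illustrative assumptions, not values from the paper, which applies the attack to a CAPTCHA classifier.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """Fast Gradient Sign Method: step epsilon in the sign of the loss gradient,
    then clip back into the valid pixel range [0, 1]."""
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy linear classifier: score = w . x, loss = -y * score,
# so the gradient of the loss w.r.t. the input is d(loss)/dx = -y * w.
w = np.array([0.5, -0.25, 0.75])   # hypothetical model weights
x = np.array([0.2, 0.8, 0.4])      # hypothetical "clean" input (e.g. pixels)
y = 1                              # true label in {-1, +1}

grad = -y * w                      # loss gradient at x
x_adv = fgsm_perturb(x, grad, epsilon=0.1)

# The perturbation lowers the correct-class score (raises the loss):
print(w @ x)      # clean score: 0.2
print(w @ x_adv)  # adversarial score: 0.05
```

Each pixel moves by exactly epsilon, yet the classifier's margin on the true class shrinks, which is the mechanism the paper's attacker strategies exploit.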