A Game Theoretic Perspective on Adversarial Machine Learning and Related Cybersecurity Applications

Yan Zhou, Murat Kantarcioglu, B. Xi
DOI: 10.1002/9781119723950.ch13
In: Game Theory and Machine Learning for Cyber Security
Published: 2021-09-12
Citations: 2

Abstract

In cybersecurity applications, where machine learning algorithms are increasingly used to detect vulnerabilities, a distinctive challenge arises: attackers constantly devise exploits that target the machine learning models themselves. Traditional machine learning models are no longer robust or reliable when under attack. The action and reaction between a machine learning system and its adversary can be modeled as a game between two or more players. Under well-defined attack models, game theory can provide robustness guarantees for machine learning models that are otherwise vulnerable to application-time data corruption. We review two game theory-based machine learning techniques: in one case, the players play a zero-sum game by following a minimax strategy; in the other, they play a sequential game with one player as the leader and the rest as followers. Experimental results on e-mail spam and web spam datasets are presented. For the zero-sum game, we demonstrate that an adversarial SVM model built on the minimax strategy is much more resilient to adversarial attacks than standard SVM and one-class SVM models. We also show that optimal learning strategies derived to counter overly pessimistic attack models can produce unsatisfactory results when the real attacks are much weaker. For the sequential game, we demonstrate that the mixed strategy, which allows a player to randomize over available strategies, is in general the best solution when it is unknown what types of adversaries machine learning applications face in the wild. We also discuss scenarios where players' behavior may derail rational decision making, and models that account for such decision risks.
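The minimax solution concept behind the zero-sum formulation can be illustrated on a toy matrix game. This is only a minimal sketch of the solution concept, not the chapter's adversarial-SVM construction: the function name, the restriction to a 2x2 payoff matrix, and the no-saddle-point assumption are ours for illustration.

```python
def solve_2x2_zero_sum(M):
    """Mixed-strategy minimax solution for a 2x2 zero-sum game.

    M[i][j] is the row player's payoff; the column player receives -M[i][j].
    Returns (p, v): the probability of playing row 0, and the game value.
    Assumes the game has no saddle point, so the equilibrium is fully mixed
    and the row player's strategy equalizes the payoff across both columns.
    """
    (a, b), (c, d) = M
    # Equalize p*a + (1-p)*c = p*b + (1-p)*d and solve for p.
    p = (d - c) / (a - b - c + d)
    v = p * a + (1 - p) * c
    return p, v

# Matching pennies: a classic zero-sum game with no pure-strategy equilibrium.
# Randomizing 50/50 is optimal and the game value is 0.
p, v = solve_2x2_zero_sum([[1, -1], [-1, 1]])
print(p, v)  # 0.5 0.0
```

The same equalization logic generalizes to larger games, where the minimax strategy is computed via linear programming; the randomization it prescribes is exactly the mixed strategy the sequential-game discussion above recommends when the adversary type is unknown.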