Reducing Adversarial Vulnerability Using GANs

Ciprian-Alin Simion
{"title":"Reducing Adversarial Vulnerability Using GANs","authors":"Ciprian-Alin Simion","doi":"10.1109/SYNASC57785.2022.00064","DOIUrl":null,"url":null,"abstract":"The Cyber-Threat industry is ever-growing and it is very likely that malware creators are using generative methods to create new malware as these algorithms prove to be very potent. As the majority of researchers in this field are focused on new methods to generate better adversarial examples (w.r.t. fidelity, variety or number) just a small portion of them are concerned with defense methods. This paper explores three methods of feature selection in the context of adversarial attacks. These methods aim to reduce the vulnerability of a Multi-Layer Perceptron to GAN-inflicted attacks by removing features based on rankings computed by type or by using LIME or F-Score. Even if no strong conclusion can be drawn, this paper stands as a Proof-of-Concept that because of good results in some cases, adversarial feature selection is a worthy exploration path.","PeriodicalId":446065,"journal":{"name":"2022 24th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 24th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SYNASC57785.2022.00064","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The cyber-threat industry is ever-growing, and it is very likely that malware creators are using generative methods to produce new malware, as these algorithms have proven to be very potent. While the majority of researchers in this field focus on new methods for generating better adversarial examples (with respect to fidelity, variety, or number), only a small portion are concerned with defense methods. This paper explores three feature-selection methods in the context of adversarial attacks. These methods aim to reduce the vulnerability of a Multi-Layer Perceptron to GAN-crafted attacks by removing features based on rankings computed by feature type, by LIME, or by F-score. Even though no strong conclusion can be drawn, this paper stands as a proof of concept that, given the good results obtained in some cases, adversarial feature selection is a path worth exploring.
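To illustrate the kind of defense the abstract describes, the sketch below shows F-score-based feature ranking followed by training a reduced-input Multi-Layer Perceptron. It is a minimal sketch, not the paper's method: the synthetic data, scikit-learn's ANOVA F-score as a stand-in for the paper's F-Score ranking, and the choice to keep the top-k features are all assumptions for illustration.

```python
# Minimal sketch: rank features by F-score, drop the rest, train an MLP.
# Assumptions (not from the paper): tabular features, sklearn's f_classif
# as the F-score, and keeping the k highest-ranked features as the defense.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a malware feature matrix X and labels y.
X, y = make_classification(n_samples=2000, n_features=100, n_informative=30,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Rank features by ANOVA F-score and keep only the top-k. The paper instead
# removes features according to type, LIME, or F-Score rankings aimed at
# GAN-exploitable features; this criterion is purely illustrative.
f_scores, _ = f_classif(X_train, y_train)
k = 60
keep = np.argsort(f_scores)[::-1][:k]

# Train the Multi-Layer Perceptron on the reduced feature set.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X_train[:, keep], y_train)
print("Held-out accuracy:", clf.score(X_test[:, keep], y_test))
```

In the paper's setting, the held-out evaluation would additionally include GAN-generated adversarial samples, so the quantity of interest is how much the reduced model's robustness improves rather than its clean accuracy alone.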