Generating Optimal Attack Paths in Generative Adversarial Phishing

Rayah Al-Qurashi, Ahmed Aleroud, A. Saifan, Mohammad Alsmadi, I. Alsmadi
{"title":"Generating Optimal Attack Paths in Generative Adversarial Phishing","authors":"Rayah Al-Qurashi, Ahmed Aleroud, A. Saifan, Mohammad Alsmadi, I. Alsmadi","doi":"10.1109/ISI53945.2021.9624751","DOIUrl":null,"url":null,"abstract":"Phishing attacks have witnessed a rapid increase thanks to the matured social engineering techniques, COVID-19 pandemic, and recently adversarial deep learning techniques. Even though adversarial phishing attacks are recent, attackers are crafting such attacks by considering context, testing different attack paths, then selecting paths that can evade machine learning phishing detectors. This research proposes an approach that generates adversarial phishing attacks by finding optimal subsets of features that lead to higher evasion rate. We used feature engineering techniques such as Recursive Feature Elimination, Lasso, and Cancel Out to generate then test attack vectors that have higher potential to evade phishing detectors. We tested the evasion performance of each technique then classified different evasion tests as passed or failed depending on their evasion rate. 
Our findings showed that our threat model has better evasion capability compared to the original Generative Adversarial Deep Neural Network (GAN) which perturbs features in a random manner.","PeriodicalId":347770,"journal":{"name":"2021 IEEE International Conference on Intelligence and Security Informatics (ISI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Intelligence and Security Informatics (ISI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISI53945.2021.9624751","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Phishing attacks have increased rapidly owing to matured social engineering techniques, the COVID-19 pandemic, and, recently, adversarial deep learning techniques. Even though adversarial phishing attacks are recent, attackers craft them by considering context, testing different attack paths, and then selecting the paths that evade machine learning phishing detectors. This research proposes an approach that generates adversarial phishing attacks by finding optimal subsets of features that lead to a higher evasion rate. We used feature engineering techniques such as Recursive Feature Elimination, Lasso, and CancelOut to generate and then test attack vectors with higher potential to evade phishing detectors. We measured the evasion performance of each technique and classified each evasion test as passed or failed depending on its evasion rate. Our findings showed that our threat model has better evasion capability than the original Generative Adversarial Network (GAN), which perturbs features in a random manner.
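The core idea, selecting a feature subset to perturb rather than perturbing at random, can be illustrated with two of the three selection techniques the abstract names. The sketch below is not the paper's implementation: the dataset is synthetic, and the estimators and hyperparameters (`LogisticRegression`, `alpha=0.05`, a subset size of 4) are illustrative assumptions.

```python
# Illustrative sketch: ranking candidate features with Recursive Feature
# Elimination (RFE) and Lasso. An attacker would then perturb only the
# selected subset, rather than random features as in the baseline GAN.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import Lasso, LogisticRegression

# Synthetic stand-in for a phishing feature matrix: 8 features, binary label.
X, y = make_classification(n_samples=200, n_features=8, n_informative=4,
                           random_state=0)

# RFE: recursively drop the weakest feature until 4 remain.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=4).fit(X, y)
rfe_subset = np.where(rfe.support_)[0]

# Lasso: L1 regularization zeroes out weak coefficients; keep the survivors.
lasso = Lasso(alpha=0.05).fit(X, y)
lasso_subset = np.where(lasso.coef_ != 0)[0]

print("RFE subset:  ", rfe_subset)
print("Lasso subset:", lasso_subset)
```

Each technique yields a different candidate subset, which is why the paper tests the evasion rate of each path before choosing among them.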