L2R-MLP: a multilabel classification scheme for the detection of DNS tunneling

Emmanuel Oluwatobi Asani, Mojiire Oluwaseun Ayoola, Emmanuel Tunbosun Aderemi, Victoria Oluwaseyi Adedayo-Ajayi, Joyce A. Ayoola, Oluwatobi Noah Akande, Jide Kehinde Adeniyi, Oluwambo Tolulope Olowe
{"title":"L2R-MLP: a multilabel classification scheme for the detection of DNS tunneling","authors":"Emmanuel Oluwatobi Asani ,&nbsp;Mojiire Oluwaseun Ayoola ,&nbsp;Emmanuel Tunbosun Aderemi ,&nbsp;Victoria Oluwaseyi Adedayo-Ajayi ,&nbsp;Joyce A. Ayoola ,&nbsp;Oluwatobi Noah Akande ,&nbsp;Jide Kehinde Adeniyi ,&nbsp;Oluwambo Tolulope Olowe","doi":"10.1016/j.dsm.2024.10.005","DOIUrl":null,"url":null,"abstract":"<div><div>Domain name system (DNS) tunneling attacks can bypass firewalls, which typically “trust” DNS transmissions by concealing malicious traffic in the packets trusted to convey legitimate ones, thereby making detection using conventional security techniques challenging. To address this issue, we propose a Lebesgue-2 regularized multilayer perceptron (L2R-MLP) algorithm for detecting DNS tunneling attacks. The DNS dataset was carefully curated from a publicly available repository, and relevant features, such as packet size and count, were selected using the recusive feature elimination technique. L2 regularization in the MLP classifier's hidden layers enhances pattern recognition during training, effectively countering the risk of overfitting. When evaluated against a benchmark MLP model, L2R-MLP demonstrated superior performance with 99.46% accuracy, 97.00% precision, 97.00% F1-score, 99.95% recall, and an AUC of 89.00%. In comparison, the benchmark MLP achieved 92.53% accuracy, 96.00% precision, 97.00% F1-score, 99.95% recall, and an AUC of 87.00%. This highlights the effectiveness of L2 regularization in improving predictive capabilities and model generalization for unseen instances.</div></div>","PeriodicalId":100353,"journal":{"name":"Data Science and Management","volume":"8 3","pages":"Pages 323-331"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Data Science and Management","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666764924000560","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Domain name system (DNS) tunneling attacks can bypass firewalls, which typically "trust" DNS transmissions, by concealing malicious traffic inside packets trusted to convey legitimate queries, making detection with conventional security techniques challenging. To address this issue, we propose a Lebesgue-2 regularized multilayer perceptron (L2R-MLP) algorithm for detecting DNS tunneling attacks. The DNS dataset was carefully curated from a publicly available repository, and relevant features, such as packet size and count, were selected using the recursive feature elimination technique. L2 regularization in the MLP classifier's hidden layers enhances pattern recognition during training, effectively countering the risk of overfitting. When evaluated against a benchmark MLP model, L2R-MLP demonstrated superior performance with 99.46% accuracy, 97.00% precision, 97.00% F1-score, 99.95% recall, and an AUC of 89.00%. In comparison, the benchmark MLP achieved 92.53% accuracy, 96.00% precision, 97.00% F1-score, 99.95% recall, and an AUC of 87.00%. This highlights the effectiveness of L2 regularization in improving predictive capabilities and model generalization for unseen instances.
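To make the described pipeline concrete, below is a minimal sketch in Python of the two steps the abstract names: recursive feature elimination for selecting DNS traffic features, followed by an MLP trained with an L2 (weight-decay) penalty. This is not the authors' implementation; the synthetic data, base estimator for RFE, hidden-layer sizes, and the `alpha` regularization strength are illustrative assumptions.

```python
# Sketch of an RFE + L2-regularized MLP pipeline (assumed setup, not the paper's code).
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Placeholder data standing in for the curated DNS dataset; real features would
# include attributes such as packet size and packet count.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Recursive feature elimination: rank features with a simple linear estimator
# and keep the top subset (the number retained here is an assumption).
selector = RFE(estimator=LogisticRegression(max_iter=1000), n_features_to_select=10)
X_train_sel = selector.fit_transform(X_train, y_train)
X_test_sel = selector.transform(X_test)

# MLP with L2 regularization: in scikit-learn, `alpha` applies an L2 penalty to
# the network weights, which approximates the regularized hidden layers described.
l2r_mlp = MLPClassifier(hidden_layer_sizes=(64, 32), alpha=1e-3, max_iter=500, random_state=42)
l2r_mlp.fit(X_train_sel, y_train)

y_pred = l2r_mlp.predict(X_test_sel)
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("f1-score :", f1_score(y_test, y_pred))
```

In practice, the L2 strength would be tuned (e.g., by cross-validation) to balance fit and generalization, which is the overfitting trade-off the abstract attributes to the regularizer.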