The Analysis of the Generator Architectures and Loss Functions in Improving the Stability of GANs Training towards Efficient Intrusion Detection

Raha Soleymanzadeh, R. Kashef
{"title":"The Analysis of the Generator Architectures and Loss Functions in Improving the Stability of GANs Training towards Efficient Intrusion Detection","authors":"Raha Soleymanzadeh, R. Kashef","doi":"10.1109/ISCMI56532.2022.10068468","DOIUrl":null,"url":null,"abstract":"Various research studies have been recently introduced in developing generative models, especially in computer vision and image classification. These models are inspired by a generator and discriminator network architecture in a min-max optimization game called Generative Adversarial Networks (GANs). However, GANs-based models suffer from training instability, which means high oscillations during the training, which provides inaccurate results. There are various causes beyond the instability behaviours, such as the adopted generator architecture, loss function, and distance metrics. In this paper, we focus on the impact of the generator architectures and the loss functions on the GANs training. We aim to provide a comparative assessment of various architectures focusing on ensemble and hybrid models and loss functions such as Focal loss, Binary Cross-Entropy and Mean Squared loss function. Experimental results on NSL-KDD and UNSW-NB15 datasets show that the ensemble models are more stable in terms of training and have higher intrusion detection rates. Additionally, the focal loss can improve the performance of detection minority classes. Using Mean squared loss improved the detection rate for discriminator, however with the Binary Cross entropy loss function, the deep features representation is improved and there is more stability in trends for all architectures.","PeriodicalId":340397,"journal":{"name":"2022 9th International Conference on Soft Computing & Machine Intelligence (ISCMI)","volume":"162 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 9th International Conference on Soft Computing & Machine Intelligence (ISCMI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCMI56532.2022.10068468","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Various research studies have recently been introduced on generative models, especially in computer vision and image classification. These models are built around a generator and a discriminator network trained in a min-max optimization game known as Generative Adversarial Networks (GANs). However, GAN-based models suffer from training instability, meaning high oscillations during training that lead to inaccurate results. Several causes lie behind this instability, such as the adopted generator architecture, loss function, and distance metrics. In this paper, we focus on the impact of generator architectures and loss functions on GAN training. We provide a comparative assessment of various architectures, focusing on ensemble and hybrid models, and of loss functions including the focal loss, binary cross-entropy, and mean squared error loss. Experimental results on the NSL-KDD and UNSW-NB15 datasets show that the ensemble models train more stably and achieve higher intrusion detection rates. Additionally, the focal loss improves the detection of minority classes. The mean squared error loss improves the discriminator's detection rate, whereas the binary cross-entropy loss yields better deep feature representations and more stable training trends across all architectures.
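The sketch below illustrates the three discriminator loss functions the abstract compares (binary cross-entropy, mean squared error, and focal loss). It is not the authors' code: the choice of PyTorch, the helper names, and the focal-loss settings (gamma, alpha) are assumptions made purely for illustration.

```python
# Minimal sketch (assumed PyTorch implementation, not from the paper) of the
# three discriminator losses compared: BCE, MSE, and focal loss.
import torch
import torch.nn.functional as F

def bce_loss(d_real, d_fake):
    """Binary cross-entropy: the standard GAN discriminator loss on logits."""
    return (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
            + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))

def mse_loss(d_real, d_fake):
    """Mean squared error: a least-squares (LSGAN-style) discriminator loss."""
    return (F.mse_loss(torch.sigmoid(d_real), torch.ones_like(d_real))
            + F.mse_loss(torch.sigmoid(d_fake), torch.zeros_like(d_fake)))

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss: down-weights easy examples so minority (rare attack)
    classes contribute more to the gradient. gamma/alpha values are assumed."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)  # probability assigned to the true class
    return (alpha * (1.0 - p_t) ** gamma * bce).mean()
```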