{"title":"DTITD: An Intelligent Insider Threat Detection Framework Based on Digital Twin and Self-Attention Based Deep Learning Models","authors":"Zhi Qiang Wang;Abdulmotaleb El Saddik","doi":"10.1109/ACCESS.2023.3324371","DOIUrl":null,"url":null,"abstract":"Recent statistics and studies show that the loss generated by insider threats is much higher than that generated by external attacks. More and more organizations are investing in or purchasing insider threat detection systems to prevent insider risks. However, the accurate and timely detection of insider threats faces significant challenges. In this study, we proposed an intelligent insider threat detection framework based on Digital Twins and self-attentions based deep learning models. First, this paper introduces insider threats and the challenges in detecting them. Then this paper presents recent related works on solving insider threat detection problems and their limitations. Next, we propose our solutions to address these challenges: building an innovative intelligent insider threat detection framework based on Digital Twin (DT) and self-attention based deep learning models, performing insight analysis of users’ behavior and entities, adopting contextual word embedding techniques using Bidirectional Encoder Representations from Transformers (BERT) model and sentence embedding technique using Generative Pre-trained Transformer 2 (GPT-2) model to perform data augmentation to overcome significant data imbalance, and adopting temporal semantic representation of users’ behaviors to build user behavior time sequences. Subsequently, this study built self-attention-based deep learning models to quickly detect insider threats. This study proposes a simplified transformer model named DistilledTrans and applies the original transformer model, DistilledTrans, BERT + final layer, Robustly Optimized BERT Approach (RoBERTa) + final layer, and a hybrid method combining pre-trained (BERT, RoBERTa) with a Convolutional Neural Network (CNN) or Long Short-term Memory (LSTM) network model to detect insider threats. Finally, this paper presents experimental results on a dense dataset CERT r4.2 and augmented sporadic dataset CERT r6.2, evaluates their performance, and performs a comparison analysis with state-of-the-art models. Promising experimental results show that 1) contextual word embedding insert and substitution predicted by the BERT model, and context embedding sentences predicted by the GPT-2 model are effective data augmentation approaches to address high data imbalance; 2) DistilledTrans trained with sporadic dataset CERT r6.2 augmented by the contextual embedding sentence method predicted by GPT-2, outperforms the state-of-the-art models in terms of all evaluation metrics, including accuracy, precision, recall, F1-score, and Area Under the ROC Curve (AUC). Additionally, its structure is much simpler, and thus training time and computing cost are much less than those of recent models; 3) when trained with the dense dataset CERT r4.2, pre-trained models BERT plus a final layer or RoBERTa plus a final layer can achieve significantly higher performance than the current models with a very little sacrifice of precision. 
However, complex hybrid methods may not be required.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"11 ","pages":"114013-114030"},"PeriodicalIF":3.4000,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/6287639/10005208/10285086.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Access","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10285086/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Recent statistics and studies show that the losses caused by insider threats are much higher than those caused by external attacks. More and more organizations are investing in or purchasing insider threat detection systems to mitigate insider risks. However, the accurate and timely detection of insider threats faces significant challenges. In this study, we propose an intelligent insider threat detection framework based on Digital Twin (DT) technology and self-attention-based deep learning models. First, this paper introduces insider threats and the challenges in detecting them. It then reviews recent related work on insider threat detection and its limitations. Next, we propose our solutions to address these challenges: building an innovative intelligent insider threat detection framework based on DT and self-attention-based deep learning models; performing insight analysis of user behaviors and entities; adopting contextual word embedding with the Bidirectional Encoder Representations from Transformers (BERT) model and sentence embedding with the Generative Pre-trained Transformer 2 (GPT-2) model to perform data augmentation and overcome significant data imbalance; and adopting a temporal semantic representation of user behaviors to build user behavior time sequences. Subsequently, this study builds self-attention-based deep learning models to detect insider threats quickly. It proposes a simplified transformer model named DistilledTrans and applies the original transformer model, DistilledTrans, BERT + final layer, Robustly Optimized BERT Approach (RoBERTa) + final layer, and hybrid methods combining a pre-trained model (BERT or RoBERTa) with a Convolutional Neural Network (CNN) or Long Short-Term Memory (LSTM) network to detect insider threats. Finally, this paper presents experimental results on the dense dataset CERT r4.2 and the augmented sporadic dataset CERT r6.2, evaluates the models' performance, and compares them with state-of-the-art models. Promising experimental results show that 1) contextual word insertion and substitution predicted by the BERT model, and contextual sentence generation predicted by the GPT-2 model, are effective data augmentation approaches for addressing high data imbalance; 2) DistilledTrans, trained on the sporadic dataset CERT r6.2 augmented with GPT-2-predicted contextual sentences, outperforms state-of-the-art models on all evaluation metrics, including accuracy, precision, recall, F1-score, and Area Under the ROC Curve (AUC); in addition, its structure is much simpler, so its training time and computing cost are much lower than those of recent models; and 3) when trained on the dense dataset CERT r4.2, the pre-trained models BERT plus a final layer or RoBERTa plus a final layer achieve significantly higher performance than current models with very little sacrifice of precision, so complex hybrid methods may not be required.
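For readers who want to see the augmentation idea concretely, the following is a minimal illustrative sketch of contextual word substitution with a masked BERT model, one of the two augmentation approaches named in the abstract. It assumes the Hugging Face transformers and PyTorch libraries; the helper augment_by_substitution, its parameters, and the example sentence are hypothetical illustrations, not the authors' actual implementation or dataset.

import random

import torch
from transformers import BertForMaskedLM, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased")
mlm.eval()

def augment_by_substitution(sentence, n_replacements=1, top_k=5):
    # Replace a few words with contextually plausible alternatives predicted by BERT.
    words = sentence.split()
    if not words:
        return sentence
    positions = random.sample(range(len(words)), k=min(n_replacements, len(words)))
    for pos in positions:
        masked = list(words)
        masked[pos] = tokenizer.mask_token              # "[MASK]"
        inputs = tokenizer(" ".join(masked), return_tensors="pt")
        with torch.no_grad():
            logits = mlm(**inputs).logits               # shape: (1, seq_len, vocab_size)
        # Locate the [MASK] position in the encoded sequence and sample one of the
        # top-k predicted tokens as the substitute word (may occasionally be a subword piece).
        mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
        top_ids = logits[0, mask_pos].topk(top_k, dim=-1).indices[0]
        words[pos] = tokenizer.convert_ids_to_tokens(random.choice(top_ids.tolist()))
    return " ".join(words)

# Example: create an extra minority-class sample from one behaviour "sentence".
print(augment_by_substitution("user copied confidential files to a removable drive after hours"))

A GPT-2-based variant would instead generate whole synthetic behaviour sentences conditioned on minority-class context, which is the sentence-level augmentation the abstract describes for the sparse CERT r6.2 data.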
IEEE Access · COMPUTER SCIENCE, INFORMATION SYSTEMS · ENGINEERING, ELECTRICAL & ELECTRONIC
CiteScore: 9.80
Self-citation rate: 7.70%
Articles published: 6673
Review time: 6 weeks
About the journal:
IEEE Access® is a multidisciplinary, open access (OA), applications-oriented, all-electronic archival journal that continuously presents the results of original research or development across all of IEEE's fields of interest.
IEEE Access will publish articles that are of high interest to readers, original, technically correct, and clearly presented. Supported by author publication charges (APC), its hallmarks are a rapid peer review and publication process with open access to all readers. Unlike IEEE's traditional Transactions or Journals, reviews are "binary", in that reviewers will either Accept or Reject an article in the form it is submitted in order to achieve rapid turnaround. Especially encouraged are submissions on:
Multidisciplinary topics, or applications-oriented articles and negative results that do not fit within the scope of IEEE's traditional journals.
Practical articles discussing new experiments or measurement techniques, and interesting solutions to engineering problems.
Development of new or improved fabrication or manufacturing techniques.
Reviews or survey articles of new or evolving fields oriented to assist others in understanding the new area.