Fast hardware assisted online learning using unsupervised deep learning structure for anomaly detection
Khaled Alrawashdeh, C. Purdy
2018 International Conference on Information and Computer Technologies (ICICT), March 2018
DOI: 10.1109/INFOCT.2018.8356855
Real-time deployment of deep learning algorithms is challenged by two less frequently addressed issues. The first is data inefficiency: the model requires many epochs of trial and error to converge, which makes it impractical for real-time applications. The second is the high-precision computation load that deep learning algorithms incur to achieve high accuracy during training and inference. In this paper, we address both issues and apply our model to online anomaly detection on an FPGA. To address the first issue, we propose a compressed training model for the contrastive divergence (CD) algorithm in the Deep Belief Network (DBN). The goal is to dynamically adjust the training vector according to feedback from the free energy and the reconstruction error, which allows for better generalization. To address the second issue, we propose a Hybrid Stochastic Dynamic Fixed-Point (HSDFP) method, which provides a training environment with large reductions in computation, area, and power on the FPGA. Our framework enables the DBN structure to take actions and detect attacks online; the network can therefore collect a sufficient number of training samples and avoid overfitting. We show that (1) our proposed method converges faster than state-of-the-art deep learning methods; (2) the FPGA implementation achieves an accelerated inference time of 0.008 ms and a high power efficiency of 37 G-ops/s/W compared to CPU, GPU, and 16-bit fixed-point arithmetic; and (3) the FPGA also achieves minimal degradation in accuracy, reaching 95%, 95.4%, and 97.9% on the MNIST, NSL-KDD, and Kyoto benchmark datasets.
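The abstract describes contrastive divergence training in a DBN, with the free energy and reconstruction error fed back to steer training. The paper's exact compressed-training rule is not reproduced here, so the following is only a minimal CD-1 sketch for a single RBM layer that exposes the two feedback signals such a scheme would monitor; the class name, learning rate, and demo data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli RBM trained with one-step contrastive divergence (CD-1)."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible bias
        self.c = np.zeros(n_hidden)    # hidden bias
        self.lr = lr

    def free_energy(self, v):
        # F(v) = -v.b - sum_j log(1 + exp(c_j + (v W)_j); one scalar per sample
        return -v @ self.b - np.sum(np.log1p(np.exp(self.c + v @ self.W)), axis=1)

    def cd1_step(self, v0):
        # positive phase: sample hidden units given the data
        ph0 = sigmoid(v0 @ self.W + self.c)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # negative phase: one Gibbs step back to the visible layer
        pv1 = sigmoid(h0 @ self.W.T + self.b)
        ph1 = sigmoid(pv1 @ self.W + self.c)
        # gradient approximation: <v h>_data - <v h>_model
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b += self.lr * (v0 - pv1).mean(axis=0)
        self.c += self.lr * (ph0 - ph1).mean(axis=0)
        # reconstruction error: one of the two feedback signals
        return float(np.mean((v0 - pv1) ** 2))

# demo: learn a trivial two-pattern dataset and track both feedback signals
data = np.repeat(np.array([[1., 1., 0., 0.], [0., 0., 1., 1.]]), 32, axis=0)
rbm = RBM(n_visible=4, n_hidden=8)
err_init = rbm.cd1_step(data)
for _ in range(200):
    err = rbm.cd1_step(data)
free_energies = rbm.free_energy(data)
```

In an online setting like the paper's, a controller could watch `err` and `free_energies` after each step and shrink or stop further presentations of vectors the model already reconstructs well; the specific adjustment policy is the paper's contribution and is not sketched here.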
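The HSDFP method combines stochastic rounding with a dynamic (shared-exponent) fixed-point representation to cut FPGA arithmetic cost. The paper's exact bit-allocation policy is not given here; this sketch shows the two ingredients generically, with the word length and range rule chosen as assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def to_dynamic_fixed_point(x, word_bits=8, stochastic=True):
    """Quantize an array to a shared-exponent (dynamic) fixed-point format.

    The fractional length is chosen per-tensor from the largest magnitude
    (the 'dynamic' part); rounding is stochastic so the quantization error
    is unbiased in expectation (the 'stochastic' part).
    """
    max_abs = np.max(np.abs(x))
    if max_abs == 0:
        return np.zeros_like(x), 0
    # integer bits needed to cover the range (incl. sign); the rest are fraction
    int_bits = max(0, int(np.ceil(np.log2(max_abs + 1e-12))) + 1)
    frac_bits = word_bits - int_bits
    scale = 2.0 ** frac_bits
    scaled = x * scale
    if stochastic:
        # round up with probability equal to the fractional remainder
        floor = np.floor(scaled)
        q = floor + (rng.random(x.shape) < (scaled - floor))
    else:
        q = np.round(scaled)
    # saturate to the representable signed range
    lo, hi = -(2 ** (word_bits - 1)), 2 ** (word_bits - 1) - 1
    q = np.clip(q, lo, hi)
    return q / scale, frac_bits

# usage: quantize some weights to an 8-bit word
x = np.array([0.1, -0.3, 0.7])
q, fb = to_dynamic_fixed_point(x, word_bits=8)
```

Because stochastic rounding preserves small gradient contributions on average rather than flushing them to zero, it is a common choice for low-precision training; the worst-case per-element error stays within one quantization step, `2**-frac_bits`.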