{"title":"Quantized Reservoir Computing on Edge Devices for Communication Applications","authors":"Shiya Liu, Lingjia Liu, Y. Yi","doi":"10.1109/SEC50012.2020.00068","DOIUrl":null,"url":null,"abstract":"With the advance of edge computing, a fast and efficient machine learning model running on edge devices is needed. In this paper, we propose a novel quantization approach that reduces the memory and compute demands on edge devices without losing much accuracy. Also, we explore its application in communication such as symbol detection in 5G systems, attack detection of smart grid, and dynamic spectrum access. Conventional neural networks such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) could be exploited on these applications and achieve state-of-the-art performance. However, conventional neural networks consume a large amount of computation and storage resources, and thus do not fit well to edge devices. Reservoir computing (RC), which is a framework for computation derived from RNN, consists of a fixed reservoir layer and a trained readout layer. The advantages of RC compared to traditional RNNs are faster learning and lower training costs. Besides, RC has faster inference speed with fewer parameters and resistance to overfitting issues. These merits make the RC system more suitable for applications running on edge devices. We apply the proposed quantization approach to RC systems and demonstrate the proposed quantized RC system on Xilinx Zynq®-7000 FPGA board. On the sequential MNIST dataset, the quantized RC system utilizes 62%, 65%, and 64% less of DSP, FF, and LUT, respectively compared to the floating-point RNN. The inference speed is improved by 17 times with an 8% accuracy drop.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SEC50012.2020.00068","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
With the advance of edge computing, fast and efficient machine learning models that run on edge devices are needed. In this paper, we propose a novel quantization approach that reduces the memory and compute demands on edge devices without losing much accuracy. We also explore its application to communication problems such as symbol detection in 5G systems, attack detection in smart grids, and dynamic spectrum access. Conventional neural networks such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) can be applied to these problems and achieve state-of-the-art performance. However, they consume large amounts of computation and storage resources and are therefore a poor fit for edge devices. Reservoir computing (RC), a computational framework derived from RNNs, consists of a fixed reservoir layer and a trained readout layer. Compared to traditional RNNs, RC learns faster and has lower training cost; it also offers faster inference with fewer parameters and resistance to overfitting. These merits make RC systems well suited to applications running on edge devices. We apply the proposed quantization approach to RC systems and demonstrate the quantized RC system on a Xilinx Zynq®-7000 FPGA board. On the sequential MNIST dataset, the quantized RC system uses 62%, 65%, and 64% fewer DSP, FF, and LUT resources, respectively, than the floating-point RNN, and inference speed improves by 17 times with an 8% accuracy drop.
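To make the fixed-reservoir/trained-readout structure concrete, the following is a minimal sketch assuming a standard echo-state-network formulation of RC with a random fixed reservoir, a ridge-regression readout, and simple uniform post-training quantization of the readout weights. The layer sizes, the `quantize` helper, and the 8-bit width are illustrative assumptions, not the paper's actual quantization scheme or FPGA implementation.

```python
import numpy as np

# Illustrative reservoir-computing (echo state network) sketch with a simple
# uniform post-training quantization of the readout weights.
# All names and sizes below (n_in, n_res, ridge, n_bits) are hypothetical.

rng = np.random.default_rng(0)
n_in, n_res, n_out = 28, 100, 10          # e.g. row-by-row sequential MNIST

# Fixed (untrained) reservoir: random input and recurrent weights; the
# recurrent matrix is rescaled to a spectral radius below 1 for stability.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))

def reservoir_states(X):
    """Run one input sequence X of shape (T, n_in) through the reservoir."""
    h = np.zeros(n_res)
    for x_t in X:
        h = np.tanh(W_in @ x_t + W_res @ h)
    return h                               # final state used as the feature vector

def train_readout(H, Y, ridge=1e-2):
    """Ridge regression: the readout is the only trained part of an RC system."""
    return np.linalg.solve(H.T @ H + ridge * np.eye(n_res), H.T @ Y)

def quantize(W, n_bits=8):
    """Uniform symmetric quantization of the readout weights (simulated)."""
    scale = np.abs(W).max() / (2 ** (n_bits - 1) - 1)
    return np.round(W / scale) * scale

# Toy usage: random data standing in for sequential MNIST sequences.
X_train = rng.standard_normal((200, 28, n_in))
Y_train = np.eye(n_out)[rng.integers(0, n_out, 200)]
H = np.stack([reservoir_states(x) for x in X_train])
W_out = quantize(train_readout(H, Y_train))
pred = np.argmax(H @ W_out, axis=1)
```

Because only the readout is trained, quantizing (or otherwise compressing) the reservoir and readout weights leaves the inexpensive ridge-regression training untouched, which is one reason an RC system maps naturally onto resource-constrained edge hardware.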