{"title":"Low-precision deep-learning-based automatic modulation recognition system","authors":"Satish Kumar, Aakash Agarwal, Neeraj Varshney, Rajarshi Mahapatra","doi":"10.52953/ctyj2699","DOIUrl":null,"url":null,"abstract":"Convolution Neural Network (CNN)-based deep learning models have recently been employed in Automated Modulation Classification (AMC) systems, with excellent results. However, hardware deployment of these CNN-based AMC models is very difficult due to their large size, floating point weights and activations, and real-time processing requirements in hardware such as Field Programmable Gate Arrays (FPGAs). In this study, we designed CNN-based AMC techniques for complex-valued temporal radio signal domains and made them less complex with a small memory footprint for FPGA implementation. This work mainly focuses on quantized CNN, low precision mathematics, and quantization-aware CNN training to overcome the problem of larger model sizes, floating-point weights, and activations. Low precision weights, activations, and quantized CNN, on the other hand, have a considerable impact on the accuracy of the model. Thus, we propose an iterative pruning-based training mechanism to maintain the overall accuracy above a certain threshold while decreasing the model size for hardware implementation. The proposed schemes are 21.55 times less complex and achieve at least 1.6% higher accuracy than the baseline. Moreover, results show that our convolution layer-based Quantized Modulation Classification Network (QMCNet) with pruning has 92.01% less multiply-accumulate bit operations (bit_operations), 61.39% less activation bits, and 87.58% less weight bits than the 8 bit quantized baseline model whereas the quantized and pruned Residual-Unit based model (RUNet) has 95.36% less bit_operations, 29.97% less activation bits and 98.22% less weight bits than the 8 bit quantized baseline model.","PeriodicalId":274720,"journal":{"name":"ITU Journal on Future and Evolving Technologies","volume":"197 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ITU Journal on Future and Evolving Technologies","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.52953/ctyj2699","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Convolutional Neural Network (CNN)-based deep learning models have recently been employed in Automated Modulation Classification (AMC) systems with excellent results. However, deploying these CNN-based AMC models on hardware such as Field Programmable Gate Arrays (FPGAs) is very difficult because of their large size, floating-point weights and activations, and real-time processing requirements. In this study, we designed CNN-based AMC techniques for complex-valued temporal radio signals and reduced their complexity and memory footprint for FPGA implementation. This work focuses mainly on quantized CNNs, low-precision arithmetic, and quantization-aware CNN training to overcome the problems of large model size and floating-point weights and activations. On the other hand, low-precision weights and activations and CNN quantization have a considerable impact on model accuracy. We therefore propose an iterative pruning-based training mechanism that keeps overall accuracy above a chosen threshold while decreasing the model size for hardware implementation. The proposed schemes are 21.55 times less complex and achieve at least 1.6% higher accuracy than the baseline. Moreover, results show that our convolution-layer-based Quantized Modulation Classification Network (QMCNet) with pruning has 92.01% fewer multiply-accumulate bit operations (bit_operations), 61.39% fewer activation bits, and 87.58% fewer weight bits than the 8-bit quantized baseline model, whereas the quantized and pruned Residual-Unit-based model (RUNet) has 95.36% fewer bit_operations, 29.97% fewer activation bits, and 98.22% fewer weight bits than the 8-bit quantized baseline model.
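The abstract names two techniques, quantization-aware training and iterative magnitude pruning against an accuracy threshold, without implementation detail. The sketch below shows how the two ideas are commonly combined in plain PyTorch; it is a generic illustration, not the authors' QMCNet/RUNet code. The class names, the straight-through-estimator fake quantizer, and all hyperparameters (bit width, pruning fraction, accuracy threshold, round counts) are illustrative assumptions.

```python
# Minimal sketch: quantization-aware training (QAT) via fake quantization
# with a straight-through estimator, plus iterative magnitude pruning.
# NOT the paper's implementation; all names and values are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune


class FakeQuant(torch.autograd.Function):
    """Uniform symmetric fake quantization with a straight-through estimator."""

    @staticmethod
    def forward(ctx, x, num_bits=8):
        qmax = 2 ** (num_bits - 1) - 1
        scale = x.abs().max().clamp(min=1e-8) / qmax
        return torch.round(x / scale).clamp(-qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # STE: pretend quantization is the identity for the backward pass.
        return grad_output, None


class QuantConv1d(nn.Conv1d):
    """Conv1d whose weights are fake-quantized during the forward pass,
    so training 'sees' the low-precision weights it will run with."""

    def __init__(self, *args, num_bits=8, **kwargs):
        super().__init__(*args, **kwargs)
        self.num_bits = num_bits

    def forward(self, x):
        w_q = FakeQuant.apply(self.weight, self.num_bits)
        return F.conv1d(x, w_q, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)


def iterative_prune_and_finetune(model, train_step, evaluate,
                                 prune_frac=0.2, acc_threshold=0.90,
                                 max_rounds=5, finetune_steps=1000):
    """Each round removes the smallest-magnitude weights, then fine-tunes;
    rounds stop once validation accuracy drops below the threshold."""
    for _ in range(max_rounds):
        for module in model.modules():
            if isinstance(module, nn.Conv1d):
                prune.l1_unstructured(module, name="weight", amount=prune_frac)
        for _ in range(finetune_steps):
            train_step(model)
        if evaluate(model) < acc_threshold:
            break  # accuracy fell below the threshold; stop pruning further
    return model
```

The fake quantizer keeps a full-precision shadow copy of the weights for the optimizer while the forward pass uses their quantized values, which is the standard way to train a network that must later run with low-precision integer arithmetic on an FPGA.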