{"title":"Learning to Decode Polar Codes with Quantized LLRs Passing","authors":"Jian Gao, Jincheng Dai, K. Niu","doi":"10.1109/PIMRC.2019.8904378","DOIUrl":null,"url":null,"abstract":"In this paper, a weighted successive cancellation (WSC) algorithm is proposed to improve the decoding performance of polar codes with the quantized log-likelihood ratio (LLR). The weights used in the WSC are automatically learned by a neural network (NN). A novel NN model and its simplified architecture are build to select the optimal weights of the WSC, and all-zero codewords can train the NN. Besides, we impose the constraints on weights to direct the learning process. The small number of trainable parameters lead to faster learning without performance loss. Simulation results show that the WSC algorithm is valid to various codewords and the trained weights make it outperform SC algorithm with the same quantization precision. Notably, the WSC with 3-bit quantization precision achieves a near floating point performance for short length.","PeriodicalId":412182,"journal":{"name":"2019 IEEE 30th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 30th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PIMRC.2019.8904378","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In this paper, a weighted successive cancellation (WSC) algorithm is proposed to improve the decoding performance of polar codes with quantized log-likelihood ratios (LLRs). The weights used in the WSC are learned automatically by a neural network (NN). A novel NN model and its simplified architecture are built to select the optimal weights for the WSC, and the NN can be trained using only all-zero codewords. In addition, we impose constraints on the weights to direct the learning process. The small number of trainable parameters leads to faster learning without performance loss. Simulation results show that the WSC algorithm remains valid for arbitrary codewords, and the trained weights allow it to outperform the SC algorithm at the same quantization precision. Notably, the WSC with 3-bit quantization precision achieves near-floating-point performance for short code lengths.
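To make the idea concrete, the following is a minimal sketch of the mechanism the abstract describes: standard SC check-node and variable-node updates operating on quantized LLRs, with multiplicative weights that an NN would be trained to select. The weight placement, the 3-bit uniform quantizer, and all function names here are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def quantize(llr, bits=3, step=1.0):
    """Uniform b-bit quantizer: rounds LLRs onto a symmetric grid
    and clips them to the representable range (assumed design)."""
    levels = 2 ** (bits - 1) - 1          # e.g. 3 bits -> integers in [-3, 3]
    return np.clip(np.round(llr / step), -levels, levels) * step

def f_node(a, b, w=1.0):
    """Weighted min-sum check-node update (the SC 'f' function);
    w is a learned weight in the hypothetical WSC variant."""
    return w * np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g_node(a, b, u, w=1.0):
    """Weighted variable-node update (the SC 'g' function); u is the
    already-decided partial-sum bit fed back into the recursion."""
    return w * (b + (1.0 - 2.0 * u) * a)

# One decoding stage on a toy pair of quantized channel LLRs:
a = quantize(np.array([0.8, -2.4]))
b = quantize(np.array([1.6, 0.4]))
left = quantize(f_node(a, b, w=0.9))      # w would be NN-learned
u_hat = (left < 0).astype(float)          # hard decision on the left bit
right = quantize(g_node(a, b, u_hat, w=1.1))
```

In such a scheme, the weights compensate for the bias that coarse quantization introduces into the min-sum updates; training on all-zero codewords suffices because the decoder's update rules are symmetric with respect to the transmitted codeword.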