{"title":"Variable Length Joint Source-Channel Coding of Text Using Deep Neural Networks","authors":"Milind Rao, N. Farsad, A. Goldsmith","doi":"10.1109/SPAWC.2018.8445924","DOIUrl":null,"url":null,"abstract":"We consider joint source and channel coding of natural language over a noisy channel using deep learning. While the typical approach based on separate source and channel code design minimizes bit error rates, the proposed deep learning approach preserves semantic information of sentences. In particular, unlike previous work which used a fixed-length encoding per sentence, a variable-length neural network encoder is presented. The performance of this new architecture is compared to the one with fixed-length encoding per sentence. We show that the variable-length encoder has a lower word error rate compared with the fixed-length encoder as well as separate source and channel coding schemes across several different communication channels.","PeriodicalId":240036,"journal":{"name":"2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SPAWC.2018.8445924","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8
Abstract
We consider joint source and channel coding of natural language over a noisy channel using deep learning. While the typical approach based on separate source and channel code design minimizes bit error rates, the proposed deep learning approach preserves the semantic information of sentences. In particular, unlike previous work, which used a fixed-length encoding per sentence, a variable-length neural network encoder is presented. The performance of this new architecture is compared to that of the fixed-length encoder. We show that the variable-length encoder achieves a lower word error rate than both the fixed-length encoder and separate source and channel coding schemes across several different communication channels.
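To make the idea concrete, the following is a minimal, hypothetical sketch of a variable-length joint source-channel coder in PyTorch. The paper's exact architecture is not reproduced here: the LSTM choice, layer sizes, the one-channel-symbol-per-word rate, and the AWGN channel with `noise_std` are all illustrative assumptions. The key point it demonstrates is that a recurrent encoder naturally emits more channel symbols for longer sentences, and the whole encoder-channel-decoder chain is trained end to end on a word-level reconstruction loss rather than a bit error rate.

```python
# Hypothetical sketch of variable-length joint source-channel coding of text.
# All dimensions and the channel model are assumptions for illustration,
# not the authors' published architecture.
import torch
import torch.nn as nn


class VariableLengthJSCC(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, channel_dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Encoder RNN: one channel-symbol vector per input word, so longer
        # sentences automatically consume more channel uses.
        self.enc_rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_channel = nn.Linear(hidden_dim, channel_dim)
        # Decoder RNN maps noisy channel symbols back to word logits.
        self.dec_rnn = nn.LSTM(channel_dim, hidden_dim, batch_first=True)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, noise_std=0.1):
        # tokens: (batch, seq_len) integer word indices.
        h, _ = self.enc_rnn(self.embed(tokens))
        x = torch.tanh(self.to_channel(h))        # bounded channel symbols
        y = x + noise_std * torch.randn_like(x)   # assumed AWGN channel
        d, _ = self.dec_rnn(y)
        return self.to_vocab(d)                   # (batch, seq_len, vocab)


if __name__ == "__main__":
    model = VariableLengthJSCC(vocab_size=1000)
    sentences = torch.randint(0, 1000, (4, 12))   # toy batch of word indices
    logits = model(sentences)
    # Word-level reconstruction loss: optimizes semantic/word recovery,
    # not bit error rate as in separate source/channel designs.
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, 1000), sentences.reshape(-1))
    loss.backward()                               # trainable end to end
    print(logits.shape, float(loss))
```

Because the additive noise is a differentiable operation, gradients flow from the decoder's word-level loss back through the channel into the encoder, which is what allows the encoder to learn channel-robust, semantics-preserving representations.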