Capacity-achieving codes for channels with memory with maximum-likelihood decoding

J. Bae, A. Anastasopoulos
{"title":"Capacity-achieving codes for channels with memory with maximum-likelihood decoding","authors":"J. Bae, A. Anastasopoulos","doi":"10.1109/ISIT.2009.5205692","DOIUrl":null,"url":null,"abstract":"Codes on sparse graphs have been shown to achieve remarkable performance in point-to-point channels with low decoding complexity. Most of the results in this area are based on experimental evidence and/or approximate analysis. The question of whether codes on sparse graphs can achieve the capacity of noisy channels with iterative decoding is still open, and has only been conclusively and positively answered for the binary erasure channel. On the other hand, codes on sparse graphs have been proven to achieve the capacity of memoryless, binary-input, output-symmetric channels with finite graphical complexity per information bit when maximum likelihood (ML) decoding is performed. In this paper, we consider transmission over finite-state channels (FSCs). We derive upper bounds on the average error probability of code ensembles with ML decoding. Based on these bounds we show that codes on sparse graphs can achieve the symmetric information rate (SIR) of FSCs, which is the maximum achievable rate with independently and uniformly distributed input sequences. In order to achieve rates beyond the SIR, we consider a simple quantization scheme that when applied to ensembles of codes on sparse graphs induces a Markov distribution on the transmitted sequence. By deriving average error probability bounds for these quantized code ensembles, we prove that they can achieve the information rates corresponding to the induced Markov distribution, and thus approach the FSC capacity.","PeriodicalId":412925,"journal":{"name":"2009 IEEE International Symposium on Information Theory","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 IEEE International Symposium on Information Theory","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISIT.2009.5205692","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Codes on sparse graphs have been shown to achieve remarkable performance in point-to-point channels with low decoding complexity. Most of the results in this area are based on experimental evidence and/or approximate analysis. The question of whether codes on sparse graphs can achieve the capacity of noisy channels with iterative decoding is still open, and has only been conclusively and positively answered for the binary erasure channel. On the other hand, codes on sparse graphs have been proven to achieve the capacity of memoryless, binary-input, output-symmetric channels with finite graphical complexity per information bit when maximum likelihood (ML) decoding is performed. In this paper, we consider transmission over finite-state channels (FSCs). We derive upper bounds on the average error probability of code ensembles with ML decoding. Based on these bounds we show that codes on sparse graphs can achieve the symmetric information rate (SIR) of FSCs, which is the maximum achievable rate with independently and uniformly distributed input sequences. In order to achieve rates beyond the SIR, we consider a simple quantization scheme that when applied to ensembles of codes on sparse graphs induces a Markov distribution on the transmitted sequence. By deriving average error probability bounds for these quantized code ensembles, we prove that they can achieve the information rates corresponding to the induced Markov distribution, and thus approach the FSC capacity.
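As a rough illustration of the symmetric information rate (SIR) referred to in the abstract: for an additive-noise FSC such as the Gilbert-Elliott channel (a standard textbook example, not necessarily the channel model used in the paper), the SIR with i.i.d. uniform inputs reduces to 1 minus the entropy rate of the hidden-Markov noise process, which can be estimated with a forward (sum-product) recursion over the channel state. The sketch below is not the authors' code; it assumes illustrative parameter values and NumPy.

```python
import numpy as np

# Gilbert-Elliott channel (illustrative parameters, not from the paper):
# two states G(ood)/B(ad); Y = X xor Z, Z ~ Bernoulli(p_state),
# and the state evolves as a two-state Markov chain independent of the input.
P_STATE = np.array([[0.99, 0.01],    # P(next state | current state = G)
                    [0.10, 0.90]])   # P(next state | current state = B)
P_ERR = np.array([0.01, 0.30])       # crossover probability in states G, B
PI = np.array([P_STATE[1, 0], P_STATE[0, 1]])
PI = PI / PI.sum()                   # stationary distribution of the state chain

def simulate_noise(n, rng):
    """Sample a length-n realization of the hidden-Markov noise process Z."""
    z = np.empty(n, dtype=int)
    s = rng.choice(2, p=PI)
    for t in range(n):
        z[t] = rng.random() < P_ERR[s]
        s = rng.choice(2, p=P_STATE[s])
    return z

def neg_log_prob(z):
    """-log2 p(z^n) via the forward (sum-product) recursion over the state."""
    alpha = PI.copy()                 # p(S_1)
    ll = 0.0
    for zt in z:
        like = P_ERR if zt else 1.0 - P_ERR   # p(z_t | state)
        alpha = (alpha * like) @ P_STATE      # absorb z_t, predict next state
        c = alpha.sum()                       # p(z_t | z_1 .. z_{t-1})
        ll += np.log2(c)
        alpha /= c                            # rescale to avoid underflow
    return -ll

rng = np.random.default_rng(0)
n = 100_000
z = simulate_noise(n, rng)
h_noise = neg_log_prob(z) / n        # estimate of the noise entropy rate
sir = 1.0 - h_noise                  # uniform i.i.d. input => H(Y^n) = n bits
print(f"estimated SIR ~ {sir:.3f} bits/channel use")
```

For FSCs whose noise is not additive or whose state depends on the input, the same simulation-based approach still applies, but H(Y^n) and H(Y^n|X^n) must each be estimated with their own forward recursion rather than exploiting the additive structure above.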