Derivation of Mutual Information and Linear Minimum Mean-Square Error for Viterbi Decoding of Convolutional Codes Using the Innovations Method

Impact Factor: 2.9 | CAS Zone 3 (Computer Science) | JCR Q3, Computer Science, Information Systems
Masato Tajima
DOI: 10.1109/TIT.2025.3591781
Journal: IEEE Transactions on Information Theory, vol. 71, no. 10, pp. 7435-7458
Published: 2025-07-23
URL: https://ieeexplore.ieee.org/document/11095371/
Citations: 0

Abstract

We apply the innovations method to Viterbi decoding of convolutional codes. First, we calculate the covariance matrix of the innovation (i.e., the soft-decision input to the main decoder in a scarce-state-transition (SST) Viterbi decoder). This yields a covariance matrix corresponding to that of the one-step prediction error in the Kalman filter. From that matrix, a covariance matrix corresponding to that of the filtering error in the Kalman filter is then derived using the Kalman-filter update formula. This is justified by the fact that Viterbi decoding of convolutional codes has the structure of the Kalman filter. As a result, an upper bound on the average mutual information per branch for Viterbi decoding of convolutional codes is given in terms of these covariance matrices. Moreover, the trace of the latter covariance matrix represents the (filtering) linear minimum mean-square error (LMMSE) per branch. We show that the upper bound on the average mutual information per branch is sandwiched between half the SNR times the filtering LMMSE per branch and half the SNR times the one-step prediction LMMSE per branch. In the case of quick-look-in (QLI) codes, from the covariance matrix of the soft-decision input to the main decoder, we can obtain a matrix whose trace, as we show, is connected with the linear smoothing error.
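The ordering the abstract relies on (filtering LMMSE never exceeding the one-step prediction LMMSE, which is what makes the sandwich bound possible) follows from the standard Kalman covariance recursion. The sketch below illustrates this with an invented toy state-space model; the matrices `A`, `C`, `Q`, `R` are hypothetical placeholders, not the paper's convolutional-code quantities.

```python
import numpy as np

def riccati(A, C, Q, R, P0, steps=200):
    """Iterate the discrete Riccati recursion and return the final
    one-step prediction covariance P_pred together with the filtering
    covariance P_filt obtained from it by the measurement update."""
    P = P0
    for _ in range(steps):
        # Measurement (filtering) update:
        # P_filt = P - P C^T (C P C^T + R)^{-1} C P
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        P_filt = P - K @ C @ P
        # Time (prediction) update for the next step:
        P = A @ P_filt @ A.T + Q
    # Measurement update at the (near-)steady-state prediction covariance
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)
    return P, P - K @ C @ P

# Toy model parameters (illustrative only)
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # state-transition matrix
C = np.eye(2)                            # observation matrix
Q = 0.1 * np.eye(2)                      # process-noise covariance
R = 0.5 * np.eye(2)                      # observation-noise covariance

P_pred, P_filt = riccati(A, C, Q, R, np.eye(2))

# P_pred - P_filt = K S K^T is positive semidefinite, so the filtering
# MMSE (trace of P_filt) never exceeds the one-step prediction MMSE
# (trace of P_pred).
print(np.trace(P_filt) <= np.trace(P_pred))
```

In the paper's setting these two traces play the roles of the filtering and one-step prediction LMMSEs per branch, with half the SNR times each giving the lower and upper ends of the sandwich.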
Source journal: IEEE Transactions on Information Theory (Engineering: Electrical & Electronic)
CiteScore: 5.70
Self-citation rate: 20.00%
Articles per year: 514
Review time: 12 months
About the journal: The IEEE Transactions on Information Theory is a journal that publishes theoretical and experimental papers concerned with the transmission, processing, and utilization of information. The boundaries of acceptable subject matter are intentionally not sharply delimited. Rather, it is hoped that as the focus of research activity changes, a flexible policy will permit this Transactions to follow suit. Current appropriate topics are best reflected by recent Tables of Contents; they are summarized in the titles of editorial areas that appear on the inside front cover.