Authors: Masato Tajima
DOI: 10.1109/TIT.2025.3591781
Journal: IEEE Transactions on Information Theory, vol. 71, no. 10, pp. 7435-7458 (Impact Factor 2.9, JCR Q3, Computer Science, Information Systems)
Published: 2025-07-23 (Journal Article)
URL: https://ieeexplore.ieee.org/document/11095371/
Derivation of Mutual Information and Linear Minimum Mean-Square Error for Viterbi Decoding of Convolutional Codes Using the Innovations Method
We apply the innovations method to Viterbi decoding of convolutional codes. First, we calculate the covariance matrix of the innovation (i.e., the soft-decision input to the main decoder in a scarce-state-transition (SST) Viterbi decoder). From it, a covariance matrix corresponding to that of the one-step prediction error in the Kalman filter is obtained. Furthermore, from that matrix, a covariance matrix corresponding to that of the filtering error is derived using the standard Kalman-filter update formula. This is justified by the fact that Viterbi decoding of convolutional codes has the structure of the Kalman filter. As a result, an upper bound on the average mutual information per branch for Viterbi decoding of convolutional codes is given in terms of these covariance matrices. Moreover, the trace of the latter covariance matrix represents the (filtering) linear minimum mean-square error (LMMSE) per branch. We show that the upper bound on the average mutual information per branch is sandwiched between half the SNR times the filtering LMMSE per branch and half the SNR times the one-step prediction LMMSE per branch. In the case of quick-look-in (QLI) codes, from the covariance matrix of the soft-decision input to the main decoder, we can derive a further matrix, and we show that the trace of this matrix is connected with the linear smoothing error.
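The abstract's sandwich bound rests on a general property of the Kalman filter: the filtering-error covariance never exceeds the one-step prediction-error covariance, since the measurement update can only reduce uncertainty. The scalar sketch below illustrates this with a hypothetical state-space model (the parameters `a`, `h`, `q`, `r` are illustrative assumptions, not the paper's decoder setup), iterating the standard covariance recursions to steady state and checking the ordering of the two (snr/2)-scaled LMMSE terms.

```python
def kalman_covariances(a=0.9, h=1.0, q=0.1, r=0.5, steps=200):
    """Iterate the scalar Kalman covariance recursions to steady state.

    Model (illustrative assumption): x_{k+1} = a*x_k + w_k, Var(w_k) = q
                                     y_k     = h*x_k + v_k, Var(v_k) = r
    Prediction:  P_pred = a^2 * P_filt + q
    Gain:        K      = P_pred * h / (h^2 * P_pred + r)
    Filtering:   P_filt = (1 - K*h) * P_pred
    """
    p_filt = 1.0  # initial filtering-error covariance
    for _ in range(steps):
        p_pred = a * a * p_filt + q          # one-step prediction error
        k = p_pred * h / (h * h * p_pred + r)  # Kalman gain
        p_filt = (1.0 - k * h) * p_pred      # filtering error after update
    return p_pred, p_filt

p_pred, p_filt = kalman_covariances()

# Since 1 - K*h = r / (h^2 * P_pred + r) < 1, the filtering error is
# strictly smaller than the prediction error, so the two (snr/2)-scaled
# LMMSE terms bracket any quantity lying between them, as in the abstract.
snr = 1.0 / 0.5  # illustrative SNR choice (unit signal power over r)
lower = 0.5 * snr * p_filt  # half SNR times filtering LMMSE
upper = 0.5 * snr * p_pred  # half SNR times one-step prediction LMMSE
print(lower < upper)
```

This is only a generic Kalman-filter demonstration of why the two LMMSE quantities are ordered; the paper's actual per-branch covariance matrices come from the SST decoder's innovation, not from this scalar model.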
Journal description:
The IEEE Transactions on Information Theory is a journal that publishes theoretical and experimental papers concerned with the transmission, processing, and utilization of information. The boundaries of acceptable subject matter are intentionally not sharply delimited. Rather, it is hoped that as the focus of research activity changes, a flexible policy will permit this Transactions to follow suit. Current appropriate topics are best reflected by recent Tables of Contents; they are summarized in the titles of editorial areas that appear on the inside front cover.