{"title":"有限状态马尔可夫通信信道上ML序列检测的隐式信道估计","authors":"Z. Krusevac, R. Kennedy, P. Rapajic","doi":"10.1109/AUSCTW.2006.1625269","DOIUrl":null,"url":null,"abstract":"This paper shows the existence of the optimal training, in terms of achievable mutual information rate, for an output feedback implicit estimator for finite-state Markov communication channels. Implicit (blind) estimation is based on a measure of how modified is the input distribution when filtered by the channel transfer function and it is shown that there is no modification of an input distribution with maximum entropy rate. Input signal entropy rate reduction enables implicit (blind) channel process estimation, but decreases information transmission rate. The optimal input entropy rate (optimal implicit training rate) which achieves the maximum mutual information rate, is found","PeriodicalId":206040,"journal":{"name":"2006 Australian Communications Theory Workshop","volume":"75 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Implicit channel estimation for ML sequence detection over finite-state Markov communication channels\",\"authors\":\"Z. Krusevac, R. Kennedy, P. Rapajic\",\"doi\":\"10.1109/AUSCTW.2006.1625269\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper shows the existence of the optimal training, in terms of achievable mutual information rate, for an output feedback implicit estimator for finite-state Markov communication channels. Implicit (blind) estimation is based on a measure of how modified is the input distribution when filtered by the channel transfer function and it is shown that there is no modification of an input distribution with maximum entropy rate. Input signal entropy rate reduction enables implicit (blind) channel process estimation, but decreases information transmission rate. 
The optimal input entropy rate (optimal implicit training rate) which achieves the maximum mutual information rate, is found\",\"PeriodicalId\":206040,\"journal\":{\"name\":\"2006 Australian Communications Theory Workshop\",\"volume\":\"75 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2006-05-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2006 Australian Communications Theory Workshop\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AUSCTW.2006.1625269\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2006 Australian Communications Theory Workshop","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AUSCTW.2006.1625269","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Implicit channel estimation for ML sequence detection over finite-state Markov communication channels
This paper shows the existence of an optimal training rate, in terms of achievable mutual information rate, for an output-feedback implicit estimator over finite-state Markov communication channels. Implicit (blind) estimation is based on a measure of how much the input distribution is modified when filtered by the channel transfer function, and it is shown that an input distribution with maximum entropy rate undergoes no modification. Reducing the input signal entropy rate enables implicit (blind) estimation of the channel process, but decreases the information transmission rate. The optimal input entropy rate (the optimal implicit training rate), which achieves the maximum mutual information rate, is found.
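The key observation — that a maximum-entropy input is not modified by the channel, and therefore reveals nothing about the channel state — can be illustrated with a minimal sketch. Here the two Markov channel states are modeled as binary symmetric channels with different crossover probabilities (the values `e_good`, `e_bad` and the two-state BSC model are illustrative assumptions, not taken from the paper). The KL divergence between the per-state output distributions measures how distinguishable the states are from the output alone: it vanishes for a uniform (maximum-entropy) input and grows as the input is biased away from uniform.

```python
import math

def output_dist(p, e):
    # Output distribution P(y) when a Bernoulli(p) input passes through
    # a binary symmetric channel with crossover probability e.
    q1 = p * (1 - e) + (1 - p) * e
    return (1 - q1, q1)

def kl(d0, d1):
    # Kullback-Leibler divergence D(d0 || d1) between two binary distributions.
    return sum(a * math.log2(a / b) for a, b in zip(d0, d1) if a > 0)

# Two hypothetical channel states: a "good" and a "bad" BSC.
e_good, e_bad = 0.05, 0.30

for p in (0.5, 0.4, 0.3):
    d_good = output_dist(p, e_good)
    d_bad = output_dist(p, e_bad)
    print(f"p={p}: KL between state output distributions = {kl(d_good, d_bad):.4f}")
```

At `p = 0.5` (maximum input entropy) both states produce the same uniform output distribution, so blind state estimation is impossible; biasing the input (`p < 0.5`) makes the states distinguishable, at the cost of input entropy rate — the trade-off the paper optimizes.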