Training Second-Order Hidden Markov Models with Multiple Observation Sequences
D. Shiping, Chen Tao, Z. Xianyin, Wang Jian, Wei Yuming
2009 International Forum on Computer Science-Technology and Applications
Published: 2009-12-25
DOI: 10.1109/IFCSTA.2009.12
Citations: 4
Abstract
Second-order hidden Markov models (HMM2) have been widely used in pattern recognition, especially in speech recognition. Their main advantage is their ability to model noisy temporal signals of variable length. In this article, we introduce a new HMM2 with multiple observation sequences, assuming that all the observation sequences are statistically correlated. In this treatment, the multiple observation probability is expressed, without loss of generality, as a combination of individual observation probabilities. This combinatorial method provides greater freedom in making different dependence-independence assumptions. By generalizing Baum's auxiliary function to this framework and building an associated objective function using the Lagrange multiplier method, several new formulae for solving the model training problem are derived. We show that the model training equations can be easily derived under an independence assumption.