{"title":"结合自组织映射和LVQ的连续密度隐马尔可夫模型的训练","authors":"M. Kurimo, K. Torkkola","doi":"10.1109/NNSP.1992.253695","DOIUrl":null,"url":null,"abstract":"The authors propose a novel initialization method for continuous observation density hidden Markov models (CDHMMs) that is based on self-organizing maps (SOMs) and learning vector quantization (LVQ). The framework is to transcribe speech into phoneme sequences using CDHMMs as phoneme models. When numerous mixtures of, for example, Gaussian density functions are used to model the observation distributions of CDHMMs, good initial values are necessary in order for the Baum-Welch estimation to converge satisfactorily. The authors have experimented with constructing rapidly good initial values by SOMs, and with enhancing the discriminatory power of the phoneme models by adaptively training the state output distributions by using the LVQ algorithm. Experiments indicate that an improvement to the pure Baum-Welch and the segmentation K-means procedures can be obtained using the proposed method.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"Training continuous density hidden Markov models in association with self-organizing maps and LVQ\",\"authors\":\"M. Kurimo, K. Torkkola\",\"doi\":\"10.1109/NNSP.1992.253695\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The authors propose a novel initialization method for continuous observation density hidden Markov models (CDHMMs) that is based on self-organizing maps (SOMs) and learning vector quantization (LVQ). The framework is to transcribe speech into phoneme sequences using CDHMMs as phoneme models. When numerous mixtures of, for example, Gaussian density functions are used to model the observation distributions of CDHMMs, good initial values are necessary in order for the Baum-Welch estimation to converge satisfactorily. The authors have experimented with constructing rapidly good initial values by SOMs, and with enhancing the discriminatory power of the phoneme models by adaptively training the state output distributions by using the LVQ algorithm. 
Experiments indicate that an improvement to the pure Baum-Welch and the segmentation K-means procedures can be obtained using the proposed method.<<ETX>>\",\"PeriodicalId\":438250,\"journal\":{\"name\":\"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop\",\"volume\":\"23 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1992-08-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NNSP.1992.253695\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NNSP.1992.253695","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Training continuous density hidden Markov models in association with self-organizing maps and LVQ
The authors propose a novel initialization method for continuous observation density hidden Markov models (CDHMMs) that is based on self-organizing maps (SOMs) and learning vector quantization (LVQ). The framework transcribes speech into phoneme sequences using CDHMMs as phoneme models. When many mixtures of, for example, Gaussian density functions are used to model the observation distributions of CDHMMs, good initial values are necessary for the Baum-Welch estimation to converge satisfactorily. The authors have experimented with rapidly constructing good initial values by SOMs, and with enhancing the discriminative power of the phoneme models by adaptively training the state output distributions with the LVQ algorithm. Experiments indicate that the proposed method yields an improvement over the pure Baum-Welch and the segmentation K-means procedures.
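
The abstract outlines two ideas: using a SOM codebook as the initial set of Gaussian mixture means for a CDHMM state, and applying an LVQ-style corrective update so that the state output distributions become more discriminative between phonemes. The following is a minimal sketch of those two steps only, not the authors' implementation: the function names (som_init, lvq1_update), the 1-D map topology, the learning-rate and neighborhood schedules, and the toy data are assumptions made for illustration.

```python
# Minimal sketch (not the paper's implementation): SOM-based initialization of
# mixture means plus an LVQ1-style corrective update, using numpy only.
import numpy as np

rng = np.random.default_rng(0)

def som_init(frames, n_units, n_iter=2000, sigma0=2.0, lr0=0.5):
    """Train a 1-D self-organizing map on feature frames; the resulting codebook
    vectors can serve as initial Gaussian mixture means for one CDHMM state."""
    codebook = frames[rng.choice(len(frames), n_units, replace=False)].copy()
    for t in range(n_iter):
        x = frames[rng.integers(len(frames))]
        bmu = int(np.argmin(np.linalg.norm(codebook - x, axis=1)))  # best-matching unit
        lr = lr0 * (1.0 - t / n_iter)                                # decaying learning rate
        sigma = max(sigma0 * (1.0 - t / n_iter), 0.5)                # shrinking neighborhood
        grid_dist = np.abs(np.arange(n_units) - bmu)                 # distance on the 1-D map
        h = np.exp(-grid_dist**2 / (2.0 * sigma**2))                 # neighborhood function
        codebook += lr * h[:, None] * (x - codebook)                 # pull neighborhood toward x
    return codebook

def lvq1_update(codebooks, labels, x, true_label, lr=0.05):
    """LVQ1-style corrective tuning: pull the nearest codebook vector toward the
    frame if its phoneme label matches, push it away otherwise."""
    best = None  # (distance, codebook index, row index)
    for ci, cb in enumerate(codebooks):
        d = np.linalg.norm(cb - x, axis=1)
        j = int(np.argmin(d))
        if best is None or d[j] < best[0]:
            best = (d[j], ci, j)
    _, ci, j = best
    sign = 1.0 if labels[ci] == true_label else -1.0
    codebooks[ci][j] += sign * lr * (x - codebooks[ci][j])

# Toy usage: 12-dimensional "cepstral" frames for two hypothetical phoneme classes.
frames_a = rng.normal(0.0, 1.0, size=(400, 12))
frames_b = rng.normal(2.0, 1.0, size=(400, 12))
means_a = som_init(frames_a, n_units=8)   # initial mixture means for phoneme state A
means_b = som_init(frames_b, n_units=8)   # initial mixture means for phoneme state B
lvq1_update([means_a, means_b], labels=[0, 1], x=frames_a[0], true_label=0)
```

In the paper's framework these SOM-initialized mixtures would be refined by Baum-Welch (or segmentation K-means) training, with the LVQ-type adjustment applied to sharpen discrimination between phoneme models; the sketch above stops at initialization and a single corrective step.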