{"title":"基于Hmm的语音识别基形和子词模型的组合优化","authors":"T. Holter, T. Svendsen","doi":"10.1109/ISSPA.1996.615746","DOIUrl":null,"url":null,"abstract":"In this paper a framework for combined optimisation of baseforms and subword models for a speech recogniser is proposed. Given a set of subword Hidden Markov Models (HMMs) and a set of utterances of a specific word, the modified tree-trellis algorithm and the BaumWelch re-estimation procedure is used iteratively to achieve a combined optimisation of baseforms and subword models. The DARPA Resource Management (RM) database was used to evaluate the combined optimisation scheme. The proposed method resulted in a monotonic increase in the likelihood score of both test- and training data. When compared to the initial lexicon derived from the DARPA RM-distribution and a set of initial HMMs, a 13% reduction in word error rate is achieved at best.","PeriodicalId":359344,"journal":{"name":"Fourth International Symposium on Signal Processing and Its Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Combined Optimisation of Baseforms and Subword Models for an Hmm Based Speech Recogniser\",\"authors\":\"T. Holter, T. Svendsen\",\"doi\":\"10.1109/ISSPA.1996.615746\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper a framework for combined optimisation of baseforms and subword models for a speech recogniser is proposed. Given a set of subword Hidden Markov Models (HMMs) and a set of utterances of a specific word, the modified tree-trellis algorithm and the BaumWelch re-estimation procedure is used iteratively to achieve a combined optimisation of baseforms and subword models. The DARPA Resource Management (RM) database was used to evaluate the combined optimisation scheme. The proposed method resulted in a monotonic increase in the likelihood score of both test- and training data. When compared to the initial lexicon derived from the DARPA RM-distribution and a set of initial HMMs, a 13% reduction in word error rate is achieved at best.\",\"PeriodicalId\":359344,\"journal\":{\"name\":\"Fourth International Symposium on Signal Processing and Its Applications\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Fourth International Symposium on Signal Processing and Its Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISSPA.1996.615746\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Fourth International Symposium on Signal Processing and Its Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISSPA.1996.615746","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Combined Optimisation of Baseforms and Subword Models for an HMM-Based Speech Recogniser
In this paper, a framework for the combined optimisation of baseforms and subword models for a speech recogniser is proposed. Given a set of subword hidden Markov models (HMMs) and a set of utterances of a specific word, the modified tree-trellis algorithm and the Baum-Welch re-estimation procedure are applied iteratively to achieve a combined optimisation of the baseforms and the subword models. The DARPA Resource Management (RM) database was used to evaluate the combined optimisation scheme. The proposed method resulted in a monotonic increase in the likelihood score on both the training and test data. Compared with the initial lexicon derived from the DARPA RM distribution and a set of initial HMMs, a reduction in word error rate of up to 13% is achieved.
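The alternation the abstract describes can be summarised as a simple loop: derive the most likely baseform for each word under the current subword HMMs, then re-estimate the HMM parameters against the updated lexicon, and repeat while the likelihood keeps improving. The sketch below is not from the paper; it is a minimal Python outline under that reading, with hypothetical callables (`derive_baseform` standing in for the modified tree-trellis search, `reestimate` for Baum-Welch re-estimation, `log_likelihood` for scoring) supplied by the caller.

```python
from typing import Any, Callable, Dict, List, Tuple

# Hypothetical type aliases for illustration only.
Observation = Any                 # one acoustic observation sequence (e.g. feature vectors)
Baseform = List[str]              # a word's pronunciation as a sequence of subword units
Lexicon = Dict[str, Baseform]     # word -> baseform
HMMSet = Any                      # pooled subword HMM parameters


def combined_optimisation(
    utterances: Dict[str, List[Observation]],
    hmms: HMMSet,
    derive_baseform: Callable[[HMMSet, List[Observation]], Baseform],
    reestimate: Callable[[HMMSet, Lexicon, Dict[str, List[Observation]]], HMMSet],
    log_likelihood: Callable[[HMMSet, Lexicon, Dict[str, List[Observation]]], float],
    max_iters: int = 10,
    tol: float = 1e-3,
) -> Tuple[Lexicon, HMMSet]:
    """Alternate baseform derivation and subword-model re-estimation
    until the training-data likelihood stops improving."""
    lexicon: Lexicon = {}
    prev_ll = float("-inf")
    for _ in range(max_iters):
        # Step 1: for each word, choose the subword-unit sequence that best
        # explains its utterances under the current HMM parameters.
        lexicon = {word: derive_baseform(hmms, obs) for word, obs in utterances.items()}
        # Step 2: re-estimate the subword HMM parameters with the words
        # transcribed according to the updated lexicon.
        hmms = reestimate(hmms, lexicon, utterances)
        ll = log_likelihood(hmms, lexicon, utterances)
        if ll - prev_ll < tol:    # stop once the likelihood gain is negligible
            break
        prev_ll = ll
    return lexicon, hmms
```

If baseforms are chosen by maximum likelihood and Baum-Welch is used for re-estimation, neither step can decrease the training-data likelihood, which is consistent with the monotonic increase reported above.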