Discriminant clustering using an HMM isolated-word recognizer
R. Lippmann, E. A. Martin
ICASSP-88, International Conference on Acoustics, Speech, and Signal Processing, 11 April 1988
DOI: 10.1109/ICASSP.1988.196506
Citations: 4
Abstract
One limitation of hidden Markov model (HMM) recognizers is that subword models are not learned but must be prespecified before training. This can lead to excessive computation during recognition and/or poor discrimination between similar-sounding words. A training procedure called discriminant clustering is presented that creates subword models automatically: node sequences from whole-word models are merged using statistical clustering techniques. This procedure reduced the computation required during recognition for a 35-word vocabulary by roughly one-third while maintaining a low error rate. It was also found that five iterations of the forward-backward algorithm are sufficient, and that adding nodes to HMM word models improves performance until the minimum word transition time becomes excessive.
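The core idea of merging whole-word HMM nodes with statistical clustering can be illustrated with a minimal sketch. This is not the paper's exact procedure: it assumes each node is summarized by its observation-mean vector and merges clusters greedily by Euclidean distance until a threshold is reached; the function names and threshold are illustrative assumptions.

```python
# Hypothetical sketch of node merging via greedy agglomerative
# clustering. Each HMM node is represented only by its observation
# mean; distance metric and threshold are illustrative assumptions.
from itertools import combinations

def euclidean(a, b):
    # Euclidean distance between two mean vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def merge_nodes(node_means, threshold):
    """Repeatedly merge the closest pair of node clusters whose
    distance is below `threshold`; return the surviving cluster means."""
    clusters = [(list(m), 1) for m in node_means]  # (mean, member count)
    while len(clusters) > 1:
        # Find the closest pair of clusters.
        (i, j), d = min(
            (((i, j), euclidean(clusters[i][0], clusters[j][0]))
             for i, j in combinations(range(len(clusters)), 2)),
            key=lambda p: p[1],
        )
        if d >= threshold:
            break  # no pair is similar enough to merge
        # Replace the pair with their count-weighted mean.
        (ma, na), (mb, nb) = clusters[i], clusters[j]
        merged = ([(x * na + y * nb) / (na + nb) for x, y in zip(ma, mb)],
                  na + nb)
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return [m for m, _ in clusters]

# Two near-identical nodes collapse into one shared (subword-like)
# cluster; the acoustically distant node survives on its own.
means = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]]
print(len(merge_nodes(means, threshold=1.0)))  # → 2
```

Merging nodes this way shrinks the total number of distinct states the recognizer must evaluate, which is the mechanism behind the roughly one-third computation reduction reported for the 35-word vocabulary.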