{"title":"连续语音识别中前馈网络的概率估计","authors":"S. Renals, N. Morgan, H. Bourlard","doi":"10.1109/NNSP.1991.239511","DOIUrl":null,"url":null,"abstract":"The authors review the use of feedforward neural networks as estimators of probability densities in hidden Markov modelling. In this paper, they are mostly concerned with radial basis functions (RBF) networks. They not the isomorphism of RBF networks to tied mixture density estimators; additionally they note that RBF networks are trained to estimate posteriors rather than the likelihoods estimated by tied mixture density estimators. They show how the neural network training should be modified to resolve this mismatch. They also discuss problems with discriminative training, particularly the problem of dealing with unlabelled training data and the mismatch between model and data priors.<<ETX>>","PeriodicalId":354832,"journal":{"name":"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop","volume":"49 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1991-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"18","resultStr":"{\"title\":\"Probability estimation by feed-forward networks in continuous speech recognition\",\"authors\":\"S. Renals, N. Morgan, H. Bourlard\",\"doi\":\"10.1109/NNSP.1991.239511\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The authors review the use of feedforward neural networks as estimators of probability densities in hidden Markov modelling. In this paper, they are mostly concerned with radial basis functions (RBF) networks. They not the isomorphism of RBF networks to tied mixture density estimators; additionally they note that RBF networks are trained to estimate posteriors rather than the likelihoods estimated by tied mixture density estimators. They show how the neural network training should be modified to resolve this mismatch. They also discuss problems with discriminative training, particularly the problem of dealing with unlabelled training data and the mismatch between model and data priors.<<ETX>>\",\"PeriodicalId\":354832,\"journal\":{\"name\":\"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop\",\"volume\":\"49 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1991-09-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"18\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NNSP.1991.239511\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NNSP.1991.239511","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Probability estimation by feed-forward networks in continuous speech recognition
The authors review the use of feedforward neural networks as estimators of probability densities in hidden Markov modelling. The paper is chiefly concerned with radial basis function (RBF) networks. They note the isomorphism of RBF networks to tied mixture density estimators; they also note, however, that RBF networks are trained to estimate posterior probabilities rather than the likelihoods estimated by tied mixture density estimators. They show how the neural network training should be modified to resolve this mismatch. They also discuss problems with discriminative training, particularly the problem of dealing with unlabelled training data and the mismatch between model and data priors.
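The usual way this posterior/likelihood mismatch is resolved in the hybrid HMM/ANN literature is to convert network posteriors into scaled likelihoods by dividing by the class priors; the following equation is a sketch of that standard recipe, not necessarily the exact modification proposed in this paper, and the symbols (state q_k, acoustic vector x_n) are notation assumed here for illustration:

\[
\frac{p(x_n \mid q_k)}{p(x_n)} \;=\; \frac{P(q_k \mid x_n)}{P(q_k)}
\]

Here P(q_k | x_n) is the posterior estimated at the network output for HMM state q_k given acoustic vector x_n, and P(q_k) is the state prior (for example, its relative frequency in the training alignment). The term p(x_n) does not depend on the state, so the left-hand side can be used in place of the likelihood during Viterbi decoding without changing the recognized word sequence.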