{"title":"Joint optimization of classifier and feature space in speech recognition","authors":"G. Kuhn","doi":"10.1109/IJCNN.1992.227235","DOIUrl":null,"url":null,"abstract":"The author presents a feedforward network which classifies the spoken letter names 'b', 'd', 'e', and 'v' with 88.5% accuracy. For many poorly discriminated training examples, the outputs of this network are unstable or sensitive to perturbations of the values of the input features. This residual sensitivity is exploited by inserting into the network a new first hidden layer with localized receptive fields. The new layer gives the network a few additional degrees of freedom with which to optimize the input feature space for the desired classification. The benefit of further, joint optimization of the classifier and the input features was suggested in an experiment in which recognition accuracy was raised to 89.6%.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"81 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.1992.227235","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
The author presents a feedforward network which classifies the spoken letter names 'b', 'd', 'e', and 'v' with 88.5% accuracy. For many poorly discriminated training examples, the outputs of this network are unstable or sensitive to perturbations of the values of the input features. This residual sensitivity is exploited by inserting into the network a new first hidden layer with localized receptive fields. The new layer gives the network a few additional degrees of freedom with which to optimize the input feature space for the desired classification. The benefit of further, joint optimization of the classifier and the input features was suggested in an experiment in which recognition accuracy was raised to 89.6%.
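To make the architectural idea concrete, below is a minimal sketch (not the author's original network or data): a small feedforward classifier for four letter classes, preceded by a first hidden layer whose units have localized receptive fields, i.e. each unit connects only to a short window of adjacent input features. Training the whole stack end-to-end jointly adapts this feature-transforming layer and the classifier. The input size, window width, strides, and hidden sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalReceptiveFieldLayer(nn.Module):
    """First hidden layer restricted to local windows of the input features."""

    def __init__(self, n_inputs: int, window: int, stride: int):
        super().__init__()
        # Each hidden unit is assigned one contiguous window of input features.
        self.windows = [(s, min(s + window, n_inputs))
                        for s in range(0, n_inputs - window + 1, stride)]
        self.linear = nn.Linear(n_inputs, len(self.windows))
        # The mask zeroes every weight outside each unit's local window.
        mask = torch.zeros(len(self.windows), n_inputs)
        for i, (lo, hi) in enumerate(self.windows):
            mask[i, lo:hi] = 1.0
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.tanh(F.linear(x, self.linear.weight * self.mask,
                                   self.linear.bias))


class JointNet(nn.Module):
    """Feature-optimizing local layer followed by a feedforward classifier."""

    def __init__(self, n_inputs: int = 64, n_classes: int = 4):
        super().__init__()
        self.features = LocalReceptiveFieldLayer(n_inputs, window=8, stride=4)
        n_local = len(self.features.windows)
        self.classifier = nn.Sequential(
            nn.Linear(n_local, 32), nn.Tanh(),
            nn.Linear(32, n_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


# Backpropagating through both parts optimizes the local feature layer and
# the classifier jointly; the data here are random placeholders.
model = JointNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(16, 64), torch.randint(0, 4, (16,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
opt.step()
```

Because the masked first layer has only a few free parameters per unit, it adds a limited number of degrees of freedom, consistent with the abstract's description of a lightly parameterized feature-optimizing layer inserted ahead of an existing classifier.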