Class-based speech recognition using a maximum dissimilarity criterion and a tolerance classification margin
Arsenii Gorin, D. Jouvet
2012 IEEE Spoken Language Technology Workshop (SLT), December 2012
DOI: 10.1109/SLT.2012.6424203
Citations: 4
Abstract
One of the difficult problems of Automatic Speech Recognition (ASR) is dealing with acoustic signal variability. State-of-the-art research has demonstrated that splitting the data into classes and using a model specific to each class provides better results. However, when the dataset is not large enough and the number of classes increases, there is less data for adapting each class model and performance degrades. This work extends and combines previous research on unsupervised splitting of datasets into maximally separated classes with the introduction of a tolerance classification margin for better training of the class model parameters. Experiments, carried out on the French ESTER2 radio broadcast data, show an improvement in recognition results compared to those obtained previously. Finally, we demonstrate that combining the decoding results from different class models leads to even more significant improvements.
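The tolerance classification margin idea can be illustrated with a minimal sketch: assign each utterance to its closest class, but also to any class whose dissimilarity falls within a margin of the best one, so that utterances near class boundaries contribute to the training of several class models. The centroid-distance criterion, function name, and margin form below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def assign_with_margin(features, centroids, margin=0.1):
    """For each feature vector, return the list of class indices whose
    distance is within a relative `margin` of the minimum distance.
    (Hypothetical helper; the paper's actual criterion may differ.)"""
    assignments = []
    for x in features:
        # Euclidean distance from this utterance's features to each class centroid
        dists = np.linalg.norm(centroids - x, axis=1)
        best = dists.min()
        # Keep every class whose distance is within the tolerance margin of the best
        assignments.append([i for i, d in enumerate(dists)
                            if d <= best * (1.0 + margin)])
    return assignments

# Two toy class centroids and three utterances; the middle one lies on the
# boundary and is assigned to both classes, enlarging both training sets.
centroids = np.array([[0.0, 0.0], [4.0, 0.0]])
feats = np.array([[0.5, 0.0], [2.0, 0.0], [3.8, 0.1]])
print(assign_with_margin(feats, centroids, margin=0.2))  # → [[0], [0, 1], [1]]
```

A larger margin trades sharper class separation for more training data per class model, which is the balance the paper's criterion aims to tune.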