Data-Filtering Methods for Self-Training of Automatic Speech Recognition Systems
Alexandru-Lucian Georgescu, Cristian Manolache, Dan Oneaţă, H. Cucu, C. Burileanu
2021 IEEE Spoken Language Technology Workshop (SLT), published 19 January 2021. DOI: 10.1109/SLT48900.2021.9383577
Self-training is a simple and efficient way of leveraging unlabeled speech data: (i) start with a seed system trained on transcribed speech; (ii) pass the unlabeled data through this seed system to automatically generate transcriptions; (iii) enlarge the initial dataset with the self-labeled data and retrain the speech recognition system. However, in order not to pollute the augmented dataset with incorrect transcriptions, an important intermediate step is to select those parts of the self-labeled data that are accurate. Several approaches have been proposed in the community, but most works address only a single method. In contrast, in this paper we examine three distinct classes of data filtering for self-training, leveraging: (i) confidence scores, (ii) multiple ASR hypotheses, and (iii) approximate transcriptions. We evaluate these approaches from two perspectives: the quantity vs. quality of the selected data, and the improvement of the seed ASR obtained by including this data. The proposed methodology achieves state-of-the-art results on Romanian speech, obtaining a 25% relative improvement over prior work. Among the three methods, approximate transcriptions bring the highest performance gain, even though they yield the least quantity of data.
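To make the self-training loop concrete, the following is a minimal Python sketch of one round of self-training combined with the first filtering strategy, confidence scores. It illustrates the general technique only and is not the paper's implementation: the `transcribe` and `retrain` callables, the 0.9 confidence threshold, and the (audio, text) data representation are assumptions made for the example.

```python
from typing import Callable, Iterable, List, Tuple

def self_train_round(
    transcribe: Callable[[str], Tuple[str, float]],       # audio path -> (hypothesis, confidence); assumed interface
    retrain: Callable[[List[Tuple[str, str]]], object],   # (audio, transcript) pairs -> retrained model; assumed interface
    labeled_data: List[Tuple[str, str]],                  # (i) data the seed system was trained on
    unlabeled_audio: Iterable[str],
    conf_threshold: float = 0.9,                          # illustrative value, not from the paper
):
    """One round of self-training with confidence-score data filtering."""
    selected: List[Tuple[str, str]] = []
    for audio in unlabeled_audio:
        # (ii) decode the unlabeled audio with the seed system
        hypothesis, confidence = transcribe(audio)
        # keep only utterances the seed system is confident about, so that
        # incorrect transcriptions do not pollute the augmented dataset
        if confidence >= conf_threshold:
            selected.append((audio, hypothesis))
    # (iii) enlarge the initial dataset with the filtered self-labeled data
    # and retrain the recognizer on the augmented set
    return retrain(labeled_data + selected)
```

The same skeleton accommodates the other two filtering classes by swapping the selection criterion: agreement among multiple ASR hypotheses, or alignment against approximate transcriptions, instead of a single confidence threshold.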