{"title":"混合单元建模在维吾尔语语音识别中的应用研究","authors":"Pengfei Hu, Shen Huang, Zhiqiang Lv","doi":"10.21437/SLTU.2018-45","DOIUrl":null,"url":null,"abstract":"Uyghur is a highly agglutinative language with a large number of words derived from the same root. For such languages the use of subwords in speech recognition becomes a natural choice, which can solve the OOV issues. However, short units in subword modeling will weaken the constraint of linguistic context. Besides, vowel weakening and reduction occur frequently in Uyghur language, which may lead to high deletion errors for short unit sequence recognition. In this paper, we investigate using mixed units in Uyghur speech recognition. Subwords and whole-words are mixed together to build a hybrid lexicon and language models for recognition. We also introduce an interpolated LM to further improve the performance. Experiment results show that the mixed-unit based modeling do outperform word or subword based modeling. About 10% relative reduction in Word Error Rate and 8% reduction in Character Error Rate have been achieved for test datasets compared with baseline system.","PeriodicalId":190269,"journal":{"name":"Workshop on Spoken Language Technologies for Under-resourced Languages","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Investigating the Use of Mixed-Units Based Modeling for Improving Uyghur Speech Recognition\",\"authors\":\"Pengfei Hu, Shen Huang, Zhiqiang Lv\",\"doi\":\"10.21437/SLTU.2018-45\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Uyghur is a highly agglutinative language with a large number of words derived from the same root. For such languages the use of subwords in speech recognition becomes a natural choice, which can solve the OOV issues. However, short units in subword modeling will weaken the constraint of linguistic context. Besides, vowel weakening and reduction occur frequently in Uyghur language, which may lead to high deletion errors for short unit sequence recognition. In this paper, we investigate using mixed units in Uyghur speech recognition. Subwords and whole-words are mixed together to build a hybrid lexicon and language models for recognition. We also introduce an interpolated LM to further improve the performance. Experiment results show that the mixed-unit based modeling do outperform word or subword based modeling. 
About 10% relative reduction in Word Error Rate and 8% reduction in Character Error Rate have been achieved for test datasets compared with baseline system.\",\"PeriodicalId\":190269,\"journal\":{\"name\":\"Workshop on Spoken Language Technologies for Under-resourced Languages\",\"volume\":\"10 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-08-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Workshop on Spoken Language Technologies for Under-resourced Languages\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.21437/SLTU.2018-45\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Workshop on Spoken Language Technologies for Under-resourced Languages","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.21437/SLTU.2018-45","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Uyghur is a highly agglutinative language in which a large number of words are derived from the same root. For such languages, the use of subword units in speech recognition is a natural choice that can address the out-of-vocabulary (OOV) problem. However, the short units used in subword modeling weaken the constraints of linguistic context. In addition, vowel weakening and reduction occur frequently in Uyghur, which can lead to high deletion errors when recognizing sequences of short units. In this paper, we investigate the use of mixed units in Uyghur speech recognition: subwords and whole words are combined to build a hybrid lexicon and language models for recognition. We also introduce an interpolated language model to further improve performance. Experimental results show that mixed-unit modeling outperforms both word-based and subword-based modeling, achieving a relative reduction of about 10% in word error rate and 8% in character error rate on the test datasets compared with the baseline system.
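As a rough illustration of the two ideas mentioned in the abstract (a hybrid lexicon mixing whole words with subwords, and linear interpolation of language-model scores), the following minimal Python sketch shows one plausible way such components could be put together. It is not the authors' implementation: the function names, the min_count threshold, the naive fixed-length chunker, and the interpolation weight lam are all illustrative assumptions.

# Illustrative sketch only -- not the paper's actual system.
# Idea 1: keep frequent words as whole-word units and segment rare words into subwords.
# Idea 2: combine a word-level and a subword-level LM probability by linear interpolation:
#         P(h) = lam * P_word(h) + (1 - lam) * P_subword(h)

from collections import Counter

def build_mixed_lexicon(word_counts, segment, min_count=5):
    """Return a set of mixed units: frequent whole words plus subwords of rare words."""
    units = set()
    for word, count in word_counts.items():
        if count >= min_count:
            units.add(word)              # frequent word kept as a whole-word unit
        else:
            units.update(segment(word))  # rare word backed off to subword units
    return units

def interpolate_lm(p_word, p_subword, lam=0.7):
    """Linearly interpolate two language-model probabilities for the same hypothesis."""
    return lam * p_word + (1.0 - lam) * p_subword

# Toy usage: a fixed-length chunker stands in for a real morphological analyzer
# or data-driven subword segmenter.
if __name__ == "__main__":
    counts = Counter({"kitab": 12, "kitablirimizdin": 1})
    chunker = lambda w: [w[i:i + 4] for i in range(0, len(w), 4)]
    print(sorted(build_mixed_lexicon(counts, chunker)))
    print(interpolate_lm(0.012, 0.020, lam=0.7))

In practice the segmenter would be a morphological analyzer or a statistical subword model, and the interpolation weight would be tuned on a development set; the sketch only fixes the overall shape of the approach described above.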