RNN-Transducer with Language Bias for End-to-End Mandarin-English Code-Switching Speech Recognition
Shuai Zhang, Jiangyan Yi, Zhengkun Tian, J. Tao, Ye Bai
2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP). DOI: 10.1109/ISCSLP49672.2021.9362075
Recently, language identity information has been used to improve the performance of end-to-end code-switching (CS) speech recognition. However, previous work uses an additional language identification (LID) model as an auxiliary module, which increases the computation cost. In this work, we propose an improved recurrent neural network transducer (RNN-T) model with language bias to alleviate this problem. We use language identities to bias the model to predict the CS points. This encourages the model to learn language identity information directly from the transcriptions, so no additional LID model is needed. We evaluate the approach on the Mandarin-English CS corpus SEAME. Compared to our RNN-T baseline, the RNN-T with language bias achieves 16.2% and 12.9% relative mixed error rate reduction on the two test sets, respectively.
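To make the idea concrete, below is a minimal sketch of how training transcriptions could be augmented with language tags so that an RNN-T learns language identity jointly with the text, rather than relying on a separate LID module. The tag symbols (<man>, <eng>) and the per-token language heuristic are illustrative assumptions, not the exact scheme used in the paper.

```python
# Hypothetical sketch: tag code-switching transcriptions with language symbols
# so the augmented sequence can serve as the RNN-T training target.

def is_mandarin(token: str) -> bool:
    """Heuristic assumption: a token is Mandarin if it contains CJK characters."""
    return any('\u4e00' <= ch <= '\u9fff' for ch in token)

def add_language_tags(tokens):
    """Insert a language tag before each run of same-language tokens.

    The tagged sequence biases the prediction network to emit a language
    symbol at every switch point, so language identity is learned directly
    from the transcriptions.
    """
    tagged, prev_lang = [], None
    for tok in tokens:
        lang = "<man>" if is_mandarin(tok) else "<eng>"
        if lang != prev_lang:
            tagged.append(lang)
            prev_lang = lang
        tagged.append(tok)
    return tagged

if __name__ == "__main__":
    print(add_language_tags(["我", "想", "eat", "pizza", "今天"]))
    # ['<man>', '我', '想', '<eng>', 'eat', 'pizza', '<man>', '今天']
```

In this sketch the language tags simply become extra output symbols of the transducer; at decoding time they can be stripped from the hypothesis, so no auxiliary LID model is run.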