{"title":"基于N-gram神经网络改进基于注意的端到端ASR","authors":"Junyi Ao, Tom Ko","doi":"10.1109/ISCSLP49672.2021.9362055","DOIUrl":null,"url":null,"abstract":"In attention-based end-to-end ASR, the intrinsic LM is modeled by an RNN and it forms the major part of the decoder. Comparing with external LMs, the intrinsic LM is considered as modest as it is only trained with the transcription associated with the speech data. Although it is a common practise to interpolate the scores of the end-to-end model and the external LM, the need of an external model hurts the novelty of end-to-end. Therefore, researchers are investigating different ways of improving the intrinsic LM of the end-to-end model. By observing the fact that N-gram LMs and RNN LMs can complement each other, we would like to investigate the effect of implementing an N-gram neural network inside the end-to-end model. In this paper, we examine two implementations of N-gram neural network in the context of attention-based end-to-end ASR. We find that both implementations improve the baseline and CBOW (Continuous Bag-of-Words) performs slightly better. We further propose a way to minimize the size of the N-gram component by utilizing the coda information of the modeling units. Experiments on LibriSpeech dataset show that our proposed method achieves obvious improvement with only a slight increase in model parameters.","PeriodicalId":279828,"journal":{"name":"2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Improving Attention-based End-to-end ASR by Incorporating an N-gram Neural Network\",\"authors\":\"Junyi Ao, Tom Ko\",\"doi\":\"10.1109/ISCSLP49672.2021.9362055\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In attention-based end-to-end ASR, the intrinsic LM is modeled by an RNN and it forms the major part of the decoder. Comparing with external LMs, the intrinsic LM is considered as modest as it is only trained with the transcription associated with the speech data. Although it is a common practise to interpolate the scores of the end-to-end model and the external LM, the need of an external model hurts the novelty of end-to-end. Therefore, researchers are investigating different ways of improving the intrinsic LM of the end-to-end model. By observing the fact that N-gram LMs and RNN LMs can complement each other, we would like to investigate the effect of implementing an N-gram neural network inside the end-to-end model. In this paper, we examine two implementations of N-gram neural network in the context of attention-based end-to-end ASR. We find that both implementations improve the baseline and CBOW (Continuous Bag-of-Words) performs slightly better. We further propose a way to minimize the size of the N-gram component by utilizing the coda information of the modeling units. 
Experiments on LibriSpeech dataset show that our proposed method achieves obvious improvement with only a slight increase in model parameters.\",\"PeriodicalId\":279828,\"journal\":{\"name\":\"2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP)\",\"volume\":\"97 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-01-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISCSLP49672.2021.9362055\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCSLP49672.2021.9362055","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Improving Attention-based End-to-end ASR by Incorporating an N-gram Neural Network
In attention-based end-to-end ASR, the intrinsic LM is modeled by an RNN and forms the major part of the decoder. Compared with external LMs, the intrinsic LM is considered modest, as it is trained only on the transcriptions associated with the speech data. Although it is common practice to interpolate the scores of the end-to-end model and an external LM, the need for an external model undermines the appeal of the end-to-end approach. Therefore, researchers are investigating different ways of improving the intrinsic LM of the end-to-end model. Observing that N-gram LMs and RNN LMs can complement each other, we investigate the effect of implementing an N-gram neural network inside the end-to-end model. In this paper, we examine two implementations of the N-gram neural network in the context of attention-based end-to-end ASR. We find that both implementations improve on the baseline, with CBOW (Continuous Bag-of-Words) performing slightly better. We further propose a way to minimize the size of the N-gram component by utilizing the coda information of the modeling units. Experiments on the LibriSpeech dataset show that our proposed method achieves a clear improvement with only a slight increase in model parameters.
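The abstract describes placing a CBOW-style N-gram neural component inside the attention decoder so that a short, order-free token history supplements the RNN's intrinsic LM. As a rough illustration of how such a component could sit alongside the decoder state, the following PyTorch sketch averages the embeddings of the previous N-1 output tokens and feeds the result into the output projection together with the RNN state and attention context. The class, layer sizes, and the way the two branches are combined are assumptions made for illustration only, not the architecture reported in the paper.

```python
# Hypothetical sketch: a CBOW-style N-gram branch alongside an RNN decoder.
# Not the authors' implementation; all names and dimensions are illustrative.
import torch
import torch.nn as nn


class CBOWNgramDecoder(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, context_dim, ngram=3):
        super().__init__()
        self.ngram = ngram  # use the previous (ngram - 1) tokens as context
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # RNN branch: standard attention-decoder recurrence.
        self.rnn = nn.LSTMCell(embed_dim + context_dim, hidden_dim)
        # CBOW branch: projects the averaged embeddings of the recent tokens.
        self.ngram_proj = nn.Linear(embed_dim, hidden_dim)
        self.output = nn.Linear(hidden_dim * 2 + context_dim, vocab_size)

    def step(self, prev_tokens, attn_context, state):
        """One decoding step.

        prev_tokens: LongTensor (batch, ngram - 1), most recent token last.
        attn_context: FloatTensor (batch, context_dim) from the attention module.
        state: (h, c) tuple for the LSTM cell.
        """
        emb = self.embed(prev_tokens)  # (batch, ngram - 1, embed_dim)
        # RNN branch consumes only the most recent token plus the attention context.
        h, c = self.rnn(torch.cat([emb[:, -1], attn_context], dim=-1), state)
        # CBOW branch: order-free average over the short N-gram history.
        ngram_feat = torch.tanh(self.ngram_proj(emb.mean(dim=1)))
        logits = self.output(torch.cat([h, ngram_feat, attn_context], dim=-1))
        return logits, (h, c)
```

The design choice illustrated here is that the N-gram branch adds only an embedding average and one small projection on top of the usual decoder, which is consistent with the abstract's claim of only a slight increase in model parameters; how the paper actually minimizes the component via coda information of the modeling units is not shown here.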