Jointly learning to align and transcribe using attention-based alignment and uncertainty-to-weigh losses

Shreekantha Nadig, S. Chakraborty, Anuj K. Shah, Chaitanaya Sharma, V. Ramasubramanian, Sachit Rao

2020 International Conference on Signal Processing and Communications (SPCOM), July 2020. DOI: 10.1109/SPCOM50965.2020.9179519
End-to-end Automatic Speech Recognition (ASR) models with attention, especially joint Connectionist Temporal Classification (CTC) and attention encoder-decoder models, have shown promising results. In this joint CTC and attention framework, misalignment of the attention weights with the ground-truth alignment is not penalised, as the focus is on optimising only the CTC and attention cost functions. In this paper, a loss function that additionally minimizes alignment errors is introduced. This function is expected to enable the ASR system to attend to the right part of the input sequence and, in turn, to minimize both alignment and transcription errors. We also implement a dynamic weighting of the losses corresponding to the CTC, attention, and alignment tasks. We demonstrate that in many cases the proposed framework results in better performance and faster convergence. We report results on two datasets, TIMIT and LibriSpeech 100 hours, for the phone recognition task, taking the reference alignments from a previously trained monophone Gaussian Mixture Model-Hidden Markov Model (GMM-HMM).
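The two ingredients named in the abstract can be made concrete with a short sketch. Below is a minimal PyTorch illustration, assuming (a) the alignment loss is a cross-entropy that pushes each attention row onto the encoder frame chosen by the GMM-HMM forced alignment, and (b) the dynamic weighting follows the homoscedastic-uncertainty scheme of Kendall et al. (2018), which the "uncertainty-to-weigh losses" in the title points to. All names here are hypothetical and the exact loss forms are assumptions; this is not the authors' released code.

```python
# Minimal sketch (PyTorch) of (1) an attention-alignment loss against
# forced-alignment targets and (2) uncertainty-based dynamic weighting of
# the CTC, attention, and alignment losses. Loss forms are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


def alignment_loss(attn: torch.Tensor, align_targets: torch.Tensor) -> torch.Tensor:
    """Cross-entropy between attention weights and forced-alignment targets.

    attn:          (batch, out_len, in_len) attention weights, rows sum to 1.
    align_targets: (batch, out_len) index of the encoder frame each output
                   token should attend to, taken from a GMM-HMM alignment.
    """
    # Treat each attention row as a distribution over input frames and ask
    # it to place its mass on the forced-alignment frame (assumed form).
    log_attn = torch.log(attn.clamp_min(1e-8))
    # nll_loss expects (batch, classes, ...), so move the frame axis to dim 1.
    return F.nll_loss(log_attn.transpose(1, 2), align_targets)


class UncertaintyWeighting(nn.Module):
    """Weigh task losses as sum_i exp(-s_i) * L_i + s_i, where each
    s_i = log(sigma_i^2) is a learnable per-task log-variance."""

    def __init__(self, num_tasks: int = 3):
        super().__init__()
        # One log-variance per task (CTC, attention, alignment), init 0.
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        total = 0.0
        for s, loss in zip(self.log_vars, losses):
            # exp(-s) down-weights high-uncertainty tasks; the +s term
            # stops the model from driving every weight towards zero.
            total = total + torch.exp(-s) * loss + s
        return total


# Usage: combine the three losses named in the abstract.
weigher = UncertaintyWeighting(num_tasks=3)
B, T_out, T_in = 4, 10, 50
attn = torch.softmax(torch.randn(B, T_out, T_in), dim=-1)
align_targets = torch.randint(0, T_in, (B, T_out))
ctc_loss = torch.tensor(2.3)       # placeholder task losses for illustration
attn_ce_loss = torch.tensor(1.7)
total = weigher([ctc_loss, attn_ce_loss, alignment_loss(attn, align_targets)])
```

In the usual hybrid CTC/attention setup the interpolation weight between the losses is hand-tuned; letting the log-variances be learned replaces that manual search, which is presumably part of what drives the faster convergence the abstract reports.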