Hardik B. Sailor, S. Deena, Md. Asif Jalal, R. Lileikyte, Thomas Hain
{"title":"Unsupervised Adaptation of Acoustic Models for ASR Using Utterance-Level Embeddings from Squeeze and Excitation Networks","authors":"Hardik B. Sailor, S. Deena, Md. Asif Jalal, R. Lileikyte, Thomas Hain","doi":"10.1109/ASRU46091.2019.9003755","DOIUrl":null,"url":null,"abstract":"This paper proposes the adaptation of neural network-based acoustic models using a Squeeze-and-Excitation (SE) network for automatic speech recognition (ASR). In particular, this work explores to use the SE network to learn utterance-level embeddings. The acoustic modelling is performed using Light Gated Recurrent Units (LiGRU). The utterance embed-dings are learned from hidden unit activations jointly with LiGRU and used to scale respective activations of hidden layers in the LiGRU network. The advantage of such approach is that it does not require domain labels, such as speakers and noise to be known in order to perform the adaptation, thereby providing unsupervised adaptation. The global average and attentive pooling are applied on hidden units to extract utterance-level information that represents the speakers and acoustic conditions. ASR experiments were carried out on the TIMIT and Aurora 4 corpora. The proposed model achieves better performance on both the datasets compared to their respective baselines with relative improvements of 5.59% and 5.54% for TIMIT and Aurora 4 database, respectively. 
These experiments show the potential of using the conditioning information learned via utterance embeddings in the SE network to adapt acoustic models for speakers, noise, and other acoustic conditions.","PeriodicalId":150913,"journal":{"name":"2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASRU46091.2019.9003755","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
This paper proposes the adaptation of neural network-based acoustic models using a Squeeze-and-Excitation (SE) network for automatic speech recognition (ASR). In particular, this work explores using the SE network to learn utterance-level embeddings. Acoustic modelling is performed using Light Gated Recurrent Units (LiGRU). The utterance embeddings are learned from hidden unit activations jointly with the LiGRU and are used to scale the respective activations of hidden layers in the LiGRU network. The advantage of such an approach is that it does not require domain labels, such as speaker or noise labels, to be known in order to perform the adaptation, thereby providing unsupervised adaptation. Global average and attentive pooling are applied to the hidden units to extract utterance-level information that represents the speakers and acoustic conditions. ASR experiments were carried out on the TIMIT and Aurora 4 corpora. The proposed model outperforms the respective baselines on both datasets, with relative improvements of 5.59% on TIMIT and 5.54% on Aurora 4. These experiments show the potential of using the conditioning information learned via utterance embeddings in the SE network to adapt acoustic models to speakers, noise, and other acoustic conditions.
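The core mechanism the abstract describes (squeeze an utterance-level embedding out of the hidden activations, then use it to rescale those activations) can be sketched in a few lines. This is a minimal, hedged illustration with NumPy, not the paper's implementation: the bottleneck size, the tanh nonlinearity in the excitation MLP, and the function/weight names (`se_utterance_gating`, `w1`, `w2`) are assumptions for the sketch; the paper learns these jointly with the LiGRU and also explores attentive pooling in place of the simple average used here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_utterance_gating(h, w1, w2):
    """SE-style gating of recurrent hidden states (illustrative sketch).

    h  : (T, D) hidden activations for one utterance (T frames, D units)
    w1 : (D, B) bottleneck projection weights, B < D (assumed shapes)
    w2 : (B, D) expansion weights back to the hidden dimension
    Returns the rescaled activations, shape (T, D).
    """
    # Squeeze: global average pooling over time gives one
    # utterance-level embedding summarising speaker/acoustic conditions.
    z = h.mean(axis=0)                     # (D,)
    # Excitation: small bottleneck MLP; sigmoid yields per-unit
    # gates in (0, 1).
    s = sigmoid(np.tanh(z @ w1) @ w2)      # (D,)
    # Scale: the same utterance-level gate multiplies every frame.
    return h * s
```

Because the gate is computed from the utterance itself, no speaker or noise labels are needed at test time, which is what makes the adaptation unsupervised.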