{"title":"面向远场语音增强的变分自编码器框架中的联合分布学习","authors":"Mahesh K. Chelimilla, Shashi Kumar, S. Rath","doi":"10.1109/ASRU46091.2019.9004024","DOIUrl":null,"url":null,"abstract":"Far-field speech recognition is a challenging task as speech recognizers trained on close-talk speech do not generalize well to far-field speech. In order to handle such issues, neural network based speech enhancement is typically applied using denoising autoencoder (DA). Recently generative models have become more popular particularly in the field of image generation and translation. One of the popular techniques in this generative framework is variational autoencoder (VAE). In this paper we consider VAE for speech enhancement task in the context of automatic speech recognition (ASR). We propose a novel modification in the conventional VAE to model joint distribution of the far-field and close-talk features for a common latent space representation, which we refer to as joint-VAE. Unlike conventional VAE, joint-VAE involves one encoder network that projects the far-field features onto a latent space and two decoder networks that generate close-talk and far-field features separately. Experiments conducted on the AMI corpus show that it gives a relative WER improvement of 9% compared to conventional DA and a relative improvement of 19.2% compared to mismatched train and test scenario.","PeriodicalId":150913,"journal":{"name":"2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Joint Distribution Learning in the Framework of Variational Autoencoders for Far-Field Speech Enhancement\",\"authors\":\"Mahesh K. Chelimilla, Shashi Kumar, S. Rath\",\"doi\":\"10.1109/ASRU46091.2019.9004024\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Far-field speech recognition is a challenging task as speech recognizers trained on close-talk speech do not generalize well to far-field speech. In order to handle such issues, neural network based speech enhancement is typically applied using denoising autoencoder (DA). Recently generative models have become more popular particularly in the field of image generation and translation. One of the popular techniques in this generative framework is variational autoencoder (VAE). In this paper we consider VAE for speech enhancement task in the context of automatic speech recognition (ASR). We propose a novel modification in the conventional VAE to model joint distribution of the far-field and close-talk features for a common latent space representation, which we refer to as joint-VAE. Unlike conventional VAE, joint-VAE involves one encoder network that projects the far-field features onto a latent space and two decoder networks that generate close-talk and far-field features separately. 
Experiments conducted on the AMI corpus show that it gives a relative WER improvement of 9% compared to conventional DA and a relative improvement of 19.2% compared to mismatched train and test scenario.\",\"PeriodicalId\":150913,\"journal\":{\"name\":\"2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)\",\"volume\":\"61 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ASRU46091.2019.9004024\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASRU46091.2019.9004024","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Far-field speech recognition is challenging because speech recognizers trained on close-talk speech do not generalize well to far-field speech. To handle this mismatch, neural-network-based speech enhancement is typically applied using a denoising autoencoder (DA). Recently, generative models have become popular, particularly in image generation and translation; one widely used technique in this generative framework is the variational autoencoder (VAE). In this paper we consider the VAE for the speech enhancement task in the context of automatic speech recognition (ASR). We propose a novel modification of the conventional VAE, which we refer to as joint-VAE, that models the joint distribution of far-field and close-talk features through a common latent-space representation. Unlike the conventional VAE, the joint-VAE involves one encoder network that projects the far-field features onto a latent space and two decoder networks that generate the close-talk and far-field features separately. Experiments conducted on the AMI corpus show a relative WER improvement of 9% over a conventional DA and of 19.2% over the mismatched train/test scenario.
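To make the described architecture concrete, below is a minimal sketch of a joint-VAE as outlined in the abstract: a single encoder maps far-field features to a latent Gaussian, and two decoders reconstruct close-talk and far-field features from the shared latent code. The feature dimension, layer sizes, MSE reconstruction terms, and unit loss weights are illustrative assumptions, not details taken from the paper.

    # Minimal joint-VAE sketch (PyTorch). Sizes and loss weighting are
    # illustrative assumptions, not the paper's exact configuration.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class JointVAE(nn.Module):
        def __init__(self, feat_dim=40, hidden_dim=512, latent_dim=64):
            super().__init__()
            # Single encoder: far-field features -> latent Gaussian parameters.
            self.encoder = nn.Sequential(
                nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            )
            self.mu = nn.Linear(hidden_dim, latent_dim)
            self.logvar = nn.Linear(hidden_dim, latent_dim)
            # Two decoders sharing the latent space: one reconstructs the
            # close-talk features, the other the far-field features.
            self.decoder_close = nn.Sequential(
                nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, feat_dim),
            )
            self.decoder_far = nn.Sequential(
                nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, feat_dim),
            )

        def forward(self, x_far):
            h = self.encoder(x_far)
            mu, logvar = self.mu(h), self.logvar(h)
            # Reparameterization trick: z = mu + sigma * eps.
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            return self.decoder_close(z), self.decoder_far(z), mu, logvar

    def joint_vae_loss(x_close, x_far, model):
        # Two reconstruction terms (one per decoder) plus the usual KL term.
        rec_close, rec_far, mu, logvar = model(x_far)
        rec = F.mse_loss(rec_close, x_close) + F.mse_loss(rec_far, x_far)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + kl

At inference time one would presumably encode the far-field features and keep only the close-talk decoder's output as the enhanced features passed to the ASR back end.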