Acoustic Model Adaptation In Reverberant Conditions Using Multi-task Learned Embeddings

Aditya Raikar, Meet H. Soni, Ashish Panda, S. Kopparapu
{"title":"基于多任务学习嵌入的混响条件下声学模型自适应","authors":"Aditya Raikar, Meet H. Soni, Ashish Panda, S. Kopparapu","doi":"10.23919/eusipco55093.2022.9909579","DOIUrl":null,"url":null,"abstract":"Acoustic environment plays a major role in the performance of a large-scale Automatic Speech Recognition (ASR) system. It becomes a lot more challenging when substantial amount of distortions, such as background noise and reverberations are present. Of late, it has been standard to use i-vectors for Acoustic Model (AM) adaptation. Embeddings from Single Task Learned (STL) neural network systems, such as x-vectors and r-vectors, have also been used to a varying degree of success. This paper proposes the use of Multi Task Learned (MTL) embeddings for large vocabulary hybrid acoustic model adaptation in reverberant environments. MTL embeddings are extracted from an affine layer of the deep neural network trained on multiple tasks such as speaker information and room information. Our experiments show that the proposed MTL embeddings outperform i-vectors, x-vectors and r-vectors for AM adaptation in reverberant conditions. Besides, it has been demonstrated that the proposed MTL-embeddings can be fused with i-vectors to provide further improvement. We provide results on artificially reverberated Librispeech data as well as real world reverberated HRRE data. On Librispeech database, the proposed method provides an improvement of 10.9% and 8.7% relative to i-vector in reverberated test-clean and test-other data respectively, while an improvement of 7% is observed relative to i-vector when the proposed system is tested on HRRE dataset.","PeriodicalId":231263,"journal":{"name":"2022 30th European Signal Processing Conference (EUSIPCO)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Acoustic Model Adaptation In Reverberant Conditions Using Multi-task Learned Embeddings\",\"authors\":\"Aditya Raikar, Meet H. Soni, Ashish Panda, S. Kopparapu\",\"doi\":\"10.23919/eusipco55093.2022.9909579\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Acoustic environment plays a major role in the performance of a large-scale Automatic Speech Recognition (ASR) system. It becomes a lot more challenging when substantial amount of distortions, such as background noise and reverberations are present. Of late, it has been standard to use i-vectors for Acoustic Model (AM) adaptation. Embeddings from Single Task Learned (STL) neural network systems, such as x-vectors and r-vectors, have also been used to a varying degree of success. This paper proposes the use of Multi Task Learned (MTL) embeddings for large vocabulary hybrid acoustic model adaptation in reverberant environments. MTL embeddings are extracted from an affine layer of the deep neural network trained on multiple tasks such as speaker information and room information. Our experiments show that the proposed MTL embeddings outperform i-vectors, x-vectors and r-vectors for AM adaptation in reverberant conditions. Besides, it has been demonstrated that the proposed MTL-embeddings can be fused with i-vectors to provide further improvement. We provide results on artificially reverberated Librispeech data as well as real world reverberated HRRE data. 
On Librispeech database, the proposed method provides an improvement of 10.9% and 8.7% relative to i-vector in reverberated test-clean and test-other data respectively, while an improvement of 7% is observed relative to i-vector when the proposed system is tested on HRRE dataset.\",\"PeriodicalId\":231263,\"journal\":{\"name\":\"2022 30th European Signal Processing Conference (EUSIPCO)\",\"volume\":\"10 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-08-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 30th European Signal Processing Conference (EUSIPCO)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/eusipco55093.2022.9909579\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 30th European Signal Processing Conference (EUSIPCO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/eusipco55093.2022.9909579","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Acoustic environment plays a major role in the performance of a large-scale Automatic Speech Recognition (ASR) system. It becomes a lot more challenging when substantial amount of distortions, such as background noise and reverberations are present. Of late, it has been standard to use i-vectors for Acoustic Model (AM) adaptation. Embeddings from Single Task Learned (STL) neural network systems, such as x-vectors and r-vectors, have also been used to a varying degree of success. This paper proposes the use of Multi Task Learned (MTL) embeddings for large vocabulary hybrid acoustic model adaptation in reverberant environments. MTL embeddings are extracted from an affine layer of the deep neural network trained on multiple tasks such as speaker information and room information. Our experiments show that the proposed MTL embeddings outperform i-vectors, x-vectors and r-vectors for AM adaptation in reverberant conditions. Besides, it has been demonstrated that the proposed MTL-embeddings can be fused with i-vectors to provide further improvement. We provide results on artificially reverberated Librispeech data as well as real world reverberated HRRE data. On Librispeech database, the proposed method provides an improvement of 10.9% and 8.7% relative to i-vector in reverberated test-clean and test-other data respectively, while an improvement of 7% is observed relative to i-vector when the proposed system is tested on HRRE dataset.
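
The abstract describes extracting MTL embeddings from an affine layer of a deep neural network trained jointly on auxiliary tasks such as speaker and room classification, and fusing those embeddings with i-vectors for acoustic model adaptation. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' implementation; the layer sizes, task label counts, frame averaging, and concatenation-based fusion are assumptions made for illustration only.

```python
# Hypothetical sketch of a multi-task learned (MTL) embedding extractor:
# a shared trunk trained on a speaker-ID head and a room-ID head, with the
# output of a shared affine layer used as the adaptation embedding.
# All dimensions and task cardinalities are illustrative assumptions.
import torch
import torch.nn as nn


class MTLEmbeddingNet(nn.Module):
    def __init__(self, feat_dim=40, embed_dim=128, num_speakers=1000, num_rooms=50):
        super().__init__()
        # Shared trunk over frame-level acoustic features.
        self.trunk = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
        )
        # Affine layer whose activations serve as the MTL embedding.
        self.embedding_layer = nn.Linear(512, embed_dim)
        # Task-specific heads, each trained with its own cross-entropy loss.
        self.speaker_head = nn.Linear(embed_dim, num_speakers)
        self.room_head = nn.Linear(embed_dim, num_rooms)

    def forward(self, feats):
        h = self.trunk(feats)
        emb = self.embedding_layer(h)
        return self.speaker_head(emb), self.room_head(emb), emb

    @torch.no_grad()
    def extract_embedding(self, feats):
        # Average frame-level embeddings into one utterance-level vector.
        _, _, emb = self.forward(feats)
        return emb.mean(dim=0)


if __name__ == "__main__":
    net = MTLEmbeddingNet()
    frames = torch.randn(200, 40)              # 200 frames of 40-dim features
    mtl_emb = net.extract_embedding(frames)    # 128-dim utterance embedding
    i_vector = torch.randn(100)                # placeholder i-vector
    # One plausible (assumed) fusion: concatenate before appending to the
    # acoustic model's input features.
    adaptation_vec = torch.cat([mtl_emb, i_vector])
    print(adaptation_vec.shape)
```

The concatenation above is only one plausible reading of the "fusion" mentioned in the abstract; the paper should be consulted for the exact fusion scheme and network configuration.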