Advances in Speaker Recognition for Multilingual Conversational Telephone Speech: The JHU-MIT System for NIST SRE20 CTS Challenge

J. Villalba, B. J. Borgstrom, Saurabh Kataria, Jaejin Cho, P. Torres-Carrasquillo, N. Dehak
{"title":"Advances in Speaker Recognition for Multilingual Conversational Telephone Speech: The JHU-MIT System for NIST SRE20 CTS Challenge","authors":"J. Villalba, B. J. Borgstrom, Saurabh Kataria, Jaejin Cho, P. Torres-Carrasquillo, N. Dehak","doi":"10.21437/odyssey.2022-47","DOIUrl":null,"url":null,"abstract":"We present a condensed description of the joint effort of JHU-CLSP/HLTCOE and MIT-LL for NIST SRE20. NIST SRE20 CTS consisted of multilingual conversational telephone speech. The set of languages included in the evaluation was not pro-vided, encouraging the participants to develop systems robust to any language. We evaluated x-vector architectures based on ResNet, squeeze-excitation ResNets, Transformers and Ef-ficientNets. Though squeeze-excitation ResNets and Efficient-Nets provide superior performance in in-domain tasks like VoxCeleb, regular ResNet34 was more robust in the challenge sce-nario. On the contrary, squeeze-excitation networks over-fitted to the training data, mostly in English. We also proposed a novel PLDA mixture and k-NN PLDA back-ends to handle the multilingual trials. The former clusters the x-vector space ex-pecting that each cluster will correspond to a language fam-ily. The latter trains a PLDA model adapted to each enrollment speaker using the nearest speakers–i.e., those with similar language/channel. The k-NN back-end improved Act. Cprimary (Cp) by 68% in SRE16-19 and 22% in SRE20 Progress w.r.t. a single adapted PLDA back-end. Our best single system achieved Act. Cp=0.110 in SRE20 progress. Meanwhile, our best fusion obtained Act. Cp=0.110 in the progress–8% better than single– and Cp=0.087 in the eval set.","PeriodicalId":315750,"journal":{"name":"The Speaker and Language Recognition Workshop","volume":"169 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Speaker and Language Recognition Workshop","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.21437/odyssey.2022-47","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

We present a condensed description of the joint effort of JHU-CLSP/HLTCOE and MIT-LL for NIST SRE20. NIST SRE20 CTS consisted of multilingual conversational telephone speech. The set of languages included in the evaluation was not provided, encouraging participants to develop systems robust to any language. We evaluated x-vector architectures based on ResNets, squeeze-excitation ResNets, Transformers, and EfficientNets. Though squeeze-excitation ResNets and EfficientNets provide superior performance on in-domain tasks like VoxCeleb, the regular ResNet34 was more robust in the challenge scenario; the squeeze-excitation networks over-fitted to the training data, which was mostly in English. We also proposed novel PLDA mixture and k-NN PLDA back-ends to handle the multilingual trials. The former clusters the x-vector space, expecting each cluster to correspond to a language family. The latter trains a PLDA model adapted to each enrollment speaker using the nearest training speakers, i.e., those with a similar language/channel. The k-NN back-end improved actual Cprimary (Act. Cp) by 68% on SRE16-19 and by 22% on the SRE20 Progress set with respect to a single adapted PLDA back-end. Our best single system achieved Act. Cp=0.110 on the SRE20 Progress set. Meanwhile, our best fusion obtained Act. Cp=0.110 on the Progress set (8% better than the single system) and Act. Cp=0.087 on the eval set.
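To make the PLDA mixture idea concrete, here is a minimal sketch, assuming a simplified two-covariance PLDA and k-means clustering as stand-ins for whatever PLDA variant and clustering the actual system used. Each cluster (expected to align with a language family) gets its own PLDA, and a trial is scored as a soft-assignment-weighted mix of per-cluster scores. All function names, parameters, and the toy data are illustrative, not the authors' implementation.

```python
# Illustrative sketch of a PLDA mixture back-end (not the authors' code):
# cluster the x-vector space, fit a simplified two-covariance PLDA per
# cluster, and mix per-cluster LLRs with soft cluster-assignment weights.
import numpy as np
from sklearn.cluster import KMeans

def fit_two_cov_plda(embeds, spk_ids):
    """Return (B, W): between- and within-speaker covariance estimates."""
    spks = np.unique(spk_ids)
    means = np.stack([embeds[spk_ids == s].mean(axis=0) for s in spks])
    B = np.cov(means.T, bias=True)                 # between-speaker scatter
    resid = np.vstack([embeds[spk_ids == s] - m for s, m in zip(spks, means)])
    W = np.cov(resid.T, bias=True)                 # within-speaker scatter
    return B, W

def plda_llr(enroll, test, B, W):
    """Two-covariance PLDA log-likelihood ratio (inputs assumed centered)."""
    T = B + W
    S_same = np.block([[T, B], [B, T]])            # shared identity variable
    S_diff = np.block([[T, np.zeros_like(B)], [np.zeros_like(B), T]])
    x = np.concatenate([enroll, test])
    ll = lambda S: -0.5 * (np.linalg.slogdet(S)[1] + x @ np.linalg.solve(S, x))
    return ll(S_same) - ll(S_diff)

def mixture_plda_score(enroll, test, kmeans, pldas):
    """Soft-assignment-weighted combination of per-cluster PLDA scores."""
    d = kmeans.transform(np.vstack([enroll, test]).mean(0, keepdims=True))[0]
    w = np.exp(-d) / np.exp(-d).sum()              # softmax over -distance
    return sum(wc * plda_llr(enroll - c, test - c, B, W)
               for wc, c, (B, W) in zip(w, kmeans.cluster_centers_, pldas))

# Toy usage: embeds are (N, D) x-vectors, spk_ids are integer speaker labels.
rng = np.random.default_rng(0)
embeds = rng.normal(size=(600, 16))
spk_ids = rng.integers(0, 60, size=600)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(embeds)
pldas = [fit_two_cov_plda(embeds[kmeans.labels_ == c],
                          spk_ids[kmeans.labels_ == c]) for c in range(4)]
print(mixture_plda_score(embeds[0], embeds[1], kmeans, pldas))
```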
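Likewise, a hedged sketch of the k-NN PLDA back-end, reusing `fit_two_cov_plda` and `plda_llr` from the block above: for each enrollment x-vector, the PLDA model is re-estimated from the nearest training speakers (those with a similar language/channel) and interpolated with the global model. The cosine-similarity neighbor search, `k=50`, and `alpha=0.5` are assumptions made for illustration, not the paper's settings.

```python
def knn_adapted_plda(enroll, embeds, spk_ids, B_glob, W_glob, k=50, alpha=0.5):
    """Re-estimate PLDA from the k training speakers nearest to `enroll`."""
    spks = np.unique(spk_ids)
    means = np.stack([embeds[spk_ids == s].mean(axis=0) for s in spks])
    # Cosine similarity between the enrollment x-vector and speaker means.
    sims = (means @ enroll) / (np.linalg.norm(means, axis=1)
                               * np.linalg.norm(enroll) + 1e-8)
    nearest = spks[np.argsort(-sims)[:k]]
    mask = np.isin(spk_ids, nearest)
    B_knn, W_knn = fit_two_cov_plda(embeds[mask], spk_ids[mask])
    # Interpolate neighborhood and global estimates (MAP-style smoothing).
    return (alpha * B_knn + (1 - alpha) * B_glob,
            alpha * W_knn + (1 - alpha) * W_glob)

# One adapted back-end per enrollment speaker, then score as before.
B_glob, W_glob = fit_two_cov_plda(embeds, spk_ids)
mu = embeds.mean(axis=0)
B_a, W_a = knn_adapted_plda(embeds[0], embeds, spk_ids, B_glob, W_glob)
print(plda_llr(embeds[0] - mu, embeds[1] - mu, B_a, W_a))
```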