Advances in Speaker Recognition for Multilingual Conversational Telephone Speech: The JHU-MIT System for NIST SRE20 CTS Challenge

J. Villalba, B. J. Borgstrom, Saurabh Kataria, Jaejin Cho, P. Torres-Carrasquillo, N. Dehak

The Speaker and Language Recognition Workshop (Odyssey), 2022-06-28. DOI: 10.21437/odyssey.2022-47
Citations: 2
Abstract
We present a condensed description of the joint effort of JHU-CLSP/HLTCOE and MIT-LL for NIST SRE20. NIST SRE20 CTS consisted of multilingual conversational telephone speech. The set of languages included in the evaluation was not provided, encouraging participants to develop systems robust to any language. We evaluated x-vector architectures based on ResNet, squeeze-excitation ResNets, Transformers, and EfficientNets. Although squeeze-excitation ResNets and EfficientNets provide superior performance on in-domain tasks like VoxCeleb, the regular ResNet34 was more robust in the challenge scenario. In contrast, the squeeze-excitation networks over-fitted to the training data, which was mostly in English. We also proposed novel PLDA mixture and k-NN PLDA back-ends to handle the multilingual trials. The former clusters the x-vector space, expecting each cluster to correspond to a language family. The latter trains a PLDA model adapted to each enrollment speaker using the nearest training speakers, i.e., those with similar language/channel. The k-NN back-end improved Act. Cprimary (Cp) by 68% in SRE16-19 and 22% in SRE20 Progress relative to a single adapted PLDA back-end. Our best single system achieved Act. Cp=0.120 in SRE20 Progress. Meanwhile, our best fusion obtained Act. Cp=0.110 in the Progress set (8% better than the single system) and Cp=0.087 in the eval set.
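To make the architecture comparison concrete, below is a minimal PyTorch sketch of the squeeze-and-excitation block that distinguishes SE-ResNets from the plain ResNet34 encoder. The reduction ratio and pooling choices are generic defaults assumed for illustration, not the exact configuration from the paper.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: channel-wise gating from globally pooled stats."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, freq, time)
        s = x.mean(dim=(2, 3))          # squeeze: global average pool per channel
        w = self.fc(s)                  # excitation: per-channel gates in (0, 1)
        return x * w[:, :, None, None]  # re-scale feature maps channel-wise
```

This channel re-weighting is what lets SE networks specialize aggressively to the training distribution, which is consistent with the over-fitting to English data reported above.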
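The PLDA mixture back-end can be illustrated as follows: cluster the x-vector space and route each trial to the back-end of the enrollment side's cluster. This is only a sketch under stated assumptions; `SimpleBackend` is a hypothetical centered-cosine stand-in for a real per-cluster PLDA, and all names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

class SimpleBackend:
    """Hypothetical stand-in for a per-cluster PLDA scorer."""

    def fit(self, X: np.ndarray) -> "SimpleBackend":
        self.mu = X.mean(axis=0)
        return self

    def score(self, enroll: np.ndarray, test: np.ndarray) -> float:
        # Placeholder similarity (centered cosine), NOT actual PLDA scoring.
        e, t = enroll - self.mu, test - self.mu
        return float(e @ t / (np.linalg.norm(e) * np.linalg.norm(t) + 1e-9))

def train_mixture_backend(xvectors: np.ndarray, n_clusters: int = 4):
    """Cluster the x-vector space; each cluster is expected to gather one
    language family, and gets its own back-end trained on its members."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(xvectors)
    backends = [SimpleBackend().fit(xvectors[km.labels_ == k])
                for k in range(n_clusters)]
    return km, backends

def score_trial(km, backends, enroll: np.ndarray, test: np.ndarray) -> float:
    # Route the trial to the cluster of the enrollment x-vector.
    k = int(km.predict(enroll[None, :])[0])
    return backends[k].score(enroll, test)
```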
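Similarly, a rough sketch of the k-NN idea: for each enrollment speaker, select the k nearest training speakers by x-vector similarity (a proxy for similar language/channel) and estimate adaptation statistics from their data only. The Gaussian mean/covariance stand-in below is an assumption for illustration; the actual system adapts a PLDA model on the selected cohort.

```python
import numpy as np

def knn_adapt_backend(enroll_xvec: np.ndarray,
                      spk_means: np.ndarray,       # (n_speakers, dim) mean x-vectors
                      spk_xvecs: list,             # per-speaker (n_i, dim) matrices
                      k: int = 200):
    """Pick the k training speakers nearest to the enrollment x-vector and
    compute adaptation statistics from their pooled x-vectors only."""
    sims = spk_means @ enroll_xvec / (
        np.linalg.norm(spk_means, axis=1) * np.linalg.norm(enroll_xvec) + 1e-9)
    nearest = np.argsort(-sims)[:k]                  # indices of k nearest speakers
    X = np.vstack([spk_xvecs[i] for i in nearest])   # pooled cohort data
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)  # stand-in adaptation stats
    return mu, cov
```

Because each enrollment speaker gets a back-end fitted to a cohort matching its language and channel, this is one plausible reading of why the k-NN back-end yields the large gains over a single adapted PLDA reported in the abstract.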