IITG-Indigo Submissions for NIST 2018 Speaker Recognition Evaluation and Post-Challenge Improvements
K. Singh, Nagendra Kumar, R. Sinha, Shreyas Ramoji, Sriram Ganapathy
2020 National Conference on Communications (NCC), February 2020
DOI: 10.1109/NCC48643.2020.9056055
Citations: 0
Abstract
This paper describes the submissions of team Indigo at the Indian Institute of Technology Guwahati (IITG) to the NIST 2018 Speaker Recognition Evaluation (SRE18) challenge. These speaker verification (SV) systems were developed for the fixed training condition task in SRE18. The evaluation data in SRE18 is derived from two corpora: (i) Call My Net 2 (CMN2), and (ii) Video Annotation for Speech Technology (VAST). The VAST set is obtained by extracting audio from videos with heavy background music and noise, and thus helps assess the robustness of the SV systems. A number of sub-systems were developed, differing in front-end modeling paradigms, back-end classifiers, and the suppression of repeating patterns in the data. The fusion of sub-systems was submitted as the primary system, which achieved an actual detection cost function (actDCF) of 0.77 and an equal error rate (EER) of 13.79% on the SRE18 evaluation data. Post-challenge efforts include domain adaptation of the scores and voice activity detection using a deep neural network. With these enhancements, for the VAST trials, the best single sub-system achieves relative reductions of 38.4% and 11.6% in actDCF and EER, respectively.
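The abstract reports results in terms of two standard speaker-verification metrics: the equal error rate (EER), the operating point where the false-accept and false-reject rates coincide, and the detection cost function (DCF), a weighted sum of miss and false-alarm probabilities. As a rough illustration of how these metrics are computed from trial scores, here is a minimal sketch; the score distributions and the cost parameters (`p_target`, `c_miss`, `c_fa`) are illustrative placeholders, not the official SRE18 settings or the authors' actual scores.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep thresholds to find where false-accept and false-reject rates cross."""
    thresholds = np.sort(np.unique(np.concatenate([genuine, impostor])))
    eer, best_gap = 0.5, np.inf
    for t in thresholds:
        frr = np.mean(genuine < t)    # target trials incorrectly rejected
        far = np.mean(impostor >= t)  # non-target trials incorrectly accepted
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

def detection_cost(genuine, impostor, threshold,
                   p_target=0.05, c_miss=1.0, c_fa=1.0):
    """Weighted detection cost at a fixed threshold (parameters are illustrative)."""
    p_miss = np.mean(genuine < threshold)
    p_fa = np.mean(impostor >= threshold)
    return c_miss * p_miss * p_target + c_fa * p_fa * (1 - p_target)

# Synthetic, well-separated score distributions for demonstration only.
rng = np.random.default_rng(0)
genuine = rng.normal(2.0, 1.0, 1000)    # target-trial scores
impostor = rng.normal(-2.0, 1.0, 1000)  # non-target-trial scores
print(f"EER ≈ {100 * equal_error_rate(genuine, impostor):.2f}%")
```

The "actual" DCF (actDCF) reported in the paper is the cost at the submitted decision threshold, as opposed to the minimum cost over all thresholds, so calibration of the threshold matters as much as raw discrimination.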