Advanced b-vector system based deep neural network as classifier for speaker verification
Hee-Soo Heo, Il-Ho Yang, Myung-Jae Kim, Sung-Hyun Yoon, Ha-jin Yu
2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), March 20, 2016
DOI: 10.1109/ICASSP.2016.7472722
Citations: 6
Abstract
Few studies on speaker verification have directly used a deep neural network (DNN) as a classifier. It is difficult to directly apply a DNN as a discriminative model to speaker-verification tasks because the training data for each speaker are very limited. The b-vector was proposed to address this problem; however, the DNN with b-vectors showed lower performance than the conventional i-vector probabilistic linear-discriminant analysis (PLDA) system. In this paper, we propose an improved version of the b-vector DNN system, which incorporates the background speakers' information into the DNN. In this study, each input feature is paired with representative background speakers' feature vectors, and a b-vector is extracted from each pair, thus feeding background information into the DNN. We confirmed that the performance improvements of the proposed system compensate for the shortcomings of conventional b-vectors in experiments carried out using the National Institute of Standards and Technology 2008 Speaker-Recognition Evaluation tests.
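The abstract does not spell out how a b-vector is built from a feature pair. As a minimal sketch, assuming the common element-wise formulation from the b-vector literature (concatenating the element-wise sum, absolute difference, and product of the two vectors), the pairing of one input feature with several background-speaker features might look like this; the function name, dimensions, and random features are all illustrative, not from the paper:

```python
import numpy as np

def b_vector(x, y):
    """Pair-comparison vector for two same-length feature vectors.

    Assumption: the element-wise sum, absolute difference, and product
    are concatenated into a single vector (a common b-vector recipe;
    the paper's exact construction may differ).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.concatenate([x + y, np.abs(x - y), x * y])

# Pair one input feature with several background-speaker features,
# yielding one b-vector per pair; these pairs would serve as DNN inputs.
rng = np.random.default_rng(0)
input_feat = rng.standard_normal(4)            # hypothetical 4-dim feature
background = rng.standard_normal((3, 4))       # 3 hypothetical background speakers
b_vectors = np.stack([b_vector(input_feat, bg) for bg in background])
print(b_vectors.shape)  # (3, 12): 3 pairs, each 3 * 4 dims
```

Each row can then be scored by the DNN, so the classifier sees the input feature only relative to known background speakers rather than in isolation.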