{"title":"基于MFCC的互补特征集融合改进闭集文本无关说话人识别","authors":"Sandipan Chakrobortyt, Anindya Royt, G. Saha","doi":"10.1109/ICIT.2006.372388","DOIUrl":null,"url":null,"abstract":"A state of the art speaker identification (SI) system requires a robust feature extraction unit followed by a speaker modeling scheme for generalized representation of these features. Over the years, Mel-frequency cepstral coefficients (MFCC) modeled on the human auditory system have been used as a standard acoustic feature set for SI applications. However, due to the structure of its filter bank, it captures vocal tract characteristics more effectively in the lower frequency regions. This work proposes a new set of features using a complementary filter bank structure which improves distinguishability of speaker specific cues present in the higher frequency zone. Unlike high level features that are difficult to extract, the proposed feature set involves little computational burden during the extraction process. When combined with MFCC via a parallel implementation of speaker models, the proposed feature improves performance baseline of MFCC based system. The proposition is validated by experiments conducted on two different kinds of databases namely YOHO (microphone speech) and POLYCOST (telephone speech) with Gaussian mixture model (GMM) as a classifier for various model orders.","PeriodicalId":103105,"journal":{"name":"2006 IEEE International Conference on Industrial Technology","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"35","resultStr":"{\"title\":\"Fusion of a Complementary Feature Set with MFCC for Improved Closed Set Text-Independent Speaker Identification\",\"authors\":\"Sandipan Chakrobortyt, Anindya Royt, G. 
Saha\",\"doi\":\"10.1109/ICIT.2006.372388\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A state of the art speaker identification (SI) system requires a robust feature extraction unit followed by a speaker modeling scheme for generalized representation of these features. Over the years, Mel-frequency cepstral coefficients (MFCC) modeled on the human auditory system have been used as a standard acoustic feature set for SI applications. However, due to the structure of its filter bank, it captures vocal tract characteristics more effectively in the lower frequency regions. This work proposes a new set of features using a complementary filter bank structure which improves distinguishability of speaker specific cues present in the higher frequency zone. Unlike high level features that are difficult to extract, the proposed feature set involves little computational burden during the extraction process. When combined with MFCC via a parallel implementation of speaker models, the proposed feature improves performance baseline of MFCC based system. 
The proposition is validated by experiments conducted on two different kinds of databases namely YOHO (microphone speech) and POLYCOST (telephone speech) with Gaussian mixture model (GMM) as a classifier for various model orders.\",\"PeriodicalId\":103105,\"journal\":{\"name\":\"2006 IEEE International Conference on Industrial Technology\",\"volume\":\"25 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2006-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"35\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2006 IEEE International Conference on Industrial Technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICIT.2006.372388\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2006 IEEE International Conference on Industrial Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIT.2006.372388","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Fusion of a Complementary Feature Set with MFCC for Improved Closed Set Text-Independent Speaker Identification
A state-of-the-art speaker identification (SI) system requires a robust feature extraction unit followed by a speaker modeling scheme that provides a generalized representation of those features. Over the years, Mel-frequency cepstral coefficients (MFCCs), modeled on the human auditory system, have served as a standard acoustic feature set for SI applications. However, because of the structure of its filter bank, MFCC captures vocal tract characteristics more effectively in the lower frequency regions. This work proposes a new feature set based on a complementary filter bank structure that improves the distinguishability of speaker-specific cues present in the higher frequency zone. Unlike high-level features, which are difficult to extract, the proposed feature set incurs little computational burden during extraction. When combined with MFCC through a parallel implementation of speaker models, the proposed features raise the performance baseline of an MFCC-based system. The proposition is validated by experiments on two different kinds of databases, namely YOHO (microphone speech) and POLYCOST (telephone speech), with a Gaussian mixture model (GMM) classifier at various model orders.
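The complementary filter bank described in the abstract can be pictured as a mel-style triangular filter bank flipped on the frequency axis, so that filters crowd the high-frequency region instead of the low one. A minimal NumPy sketch of that idea follows; all function names and parameter choices here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def hz_to_mel(f):
    # standard mel scale: compresses high frequencies
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_edges(n_filt, sr):
    # band edges equally spaced on the mel scale -> dense at LOW Hz
    m = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filt + 2)
    return mel_to_hz(m)

def inverted_mel_edges(n_filt, sr):
    # flip the mel edges across the band: filters now dense at HIGH Hz,
    # giving the complementary emphasis the abstract motivates
    return sr / 2.0 - mel_edges(n_filt, sr)[::-1]

def triangular_filterbank(edges_hz, n_fft, sr):
    # build overlapping triangular filters over the rfft bins
    bins = np.floor((n_fft + 1) * edges_hz / sr).astype(int)
    n_filt = len(edges_hz) - 2
    fb = np.zeros((n_filt, n_fft // 2 + 1))
    for i in range(n_filt):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):
            fb[i, k] = (k - l) / max(c - l, 1)   # rising edge
        for k in range(c, r):
            fb[i, k] = (r - k) / max(r - c, 1)   # falling edge
    return fb

def cepstra(power_spec, fb, n_ceps=13):
    # log filterbank energies followed by a DCT-II, as in MFCC extraction
    e = np.log(fb @ power_spec + 1e-10)
    N = len(e)
    k = np.arange(n_ceps)[:, None]
    n = np.arange(N)[None, :]
    return np.cos(np.pi * k * (2 * n + 1) / (2 * N)) @ e
```

For the fusion stage, the abstract's parallel implementation would train one GMM per speaker on each feature stream and combine the two per-speaker log-likelihood scores (e.g. by a weighted sum) before picking the maximum; the weighting is an assumption here, as the abstract does not spell out the combination rule.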