{"title":"The Adaptation Schemes In PR-SVM Based Language Recognition","authors":"Xu Bing, Yan Song, Lirong Dai","doi":"10.1109/CHINSL.2008.ECP.95","DOIUrl":null,"url":null,"abstract":"Phonetic-based systems usually convert the input speech into token (i.e. word, phone etc.) sequence and determine the target language from the statistics of the token sequences on different languages. Generally, there are two kinds of statistical representation for token sequences, N-gram language model (PR-LM) and support vector machines (PR- SVM) to perform language classification. In this paper we focus on PR-SVM method. One problem of the PR-SVM is that the statistical representation based on utterance is sparse and inaccurate. To tackle this issue, the adaptation schemes in PR-SVM framework are proposed in this paper. There are two schemes to be used: 1) Adaptation from the Universal N-gram Language Model (UNLM) trained on all languages; 2) Adaptation from the Low-Order N-gram Language Model (LONLM). The experimental results on 2007 NIST LRE tasks show that our method achieves significant gains over the unadapted model.","PeriodicalId":291958,"journal":{"name":"2008 6th International Symposium on Chinese Spoken Language Processing","volume":"48 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2008 6th International Symposium on Chinese Spoken Language Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CHINSL.2008.ECP.95","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
Phonetic-based systems usually convert the input speech into a token (e.g., word or phone) sequence and determine the target language from the statistics of token sequences across different languages. Generally, two kinds of statistical representations of token sequences are used to perform language classification: the N-gram language model (PR-LM) and support vector machines (PR-SVM). In this paper we focus on the PR-SVM method. One problem with PR-SVM is that the per-utterance statistical representation is sparse and inaccurate. To tackle this issue, this paper proposes adaptation schemes within the PR-SVM framework. Two schemes are used: 1) adaptation from a Universal N-gram Language Model (UNLM) trained on all languages; 2) adaptation from a Low-Order N-gram Language Model (LONLM). Experimental results on the 2007 NIST LRE tasks show that our method achieves significant gains over the unadapted model.
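The abstract does not give the exact adaptation formula, so the following is only a minimal sketch of the general idea it describes: smoothing sparse per-utterance n-gram statistics toward a universal background model before they are used as SVM features. It assumes a MAP-style interpolation; the function name `map_adapt_bigram`, the parameter `tau`, and the formula itself are illustrative assumptions, not taken from the paper.

```python
from collections import Counter

def map_adapt_bigram(utt_tokens, unlm, tau=5.0):
    """MAP-style adaptation of sparse per-utterance bigram statistics
    toward a Universal N-gram Language Model (UNLM).

    `unlm` maps (prev, cur) bigram pairs to background probabilities
    P_UNLM(cur | prev); `tau` controls how strongly sparse utterance
    counts are pulled toward the universal model. This is an assumed
    formulation, not the paper's exact scheme.
    """
    # Raw bigram and history counts from the token sequence.
    bigram_counts = Counter(zip(utt_tokens, utt_tokens[1:]))
    history_counts = Counter(utt_tokens[:-1])

    adapted = {}
    for (prev, cur), count in bigram_counts.items():
        p_bg = unlm.get((prev, cur), 0.0)
        # Interpolate the raw relative frequency with the background
        # probability; observed counts dominate when the history is
        # well covered, the UNLM dominates when data is sparse.
        adapted[(prev, cur)] = (count + tau * p_bg) / (history_counts[prev] + tau)
    return adapted

# Toy usage with a hypothetical two-phone vocabulary:
unlm = {("a", "b"): 0.3, ("b", "a"): 0.5}
features = map_adapt_bigram(["a", "b", "a", "b"], unlm)
```

In a full PR-SVM pipeline, such adapted probabilities would be stacked over a fixed n-gram vocabulary into a fixed-length vector per utterance and fed to the SVM classifier.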