{"title":"Sequential classification of probabilistic independent feature vectors by mixture models","authors":"T. Walkowiak","doi":"10.1109/ISDA.2005.81","DOIUrl":null,"url":null,"abstract":"The paper presents methods of sequential classification with predefined classes. The classification is based on a sequence, assumed to be probabilistic independent, of feature vectors extracted from signal generated by the object. Each feature vector is a base for calculation of a probability density function for each predefined class. The density functions are estimated by the Gaussian mixture model (GMM) and the t-student mixture model. The model parameters are estimated by algorithms based on the expectation-maximization (EM) method. The estimated densities calculated for a sequence of feature vectors are inputs to analyzed classification rules. These rules are derived from Bayes decision theory with some heuristic modifications. The performance of the proposed rules was tested in an automatic, text independent, speaker identification task. Achieved results are presented.","PeriodicalId":345842,"journal":{"name":"5th International Conference on Intelligent Systems Design and Applications (ISDA'05)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"5th International Conference on Intelligent Systems Design and Applications (ISDA'05)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISDA.2005.81","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The paper presents methods of sequential classification with predefined classes. The classification is based on a sequence, assumed to be probabilistically independent, of feature vectors extracted from the signal generated by the object. Each feature vector serves as the basis for calculating a probability density value for each predefined class. The density functions are estimated by the Gaussian mixture model (GMM) and the Student's t mixture model, with the model parameters estimated by algorithms based on the expectation-maximization (EM) method. The estimated densities calculated for a sequence of feature vectors are the inputs to the analyzed classification rules. These rules are derived from Bayes decision theory with some heuristic modifications. The performance of the proposed rules was tested in an automatic, text-independent speaker identification task, and the achieved results are presented.
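The basic rule the abstract refers to can be illustrated with a short sketch. This is a minimal illustration, not the authors' implementation: it assumes per-class GMMs fitted with EM (here via scikit-learn's GaussianMixture) and uniform class priors, and classifies a sequence of probabilistically independent feature vectors by summing per-frame log-likelihoods. The class labels, data shapes, and number of mixture components are illustrative assumptions; the Student's t mixture variant and the heuristic modifications of the Bayes rule mentioned in the abstract are not shown.

```python
# Minimal sketch (assumptions noted above, not the paper's code): one GMM per class,
# fitted with EM, and a plain Bayes rule over an independent sequence of frames.
import numpy as np
from sklearn.mixture import GaussianMixture


def fit_class_models(train_data, n_components=8, seed=0):
    """Fit one GMM per class via the EM algorithm (sklearn's GaussianMixture)."""
    return {
        label: GaussianMixture(n_components=n_components, random_state=seed).fit(X)
        for label, X in train_data.items()
    }


def classify_sequence(models, sequence):
    """Bayes rule for independent frames: argmax_c sum_t log p(x_t | c),
    assuming uniform class priors."""
    scores = {label: gmm.score_samples(sequence).sum() for label, gmm in models.items()}
    return max(scores, key=scores.get)


# Illustrative usage with random data standing in for per-speaker feature vectors
# (e.g. cepstral frames in a text-independent speaker identification task).
rng = np.random.default_rng(0)
train_data = {
    "speaker_A": rng.normal(0.0, 1.0, size=(500, 12)),
    "speaker_B": rng.normal(0.5, 1.2, size=(500, 12)),
}
models = fit_class_models(train_data)
test_sequence = rng.normal(0.0, 1.0, size=(100, 12))
print(classify_sequence(models, test_sequence))
```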