A Sample-based Criterion for Unsupervised Learning of Complex Models beyond Maximum Likelihood and Density Estimation

Authors: Mani Manavalan, Praveen Kumar Donepudi
Journal: ABC Journal of Advanced Research
DOI: 10.18034/abcjar.v5i2.581
Citations: 10
Abstract
Many unsupervised learning procedures can be framed as aligning two probability distributions. Recoding models such as ICA and projection pursuit, as well as generative models such as Gaussian mixtures and Boltzmann machines, can be viewed from this perspective. For these classes of models, we propose a new sample-based error measure that remains applicable even when maximum likelihood (ML) and probability-density-estimation-based formulations break down, for example when the posteriors are nonlinear or intractable. Moreover, because it operates directly on samples, our error measure avoids the difficulties of approximating a density function. We show that, with an unconstrained model, (1) our technique converges to the correct solution as the number of samples goes to infinity, and (2) in the generative framework, the solution our approach predicts is the ML solution. Finally, we evaluate our approach through simulations of linear and nonlinear models on Gaussian mixture and ICA problems. The experiments demonstrate the applicability and generality of our method.
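The abstract's central idea, comparing two distributions directly through their samples rather than through estimated densities, can be illustrated with a standard sample-based discrepancy. The sketch below uses a kernel-based measure (maximum mean discrepancy, MMD); this is a stand-in chosen for illustration, not the specific criterion proposed in the paper, and the bandwidth and sample sizes are arbitrary assumptions.

```python
import numpy as np

def rbf_mmd2(x, y, bandwidth=1.0):
    """Squared maximum mean discrepancy between sample sets x and y,
    using an RBF kernel (biased V-statistic, kept simple for illustration)."""
    def kernel(a, b):
        # Pairwise squared Euclidean distances, then the Gaussian kernel.
        d2 = (np.sum(a**2, axis=1)[:, None]
              + np.sum(b**2, axis=1)[None, :]
              - 2.0 * a @ b.T)
        return np.exp(-d2 / (2.0 * bandwidth**2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=(500, 2))        # samples from the "true" distribution
model_good = rng.normal(0.0, 1.0, size=(500, 2))  # model samples matching the data
model_bad = rng.normal(3.0, 1.0, size=(500, 2))   # model samples far from the data

# A well-matched model yields a discrepancy near zero; a mismatched one does not.
print(rbf_mmd2(data, model_good))
print(rbf_mmd2(data, model_bad))
```

No density is ever estimated here: the error is computed from the two sample sets alone, which is the property the abstract highlights for models with intractable posteriors.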