{"title":"Probabilistic nonlinear dimensionality reduction through gaussian process latent variable models: An overview","authors":"Matteo Bodini","doi":"10.1201/9780429340710-10","DOIUrl":null,"url":null,"abstract":"From an algorithmic complexity point of view, machine learning methods scale and generalize better when using a few key features: using lots is computationally expensive, and overfitting can occur. High dimensional data is often counterintuitive to perceive and process, but unfortunately it is common for observed data to be in a representation of greater dimensionality than it requires. This gives rise to the notion of dimensionality reduction, a sub-field of machine learning that is motivated to find a descriptive low-dimensional representation of data. In this review it is explored a way to perform dimensionality reduction, provided by a class of Latent Variable Models (LVMs). In particular, the aim is to establish the technical foundations required for understanding the Gaussian Process Latent Variable Model (GP-LVM), a probabilistic nonlinear dimensionality reduction model. The review is organized as follows: after an introduction to the problem of dimensionality reduction and LVMs, Principal Component Analysis (PCA) is recalled and it is reviewed its probabilistic equivalent that contributes to the derivation of GP-LVM. Then, GP-LVM is introduced, and briefly a remarkable extension of the latter, the Bayesian Gaussian Process Latent Variable Model (BGP-LVM) is described. Eventually, and the main advantages of using GP-LVM are summarized.","PeriodicalId":231525,"journal":{"name":"Computer-Aided Developments: Electronics and Communication","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2019-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer-Aided Developments: Electronics and Communication","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1201/9780429340710-10","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
From an algorithmic complexity point of view, machine learning methods scale and generalize better when they rely on a few key features: using many features is computationally expensive and can lead to overfitting. High-dimensional data is often counterintuitive to perceive and process, yet observed data is commonly represented in more dimensions than it actually requires. This gives rise to the notion of dimensionality reduction, a sub-field of machine learning that seeks a descriptive low-dimensional representation of the data. This review explores an approach to dimensionality reduction provided by a class of Latent Variable Models (LVMs). In particular, the aim is to establish the technical foundations required for understanding the Gaussian Process Latent Variable Model (GP-LVM), a probabilistic nonlinear dimensionality reduction model. The review is organized as follows: after an introduction to the problem of dimensionality reduction and LVMs, Principal Component Analysis (PCA) is recalled, along with its probabilistic equivalent, which contributes to the derivation of GP-LVM. GP-LVM is then introduced, followed by a brief description of a remarkable extension, the Bayesian Gaussian Process Latent Variable Model (BGP-LVM). Finally, the main advantages of using GP-LVM are summarized.
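To make the two ideas mentioned in the abstract concrete, the following is a minimal sketch (not taken from the reviewed chapter) contrasting PCA, a linear dimensionality reduction method, with a toy GP-LVM fitted by maximizing the standard GP marginal log-likelihood with respect to the latent coordinates. The synthetic data, the fixed RBF kernel hyperparameters, and the choice of L-BFGS-B with numerical gradients are illustrative assumptions, not the chapter's setup.

```python
# Illustrative sketch: PCA vs. a toy GP-LVM on synthetic data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic data: N points in D=5 dimensions generated from a Q=2 latent space.
N, D, Q = 40, 5, 2
X_true = rng.normal(size=(N, Q))
W = rng.normal(size=(Q, D))
Y = np.tanh(X_true @ W) + 0.05 * rng.normal(size=(N, D))
Y = Y - Y.mean(axis=0)  # centre the data, as both PCA and GP-LVM assume

# --- Linear baseline: PCA via the eigendecomposition of the covariance ------
cov = Y.T @ Y / N
eigval, eigvec = np.linalg.eigh(cov)   # eigenvalues in ascending order
X_pca = Y @ eigvec[:, -Q:]             # project onto the top-Q principal axes

# --- Toy GP-LVM: maximise the GP marginal likelihood w.r.t. the latents -----
def rbf_kernel(X, lengthscale=1.0, variance=1.0, noise=1e-2):
    """RBF covariance over latent points plus observation noise (fixed hyperparameters)."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    return variance * np.exp(-0.5 * sq / lengthscale**2) + noise * np.eye(len(X))

def negative_log_likelihood(x_flat):
    X = x_flat.reshape(N, Q)
    K = rbf_kernel(X)
    _, logdet = np.linalg.slogdet(K)
    Kinv_Y = np.linalg.solve(K, Y)
    # -log p(Y|X) up to an additive constant: 0.5*D*log|K| + 0.5*tr(K^{-1} Y Y^T)
    return 0.5 * D * logdet + 0.5 * np.sum(Y * Kinv_Y)

# Initialise the latents with PCA (a common choice) and optimise numerically.
res = minimize(negative_log_likelihood, X_pca.ravel(), method="L-BFGS-B")
X_gplvm = res.x.reshape(N, Q)
print("negative log-likelihood at optimum:", res.fun)
```

In this sketch PCA supplies the initialization, while the GP-LVM refines the latent coordinates nonlinearly through the kernel; practical implementations also optimize the kernel hyperparameters and, in the Bayesian variant, integrate over the latents rather than optimizing them.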