Probabilistic nonlinear dimensionality reduction through Gaussian process latent variable models: An overview

Matteo Bodini
{"title":"通过高斯过程潜在变量模型的概率非线性降维:概述","authors":"Matteo Bodini","doi":"10.1201/9780429340710-10","DOIUrl":null,"url":null,"abstract":"From an algorithmic complexity point of view, machine learning methods scale and generalize better when using a few key features: using lots is computationally expensive, and overfitting can occur. High dimensional data is often counterintuitive to perceive and process, but unfortunately it is common for observed data to be in a representation of greater dimensionality than it requires. This gives rise to the notion of dimensionality reduction, a sub-field of machine learning that is motivated to find a descriptive low-dimensional representation of data. In this review it is explored a way to perform dimensionality reduction, provided by a class of Latent Variable Models (LVMs). In particular, the aim is to establish the technical foundations required for understanding the Gaussian Process Latent Variable Model (GP-LVM), a probabilistic nonlinear dimensionality reduction model. The review is organized as follows: after an introduction to the problem of dimensionality reduction and LVMs, Principal Component Analysis (PCA) is recalled and it is reviewed its probabilistic equivalent that contributes to the derivation of GP-LVM. Then, GP-LVM is introduced, and briefly a remarkable extension of the latter, the Bayesian Gaussian Process Latent Variable Model (BGP-LVM) is described. Eventually, and the main advantages of using GP-LVM are summarized.","PeriodicalId":231525,"journal":{"name":"Computer-Aided Developments: Electronics and Communication","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2019-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Probabilistic nonlinear dimensionality reduction through gaussian process latent variable models: An overview\",\"authors\":\"Matteo Bodini\",\"doi\":\"10.1201/9780429340710-10\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"From an algorithmic complexity point of view, machine learning methods scale and generalize better when using a few key features: using lots is computationally expensive, and overfitting can occur. High dimensional data is often counterintuitive to perceive and process, but unfortunately it is common for observed data to be in a representation of greater dimensionality than it requires. This gives rise to the notion of dimensionality reduction, a sub-field of machine learning that is motivated to find a descriptive low-dimensional representation of data. In this review it is explored a way to perform dimensionality reduction, provided by a class of Latent Variable Models (LVMs). In particular, the aim is to establish the technical foundations required for understanding the Gaussian Process Latent Variable Model (GP-LVM), a probabilistic nonlinear dimensionality reduction model. The review is organized as follows: after an introduction to the problem of dimensionality reduction and LVMs, Principal Component Analysis (PCA) is recalled and it is reviewed its probabilistic equivalent that contributes to the derivation of GP-LVM. Then, GP-LVM is introduced, and briefly a remarkable extension of the latter, the Bayesian Gaussian Process Latent Variable Model (BGP-LVM) is described. 
Eventually, and the main advantages of using GP-LVM are summarized.\",\"PeriodicalId\":231525,\"journal\":{\"name\":\"Computer-Aided Developments: Electronics and Communication\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-09-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer-Aided Developments: Electronics and Communication\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1201/9780429340710-10\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer-Aided Developments: Electronics and Communication","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1201/9780429340710-10","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

From an algorithmic complexity point of view, machine learning methods scale and generalize better when they use a few key features: using many is computationally expensive, and overfitting can occur. High-dimensional data is often counterintuitive to perceive and process, yet observed data commonly arrives in a representation of greater dimensionality than it requires. This motivates dimensionality reduction, a sub-field of machine learning concerned with finding a descriptive low-dimensional representation of data. This review explores one way to perform dimensionality reduction, provided by a class of Latent Variable Models (LVMs). In particular, the aim is to establish the technical foundations required for understanding the Gaussian Process Latent Variable Model (GP-LVM), a probabilistic nonlinear dimensionality reduction model. The review is organized as follows: after an introduction to the problem of dimensionality reduction and to LVMs, Principal Component Analysis (PCA) is recalled, together with its probabilistic equivalent, which contributes to the derivation of the GP-LVM. The GP-LVM is then introduced, followed by a brief description of a remarkable extension, the Bayesian Gaussian Process Latent Variable Model (BGP-LVM). Finally, the main advantages of using the GP-LVM are summarized.
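As a concrete companion to the outline above, the following is a minimal sketch, not taken from the paper, of the two stepping stones the review names: classical PCA via the SVD, and the closed-form maximum-likelihood solution of probabilistic PCA due to Tipping and Bishop (1999), whose probabilistic reading leads toward the GP-LVM. All function names and the toy data are illustrative.

```python
import numpy as np

def pca(Y, q):
    """Classical PCA: project N x D data Y onto its top-q principal axes."""
    Yc = Y - Y.mean(axis=0)                     # center the data
    U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
    W = Vt[:q].T                                # D x q principal directions
    return Yc @ W, W                            # N x q latent coordinates

def ppca_ml(Y, q):
    """Probabilistic PCA maximum-likelihood solution (Tipping & Bishop, 1999):
    sigma2_ML is the mean discarded eigenvalue of the sample covariance,
    W_ML = U_q (Lambda_q - sigma2 I)^{1/2} (rotation R taken as identity)."""
    Yc = Y - Y.mean(axis=0)
    N, D = Yc.shape
    S = Yc.T @ Yc / N                           # sample covariance, D x D
    lam, U = np.linalg.eigh(S)                  # eigenvalues in ascending order
    lam, U = lam[::-1], U[:, ::-1]              # sort descending
    sigma2 = lam[q:].mean()                     # discarded variance -> noise
    W = U[:, :q] @ np.diag(np.sqrt(lam[:q] - sigma2))
    return W, sigma2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: a 2-D latent signal embedded linearly in 5-D plus noise.
    X_true = rng.normal(size=(200, 2))
    A = rng.normal(size=(2, 5))
    Y = X_true @ A + 0.1 * rng.normal(size=(200, 5))
    Z, W = pca(Y, q=2)
    W_ml, sigma2 = ppca_ml(Y, q=2)
    print(Z.shape, W_ml.shape, round(sigma2, 4))
```

Similarly, a hedged sketch of the GP-LVM objective itself: the latent coordinates X are treated as parameters and optimized to maximize the GP marginal likelihood of the observed Y, log p(Y|X) = -(D/2) log|K| - (1/2) tr(K^{-1} Y Y^T) + const, with an RBF kernel on X. Function names and hyperparameter values are assumptions for illustration; a practical implementation would use analytic gradients and, for the Bayesian variant, sparse variational approximations.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X, lengthscale=1.0, variance=1.0, noise=1e-2):
    """RBF (squared-exponential) kernel matrix with a small noise term."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2) + noise * np.eye(len(X))

def neg_log_marginal(x_flat, Y, q):
    """Negative GP-LVM log marginal likelihood, up to an additive constant:
    (D/2) log|K| + (1/2) tr(K^{-1} Y Y^T), with K built from the latent X."""
    N, D = Y.shape
    X = x_flat.reshape(N, q)
    K = rbf_kernel(X)
    L = np.linalg.cholesky(K)                   # stable log-determinant
    logdet = 2.0 * np.log(np.diag(L)).sum()
    alpha = np.linalg.solve(K, Y)               # K^{-1} Y
    return 0.5 * D * logdet + 0.5 * np.trace(Y.T @ alpha)

def gplvm_fit(Y, q=2, seed=0):
    """Optimize the latent coordinates X by L-BFGS (numerical gradients)."""
    N, _ = Y.shape
    rng = np.random.default_rng(seed)
    x0 = 0.1 * rng.normal(size=N * q)           # random latent initialization
    res = minimize(neg_log_marginal, x0, args=(Y, q), method="L-BFGS-B")
    return res.x.reshape(N, q)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    Y = rng.normal(size=(30, 5))                # tiny toy data set
    X = gplvm_fit(Y - Y.mean(axis=0), q=2)
    print(X.shape)                              # (30, 2) latent embedding
```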
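In practice, the latent X in such a sketch would usually be initialized with PCA rather than randomly, which speeds convergence and reflects the connection the review draws: with a linear kernel, maximizing this objective recovers probabilistic PCA as a special case.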