Parametric Dimension Reduction by Preserving Local Structure

Chien-Hsun Lai, M. Kuo, Yun-Hsuan Lien, Kuan-An Su, Yu-Shuen Wang
2022 IEEE Visualization and Visual Analytics (VIS), October 2022. DOI: 10.1109/VIS54862.2022.00024
Citations: 3

Abstract

We extend a well-known dimension reduction method, t-distributed stochastic neighbor embedding (t-SNE), from non-parametric to parametric by training neural networks. The main advantage of a parametric technique is its ability to generalize to new data, which is beneficial for streaming data visualization. While previous parametric methods either require network pre-training by a restricted Boltzmann machine or intermediate results obtained from traditional non-parametric t-SNE, we found that recent network training techniques enable direct optimization of the t-SNE objective function. Accordingly, our method achieves high embedding quality while enjoying generalization. Thanks to mini-batch network training, our parametric dimension reduction method is highly efficient. For evaluation, we compared our method to several baselines on a variety of datasets. Experiment results demonstrate the feasibility of our method. The source code is available at https://github.com/a07458666/parametric_dr.
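The objective the abstract refers to can be sketched concretely. Below is a minimal NumPy illustration of the t-SNE loss over one mini-batch: Gaussian affinities `P` in the high-dimensional space, Student-t affinities `Q` in the embedding, and the KL divergence between them. This is only a sketch under simplifying assumptions, not the authors' implementation: the fixed bandwidth `sigma` stands in for the per-point perplexity calibration real t-SNE performs, and the network that would produce `Y` from `X` is omitted.

```python
import numpy as np

def pairwise_sq_dists(X):
    """Squared Euclidean distances between all rows of X."""
    s = np.sum(X ** 2, axis=1)
    d = s[:, None] + s[None, :] - 2.0 * X @ X.T
    np.fill_diagonal(d, 0.0)
    return np.maximum(d, 0.0)  # clip tiny negatives from round-off

def p_matrix(X, sigma=1.0):
    """Gaussian affinities in the input space, symmetrized and normalized.
    NOTE: a single fixed sigma is an assumption; t-SNE tunes a per-point
    bandwidth via binary search to match a target perplexity."""
    p = np.exp(-pairwise_sq_dists(X) / (2.0 * sigma ** 2))
    np.fill_diagonal(p, 0.0)
    p = (p + p.T) / 2.0
    return p / p.sum()

def q_matrix(Y):
    """Student-t (1 degree of freedom) affinities in the embedding space."""
    q = 1.0 / (1.0 + pairwise_sq_dists(Y))
    np.fill_diagonal(q, 0.0)
    return q / q.sum()

def tsne_kl(X, Y, sigma=1.0, eps=1e-12):
    """KL(P || Q): the quantity a parametric encoder Y = f(X) would
    minimize per mini-batch, here evaluated for given X and Y."""
    P, Q = p_matrix(X, sigma), q_matrix(Y)
    return float(np.sum(P * np.log((P + eps) / (Q + eps))))
```

In the parametric setting, `Y` would be the output of a neural network applied to the batch `X`, and this loss would be back-propagated through the network; evaluating it per mini-batch rather than over the full dataset is what makes the training efficient.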