Semi-Supervised Learning Using Adversarial Networks

Ryosuke Tachibana, Takashi Matsubara, K. Uehara
{"title":"Semi-Supervised learning using adversarial networks","authors":"Ryosuke Tachibana, Takashi Matsubara, K. Uehara","doi":"10.1109/ICIS.2016.7550881","DOIUrl":null,"url":null,"abstract":"Semi-supervised learning is a topic of practical importance because of the difficulty of obtaining numerous labeled data. In this paper, we apply an extension of adversarial autoencoder to semi-supervised learning tasks. In attempt to separate style and content, we divide the latent representation of the autoencoder into two parts. We regularize the autoencoder by imposing a prior distribution on both parts to make them independent. As a result, one of the latent representations is associated with content, which is useful to classify the images. We demonstrate that our method disentangles style and content of the input images and achieves less test error rate than vanilla autoencoder on MNIST semi-supervised classification tasks.","PeriodicalId":336322,"journal":{"name":"2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIS.2016.7550881","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

Semi-supervised learning is a topic of practical importance because of the difficulty of obtaining large amounts of labeled data. In this paper, we apply an extension of the adversarial autoencoder to semi-supervised learning tasks. In an attempt to separate style and content, we divide the latent representation of the autoencoder into two parts. We regularize the autoencoder by imposing a prior distribution on both parts to make them independent. As a result, one of the latent representations is associated with content, which is useful for classifying the images. We demonstrate that our method disentangles the style and content of the input images and achieves a lower test error rate than a vanilla autoencoder on MNIST semi-supervised classification tasks.
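The abstract describes an adversarial autoencoder whose latent code is split into a continuous "style" part and a categorical "content" part, each pushed toward its own prior by a discriminator so that the two parts become independent. The PyTorch sketch below illustrates that general training scheme; it is not the authors' implementation, and the layer sizes, prior choices, optimizer grouping, and the helper `adversarial_step` are assumptions made for illustration.

```python
# Minimal sketch of an adversarial autoencoder with a split latent code,
# assuming MNIST-sized inputs (784 pixels). All sizes and hyperparameters
# are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_STYLE = 10   # assumed size of the continuous "style" code
NUM_CLASSES = 10    # MNIST digit classes for the categorical "content" code

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(784, 512), nn.ReLU(),
                                  nn.Linear(512, 512), nn.ReLU())
        self.to_style = nn.Linear(512, LATENT_STYLE)    # continuous part
        self.to_content = nn.Linear(512, NUM_CLASSES)   # categorical part

    def forward(self, x):
        h = self.body(x)
        z_style = self.to_style(h)
        y_content = F.softmax(self.to_content(h), dim=1)
        return z_style, y_content

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(LATENT_STYLE + NUM_CLASSES, 512),
                                  nn.ReLU(), nn.Linear(512, 784), nn.Sigmoid())

    def forward(self, z_style, y_content):
        return self.body(torch.cat([z_style, y_content], dim=1))

class Discriminator(nn.Module):
    """Judges whether a latent code was drawn from the prior or produced by the encoder."""
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                  nn.Linear(256, 1))

    def forward(self, code):
        return self.body(code)

def adversarial_step(x, enc, dec, d_style, d_content, opt_ae, opt_d, opt_gen):
    """One training step: reconstruction, then adversarial regularization that
    pushes the style code toward a Gaussian prior and the content code toward
    a categorical prior, encouraging the two parts to be independent."""
    bce = F.binary_cross_entropy_with_logits
    n = x.size(0)

    # 1) Reconstruction phase: update encoder and decoder.
    z, y = enc(x)
    opt_ae.zero_grad()
    F.mse_loss(dec(z, y), x).backward()
    opt_ae.step()

    # 2) Discriminator phase: prior samples are "real", encoder outputs are "fake".
    z, y = enc(x)
    prior_z = torch.randn_like(z)
    prior_y = F.one_hot(torch.randint(0, NUM_CLASSES, (n,)), NUM_CLASSES).float()
    opt_d.zero_grad()
    d_loss = (bce(d_style(prior_z), torch.ones(n, 1)) +
              bce(d_style(z.detach()), torch.zeros(n, 1)) +
              bce(d_content(prior_y), torch.ones(n, 1)) +
              bce(d_content(y.detach()), torch.zeros(n, 1)))
    d_loss.backward()
    opt_d.step()

    # 3) Generator phase: the encoder tries to fool both discriminators.
    z, y = enc(x)
    opt_gen.zero_grad()
    g_loss = (bce(d_style(z), torch.ones(n, 1)) +
              bce(d_content(y), torch.ones(n, 1)))
    g_loss.backward()
    opt_gen.step()
```

In this sketch, `opt_ae` would typically span the encoder and decoder parameters, `opt_d` the two discriminators, and `opt_gen` the encoder alone. In the usual adversarial autoencoder recipe for semi-supervised classification, a cross-entropy loss on the content code is additionally applied to the small labeled subset, while unlabeled data pass only through the reconstruction and adversarial phases.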