{"title":"使用对抗网络的半监督学习","authors":"Ryosuke Tachibana, Takashi Matsubara, K. Uehara","doi":"10.1109/ICIS.2016.7550881","DOIUrl":null,"url":null,"abstract":"Semi-supervised learning is a topic of practical importance because of the difficulty of obtaining numerous labeled data. In this paper, we apply an extension of adversarial autoencoder to semi-supervised learning tasks. In attempt to separate style and content, we divide the latent representation of the autoencoder into two parts. We regularize the autoencoder by imposing a prior distribution on both parts to make them independent. As a result, one of the latent representations is associated with content, which is useful to classify the images. We demonstrate that our method disentangles style and content of the input images and achieves less test error rate than vanilla autoencoder on MNIST semi-supervised classification tasks.","PeriodicalId":336322,"journal":{"name":"2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Semi-Supervised learning using adversarial networks\",\"authors\":\"Ryosuke Tachibana, Takashi Matsubara, K. Uehara\",\"doi\":\"10.1109/ICIS.2016.7550881\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Semi-supervised learning is a topic of practical importance because of the difficulty of obtaining numerous labeled data. In this paper, we apply an extension of adversarial autoencoder to semi-supervised learning tasks. In attempt to separate style and content, we divide the latent representation of the autoencoder into two parts. We regularize the autoencoder by imposing a prior distribution on both parts to make them independent. As a result, one of the latent representations is associated with content, which is useful to classify the images. We demonstrate that our method disentangles style and content of the input images and achieves less test error rate than vanilla autoencoder on MNIST semi-supervised classification tasks.\",\"PeriodicalId\":336322,\"journal\":{\"name\":\"2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS)\",\"volume\":\"63 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-06-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICIS.2016.7550881\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIS.2016.7550881","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Semi-Supervised Learning Using Adversarial Networks
Semi-supervised learning is a topic of practical importance because obtaining large amounts of labeled data is difficult. In this paper, we apply an extension of the adversarial autoencoder to semi-supervised learning tasks. In an attempt to separate style and content, we divide the latent representation of the autoencoder into two parts. We regularize the autoencoder by imposing a prior distribution on both parts to make them independent. As a result, one of the latent representations is associated with content, which is useful for classifying the images. We demonstrate that our method disentangles the style and content of the input images and achieves a lower test error rate than a vanilla autoencoder on MNIST semi-supervised classification tasks.
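To make the idea concrete, below is a minimal PyTorch sketch of an adversarial autoencoder with a split latent code, in the spirit of what the abstract describes. It is not the authors' implementation; the network sizes, the one-hot categorical prior on the content code, the Gaussian prior on the style code, and all names (Encoder, Decoder, Discriminator, adversarial_step) are assumptions chosen for illustration on MNIST-sized inputs.

```python
# A minimal sketch (assumed, not the authors' code) of an adversarial autoencoder
# whose latent code is split into a "content" part y and a "style" part z.
# Separate discriminators push the encoder's y toward a one-hot categorical prior
# and z toward a Gaussian prior, regularizing the two parts to be independent.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_STYLE, NUM_CLASSES = 10, 10  # assumed sizes for MNIST

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(784, 512), nn.ReLU(),
                                  nn.Linear(512, 512), nn.ReLU())
        self.to_y = nn.Linear(512, NUM_CLASSES)   # content head (class-like code)
        self.to_z = nn.Linear(512, LATENT_STYLE)  # style head

    def forward(self, x):
        h = self.body(x)
        y = F.softmax(self.to_y(h), dim=1)  # content code, matched to one-hot prior
        z = self.to_z(h)                    # style code, matched to Gaussian prior
        return y, z

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(NUM_CLASSES + LATENT_STYLE, 512), nn.ReLU(),
                                 nn.Linear(512, 784), nn.Sigmoid())

    def forward(self, y, z):
        # Reconstruct the image from both parts of the latent code.
        return self.net(torch.cat([y, z], dim=1))

class Discriminator(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, code):
        return self.net(code)  # logit: real (sampled from prior) vs. fake (from encoder)

def adversarial_step(x, enc, dec, d_y, d_z):
    """One unsupervised update: reconstruction loss plus adversarial regularization."""
    y, z = enc(x)
    recon_loss = F.binary_cross_entropy(dec(y, z), x)

    # Samples from the priors: one-hot categorical for content, standard normal for style.
    y_prior = F.one_hot(torch.randint(NUM_CLASSES, (x.size(0),)), NUM_CLASSES).float()
    z_prior = torch.randn(x.size(0), LATENT_STYLE)
    ones = torch.ones(x.size(0), 1)
    zeros = torch.zeros(x.size(0), 1)

    # Discriminator losses: prior samples are "real", encoder outputs are "fake".
    d_loss = (F.binary_cross_entropy_with_logits(d_y(y_prior), ones) +
              F.binary_cross_entropy_with_logits(d_y(y.detach()), zeros) +
              F.binary_cross_entropy_with_logits(d_z(z_prior), ones) +
              F.binary_cross_entropy_with_logits(d_z(z.detach()), zeros))

    # Encoder (generator) loss: fool both discriminators.
    g_loss = (F.binary_cross_entropy_with_logits(d_y(y), ones) +
              F.binary_cross_entropy_with_logits(d_z(z), ones))
    return recon_loss, d_loss, g_loss
```

For the semi-supervised part, a labeled minibatch would additionally apply a cross-entropy loss between the content code y and the true label, so that y becomes a usable classifier while z absorbs the remaining (style) variation. The exact training schedule and loss weighting here are assumptions, not details taken from the paper.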