{"title":"基于双向反向传播的生成对抗网络训练","authors":"Olaoluwa Adigun, B. Kosko","doi":"10.1109/ICMLA.2018.00190","DOIUrl":null,"url":null,"abstract":"Training generative adversarial networks with the new bidirectional backpropagation algorithm improved performance compared with ordinary unidirectional backpropagation. Bidirectional backpropagation trains a multilayer neural network in the backward direction as well as in the forward direction over the same weights and neurons. The result approximates a set-level inverse mapping that tends to improve the learning of the forward classification mapping. We compared bidirectional backpropagation training of the discriminator with unidirectional training for the standard vanilla GAN on MNIST data and a deep convolutional GAN on CIFAR-10 image data. We also compared B-BP and unidirectional training for a Wasserstein GAN on both MNIST and CIFAR-10 data. Bidirectional training substantially improved the inception score of the vanilla GAN's generated digit images for MNIST data. It increased the vanilla GAN's inception score by 22.3% and greatly reduced the GAN's incidence of mode collapse. Bidirectional training improved the inception score of the deep-convolutional GAN's generated samples by 3.3% on the CIFAR-10 data set. Bidirectional training also increased the Wasserstein GAN's inception score by 4.4% on the MNIST data and by 10.0% on the CIFAR-10 image data.","PeriodicalId":6533,"journal":{"name":"2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"10 1","pages":"1178-1185"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Training Generative Adversarial Networks with Bidirectional Backpropagation\",\"authors\":\"Olaoluwa Adigun, B. Kosko\",\"doi\":\"10.1109/ICMLA.2018.00190\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Training generative adversarial networks with the new bidirectional backpropagation algorithm improved performance compared with ordinary unidirectional backpropagation. Bidirectional backpropagation trains a multilayer neural network in the backward direction as well as in the forward direction over the same weights and neurons. The result approximates a set-level inverse mapping that tends to improve the learning of the forward classification mapping. We compared bidirectional backpropagation training of the discriminator with unidirectional training for the standard vanilla GAN on MNIST data and a deep convolutional GAN on CIFAR-10 image data. We also compared B-BP and unidirectional training for a Wasserstein GAN on both MNIST and CIFAR-10 data. Bidirectional training substantially improved the inception score of the vanilla GAN's generated digit images for MNIST data. It increased the vanilla GAN's inception score by 22.3% and greatly reduced the GAN's incidence of mode collapse. Bidirectional training improved the inception score of the deep-convolutional GAN's generated samples by 3.3% on the CIFAR-10 data set. 
Bidirectional training also increased the Wasserstein GAN's inception score by 4.4% on the MNIST data and by 10.0% on the CIFAR-10 image data.\",\"PeriodicalId\":6533,\"journal\":{\"name\":\"2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA)\",\"volume\":\"10 1\",\"pages\":\"1178-1185\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICMLA.2018.00190\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMLA.2018.00190","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Training Generative Adversarial Networks with Bidirectional Backpropagation
Training generative adversarial networks (GANs) with the new bidirectional backpropagation (B-BP) algorithm improved performance compared with ordinary unidirectional backpropagation. B-BP trains a multilayer neural network in the backward direction as well as in the forward direction over the same weights and neurons. The result approximates a set-level inverse mapping that tends to improve the learning of the forward classification mapping. We compared bidirectional with unidirectional training of the discriminator for a vanilla GAN on MNIST data and for a deep convolutional GAN on CIFAR-10 image data. We also compared B-BP with unidirectional training for a Wasserstein GAN on both MNIST and CIFAR-10 data. Bidirectional training substantially improved the inception score of the vanilla GAN's generated digit images on the MNIST data: it increased the score by 22.3% and greatly reduced the incidence of mode collapse. Bidirectional training improved the inception score of the deep convolutional GAN's generated samples by 3.3% on the CIFAR-10 data set. It also increased the Wasserstein GAN's inception score by 4.4% on the MNIST data and by 10.0% on the CIFAR-10 image data.
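To make the bidirectional idea concrete, here is a minimal PyTorch sketch of a B-BP update for a two-layer discriminator: the forward pass maps an image to a real/fake logit, and the backward pass maps the label back toward the image through the same weights transposed, so both losses send gradients through one shared set of weights. The layer sizes, the MSE backward loss, the 0.5 loss weighting, and names such as bbp_step and backward_map are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of bidirectional backpropagation (B-BP) for a GAN discriminator.
# Assumed setup: MNIST-sized inputs flattened to 784 dims, two dense layers.
import torch
import torch.nn.functional as F

W1 = torch.randn(784, 256, requires_grad=True)  # shared weights, layer 1
W2 = torch.randn(256, 1, requires_grad=True)    # shared weights, layer 2

def forward(x):
    """Forward direction: image -> real/fake logit."""
    h = torch.relu(x @ W1)
    return h @ W2

def backward_map(y):
    """Backward direction: label -> image estimate, same weights transposed."""
    h = torch.relu(y @ W2.t())
    return torch.sigmoid(h @ W1.t())

opt = torch.optim.Adam([W1, W2], lr=1e-4)

def bbp_step(x_real, x_fake):
    """One bidirectional update on a batch of real and generated images."""
    x = torch.cat([x_real, x_fake])
    y = torch.cat([torch.ones(len(x_real), 1), torch.zeros(len(x_fake), 1)])
    fwd_loss = F.binary_cross_entropy_with_logits(forward(x), y)
    # Backward-pass error: how well the label maps back to the input.
    bwd_loss = F.mse_loss(backward_map(y), x)
    loss = fwd_loss + 0.5 * bwd_loss  # joint bidirectional objective (weight assumed)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The design point is that the two directions are not separate networks: because W1 and W2 appear in both forward and backward_map, minimizing the joint loss shapes the same weights to approximate the inverse mapping as well as the forward classification, which is the effect the abstract credits for the improved inception scores.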