Effect of regularity on learning in GANs
Niladri Shekhar Dutt, S. Patel
DOI: 10.1145/3483845.3483874
Proceedings of the 2021 2nd International Conference on Control, Robotics and Intelligent System, published 2021-08-20.

Abstract: Generative Adversarial Networks (GANs) are algorithmic architectures that pit two neural networks against each other (hence "adversarial") to produce new, synthetic instances of data that can pass for real data. GANs have been highly successful on datasets such as MNIST, SVHN, and CelebA, but training a GAN on a large-scale dataset such as ImageNet remains challenging because such datasets are considered less regular. In this paper, we perform empirical experiments using parameterized synthetic datasets to probe how the regularity of a dataset affects learning in GANs. We empirically show that regular datasets are easier for GANs to model because their training process is more stable.
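The adversarial setup the abstract describes — a generator and a discriminator trained against each other — can be sketched as a minimal 1-D GAN. This is an illustrative toy, not the paper's method: the "regular dataset" here is simply a Gaussian, the generator is an affine map of noise, and the discriminator is a logistic classifier; all names and hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# "Real" data: a very regular 1-D dataset, samples from N(3, 0.5).
def real_batch(n):
    return rng.normal(3.0, 0.5, size=n)

# Generator G(z) = w_g * z + b_g with z ~ N(0, 1); starts far from the data.
w_g, b_g = 0.1, 0.0
# Discriminator D(x) = sigmoid(w_d * x + b_d); starts at D(x) = 0.5 everywhere.
w_d, b_d = 0.0, 0.0

lr = 0.05
for step in range(2000):
    z = rng.normal(size=64)
    fake = w_g * z + b_g
    real = real_batch(64)

    # Discriminator: gradient ascent on  E[log D(real)] + E[log(1 - D(fake))].
    d_real, d_fake = sigmoid(w_d * real + b_d), sigmoid(w_d * fake + b_d)
    w_d += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b_d += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on the non-saturating loss  E[log D(fake)].
    d_fake = sigmoid(w_d * fake + b_d)
    w_g += lr * np.mean((1 - d_fake) * w_d * z)
    b_g += lr * np.mean((1 - d_fake) * w_d)

# After training, the generator's output mean (b_g) should have drifted
# toward the real mean of 3, since a unimodal Gaussian is easy to model.
print(b_g)
```

On an irregular target (e.g. a multi-modal mixture), this same loop would be far less stable — which is the kind of effect the paper probes with its parameterized synthetic datasets.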