Max-margin generative adversarial networks
Wanshun Gao, Zhonghao Wang
International Conference on Advanced Computational Intelligence (ICACI)
DOI: https://doi.org/10.1109/ICACI.2018.8377529
Generative Adversarial Networks (GANs) have recently received a lot of attention due to their promising performance in image generation, inpainting, and style transfer. However, GANs and their variants still face several challenges, including vanishing gradients, mode collapse, and unbalanced training between the generator and the discriminator, which limit further improvement and application of GANs. In this paper, we propose Max-Margin Generative Adversarial Networks (MMGANs) to address these challenges by substituting the sigmoid cross-entropy loss of GANs with a max-margin loss. We provide a theoretical guarantee regarding the merits of the max-margin loss for solving the above problems in GANs. Experiments on MNIST and CelebA show that MMGANs have three main advantages over regular GANs. First, MMGANs are robust to vanishing gradients and mode collapse. Second, MMGANs exhibit good stability and strong balance between generator and discriminator during training. Third, MMGANs can be easily extended to multi-class classification tasks.
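The core substitution the abstract describes, replacing the sigmoid cross-entropy discriminator loss with a margin-based one, can be sketched as below. This is a minimal illustration assuming a hinge-style max-margin formulation on raw discriminator logits; the paper's exact loss may differ in details such as the margin value or how it extends to multiple classes.

```python
import math

def sigmoid_ce_d_loss(d_real: float, d_fake: float) -> float:
    """Standard GAN discriminator loss on raw logits:
    -log(sigmoid(d_real)) - log(1 - sigmoid(d_fake))."""
    return (-math.log(1.0 / (1.0 + math.exp(-d_real)))
            - math.log(1.0 - 1.0 / (1.0 + math.exp(-d_fake))))

def max_margin_d_loss(d_real: float, d_fake: float, margin: float = 1.0) -> float:
    """Hinge-style max-margin discriminator loss (illustrative):
    penalize real logits below +margin and fake logits above -margin.
    The loss is zero once both samples clear the margin, so the
    discriminator stops pushing already-separated samples further,
    which is one way a margin loss can help balance training."""
    return max(0.0, margin - d_real) + max(0.0, margin + d_fake)

# With logits at the decision boundary, both losses are active:
print(round(sigmoid_ce_d_loss(0.0, 0.0), 4))   # 2*log(2) ≈ 1.3863
print(max_margin_d_loss(0.0, 0.0))             # 1.0 + 1.0 = 2.0
# Once the margin is satisfied, the max-margin loss vanishes exactly:
print(max_margin_d_loss(2.0, -2.0))            # 0.0
```

Note that the cross-entropy loss never reaches exactly zero and its gradient saturates for well-classified samples, whereas the margin loss is exactly zero (with zero gradient) beyond the margin; this contrast is the intuition behind the robustness claims in the abstract.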