Title: Architecture Knowledge Distillation for Evolutionary Generative Adversarial Network
Authors: Yu Xue, Yan Lin, Ferrante Neri
Journal: International Journal of Neural Systems, pages 2550013
DOI: 10.1142/S0129065725500133
Published: 2025-04-01 (Epub 2025-02-19)
Citations: 0
Abstract
Generative Adversarial Networks (GANs) are effective for image generation, but their unstable training limits broader applications. Additionally, neural architecture search (NAS) for GANs with one-shot models often leads to insufficient subnet training, where subnets inherit weights from a supernet without proper optimization, further degrading performance. To address both issues, we propose Architecture Knowledge Distillation for Evolutionary GAN (AKD-EGAN). AKD-EGAN operates in two stages. First, architecture knowledge distillation (AKD) is used during supernet training to efficiently optimize subnetworks and accelerate learning. Second, a multi-objective evolutionary algorithm (MOEA) searches for optimal subnet architectures, ensuring efficiency by considering multiple performance metrics. This approach, combined with a strategy for architecture inheritance, enhances GAN stability and image quality. Experiments show that AKD-EGAN surpasses state-of-the-art methods, achieving a Fréchet Inception Distance (FID) of 7.91 and an Inception Score (IS) of 8.97 on CIFAR-10, along with competitive results on STL-10 (FID: 20.32, IS: 10.06). Code and models will be available at https://github.com/njit-ly/AKD-EGAN.
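The abstract describes a multi-objective search that ranks candidate subnet architectures by several performance metrics at once. As a minimal, illustrative sketch (not the paper's actual algorithm), the core of such a MOEA selection step is Pareto non-domination over two objectives, here FID (lower is better) and IS (higher is better); the candidate scores below are hypothetical.

```python
# Hedged sketch: Pareto selection over (FID, IS) pairs, as a MOEA might
# rank candidate subnet architectures. Scores are illustrative only.

def dominates(a, b):
    """True if candidate a Pareto-dominates b: a is no worse on both
    objectives (FID minimized, IS maximized) and strictly better on one."""
    fid_a, is_a = a
    fid_b, is_b = b
    no_worse = fid_a <= fid_b and is_a >= is_b
    strictly_better = fid_a < fid_b or is_a > is_b
    return no_worse and strictly_better

def pareto_front(candidates):
    """Return the non-dominated subset of (FID, IS) tuples, in input order."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

# Hypothetical (FID, IS) scores for four candidate subnets.
scores = [(7.91, 8.97), (9.50, 9.10), (8.20, 8.50), (12.0, 8.00)]
front = pareto_front(scores)  # → [(7.91, 8.97), (9.50, 9.10)]
```

A real implementation (e.g. NSGA-II) would additionally sort successive fronts and apply crowding-distance tie-breaking; this sketch only shows the dominance relation that defines "optimal" under multiple metrics.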