Re-GAN: Data-Efficient GANs Training via Architectural Reconfiguration.

IF 20.8 · CAS Zone 1 (Computer Science) · JCR Q1: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Divya Saxena, Jiannong Cao, Jiahao Xu, Tarun Kulshrestha
{"title":"Re-GAN: Data-Efficient GANs Training via Architectural Reconfiguration.","authors":"Divya Saxena,Jiannong Cao,Jiahao Xu,Tarun Kulshrestha","doi":"10.1109/tpami.2025.3590650","DOIUrl":null,"url":null,"abstract":"The training of Generative Adversarial Networks (GANs) for high-fidelity images has predominantly relied on large-scale datasets. Emerging research, particularly on GANs 'lottery tickets', suggests that dense GANs models have sparse sub-networks capable of superior performance with limited data. However, the conventional process to uncover these 'lottery tickets' involves a resource-intensive train-prune-retrain cycle. Addressing this, our paper introduces Re-GAN, a novel, dataefficient approach for GANs training that dynamically reconfigures the GANs architecture during training. This method focuses on iterative pruning of non-important connections and regrowing them, thereby preventing premature loss of important features and maintaining the model's representational strength. Re-GAN provides a more stable and efficient solution for GANs models with limited data, offering an alternative to existing progressive growing methods and GANs tickets. While Re-GAN has already demonstrated its potential in image generation across diverse datasets, domains, and resolutions, in this paper, we significantly expand our study. We incorporate new applications, notably Image-to-Image translation, include additional datasets, provide in-depth analyses, and explore compatibility with data augmentation techniques. This expansion not only broadens the scope of Re-GAN but also establishes it as a generic training methodology, demonstrating its effectiveness and adaptability in different GANs scenarios. Code is available at https://github.com/IntellicentAI-lab/Re-GAN.","PeriodicalId":13426,"journal":{"name":"IEEE Transactions on Pattern Analysis and Machine Intelligence","volume":"672 1","pages":""},"PeriodicalIF":20.8000,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Pattern Analysis and Machine Intelligence","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/tpami.2025.3590650","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The training of Generative Adversarial Networks (GANs) for high-fidelity images has predominantly relied on large-scale datasets. Emerging research, particularly on GAN 'lottery tickets', suggests that dense GAN models contain sparse sub-networks capable of superior performance with limited data. However, the conventional process to uncover these 'lottery tickets' involves a resource-intensive train-prune-retrain cycle. Addressing this, our paper introduces Re-GAN, a novel, data-efficient approach to GAN training that dynamically reconfigures the GAN architecture during training. The method iteratively prunes unimportant connections and regrows them, thereby preventing premature loss of important features and maintaining the model's representational strength. Re-GAN provides a more stable and efficient solution for GAN models with limited data, offering an alternative to existing progressive growing methods and GAN tickets. While Re-GAN has already demonstrated its potential in image generation across diverse datasets, domains, and resolutions, in this paper we significantly expand our study. We incorporate new applications, notably image-to-image translation, include additional datasets, provide in-depth analyses, and explore compatibility with data augmentation techniques. This expansion not only broadens the scope of Re-GAN but also establishes it as a generic training methodology, demonstrating its effectiveness and adaptability in different GAN scenarios. Code is available at https://github.com/IntellicentAI-lab/Re-GAN.
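The prune-and-regrow cycle described in the abstract can be illustrated with a short sketch. The snippet below is not the authors' implementation (see the linked repository for that); it is a minimal PyTorch illustration, under assumed choices, of the general idea: a binary mask gates a layer's weights, low-magnitude connections are periodically disabled, and a fraction of the disabled connections is later re-enabled so that capacity pruned early in training can be recovered. The `MaskedLinear` module, the magnitude criterion, the random regrowth rule, and the 200-step schedule are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a prune-and-regrow cycle (illustrative, not the Re-GAN code).
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Linear layer whose weights are gated by a binary mask buffer."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.register_buffer("mask", torch.ones_like(self.linear.weight))

    def forward(self, x):
        # Pruned connections contribute nothing and receive no gradient.
        return nn.functional.linear(x, self.linear.weight * self.mask, self.linear.bias)

    def prune(self, sparsity):
        """Disable the lowest-magnitude connections until `sparsity` of them are off."""
        w = (self.linear.weight * self.mask).abs()
        k = int(sparsity * w.numel())
        if k > 0:
            threshold = w.flatten().kthvalue(k).values
            self.mask[:] = (w > threshold).float()

    def regrow(self, fraction):
        """Randomly re-enable a fraction of the currently pruned connections."""
        pruned = (self.mask == 0)
        n_regrow = int(fraction * pruned.sum().item())
        if n_regrow > 0:
            idx = pruned.flatten().nonzero(as_tuple=True)[0]
            chosen = idx[torch.randperm(idx.numel())[:n_regrow]]
            self.mask.view(-1)[chosen] = 1.0

# Toy usage: alternate prune/regrow phases inside an ordinary training loop.
layer = MaskedLinear(128, 256)
optimizer = torch.optim.Adam(layer.parameters(), lr=2e-4)
for step in range(1, 1001):
    x = torch.randn(16, 128)
    loss = layer(x).pow(2).mean()          # placeholder for the GAN loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 200 == 0:
        layer.prune(sparsity=0.3)          # drop low-magnitude connections
    elif step % 200 == 100:
        layer.regrow(fraction=0.5)         # restore half of the pruned ones
```

In the actual method the reconfiguration applies to the GAN's generator and discriminator during adversarial training; a toy reconstruction loss stands in here only to keep the example self-contained.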
Source journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
CiteScore: 28.40
Self-citation rate: 3.00%
Articles published per year: 885
Review time: 8.5 months
Journal description: The IEEE Transactions on Pattern Analysis and Machine Intelligence publishes articles on all traditional areas of computer vision and image understanding, all traditional areas of pattern analysis and recognition, and selected areas of machine intelligence, with a particular emphasis on machine learning for pattern analysis. Areas such as techniques for visual search, document and handwriting analysis, medical image analysis, video and image sequence analysis, content-based retrieval of image and video, face and gesture recognition, and relevant specialized hardware and/or software architectures are also covered.