Title: Generative adversarial networks: A comprehensive survey
Authors: Abdullah Al-Yaari, Youjun Deng
Journal: Systems and Soft Computing, Volume 8, Article 200460
DOI: 10.1016/j.sasc.2026.200460
Publication date: 2026-06-01 (Epub 2026-02-28)
URL: https://www.sciencedirect.com/science/article/pii/S2772941926000244
Abstract
Generative adversarial networks have become a central framework for learning implicit generative models and producing high-fidelity synthetic data, yet their training dynamics remain fragile and their design space has expanded rapidly. This survey provides a focused, method-oriented synthesis of the field, organizing key advances by architectural families, objective functions, regularization, optimization, stabilization techniques, and evaluation practice. We summarize representative models from the early formulation to recent large-scale and transformer-based variants, highlight how design choices influence stability, fidelity, diversity, and computational cost, and connect methodological developments to major application areas. We also discuss current limitations and open research directions, including data efficiency, reproducibility, safety and misuse risks, and the emerging interaction between adversarial learning and other modern generative paradigms.