Generative Adversarial Networks (GANs) Video Framework: A Systematic Literature Review

Muhammad Hamza, S. Bazai, Muhammad Imran Ghafoor, Shafi Ullah, Saira Akram, Muhammad Shahzeb Khan

2023 International Conference on Energy, Power, Environment, Control, and Computing (ICEPECC), March 8, 2023. DOI: 10.1109/ICEPECC57281.2023.10209475
The content creation industry is growing rapidly across fields such as entertainment, education, and social media. In recent years there has been an increasing trend toward generating content with AI algorithms. Generative Adversarial Networks (GANs) are a powerful method for generating realistic samples to meet the growing demand for data. Many GAN variants have been proposed and are covered in multiple review papers. This paper presents a systematic literature review of GAN video-generation models. First, the models are categorized into general GANs, image GANs, video GANs, and unconditional and conditional GANs. Next, the paper summarizes the improvements made in GANs for image synthesis and identifies areas of video synthesis that remain underexplored. A comprehensive systematic review of video GANs is then presented, grouping them into unconditional and conditional models. The datasets used in video generation are also discussed. The conditional models are further examined in sections organized by conditioning input: images, audio, and video. Lastly, the paper concludes with a discussion of the limitations of GANs and the future work needed in this area.
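The adversarial training scheme underlying all of the reviewed models pits a generator, which maps noise to samples, against a discriminator, which scores samples as real or generated. The sketch below is a minimal toy illustration of that idea, not code from the paper: a linear generator and a logistic discriminator are trained with hand-derived gradients on a 1-D Gaussian target. The distribution, learning rate, and step counts are all illustrative assumptions.

```python
import numpy as np

# Toy GAN sketch (illustrative, not from the reviewed paper):
# generator G(z) = a*z + b tries to match real data drawn from N(3, 1);
# discriminator D(x) = sigmoid(w*x + c) tries to tell real from fake.

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.0, 0.0   # discriminator parameters
lr, batch = 0.05, 64

def generate(n):
    z = rng.standard_normal(n)
    return a * z + b, z

initial_mean = generate(1000)[0].mean()

for step in range(2000):
    # Discriminator step: minimize -log D(real) - log(1 - D(fake)).
    real = rng.normal(3.0, 1.0, batch)
    fake, _ = generate(batch)
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    # Gradients of the loss w.r.t. the logit: (D-1) for real, D for fake.
    grad_w = np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(d_real - 1.0) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: non-saturating loss, minimize -log D(fake).
    fake, z = generate(batch)
    d_fake = sigmoid(w * fake + c)
    # Chain rule: d(-log D)/d(fake) = (D - 1) * w, and fake = a*z + b.
    grad_a = np.mean((d_fake - 1.0) * w * z)
    grad_b = np.mean((d_fake - 1.0) * w)
    a -= lr * grad_a
    b -= lr * grad_b

final_mean = generate(1000)[0].mean()
print(f"fake mean before training: {initial_mean:.2f}, after: {final_mean:.2f}")
```

In this toy setting the generator's output mean drifts from 0 toward the real mean of 3 as the two players alternate updates; conditional GANs, the paper's main focus, extend the same loop by feeding an extra input (an image, audio clip, or video) to both networks.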