{"title":"Research on Text to Image Based on Generative Adversarial Network","authors":"Li Xiaolin, Gao Yuwei","doi":"10.1109/ITCA52113.2020.00077","DOIUrl":null,"url":null,"abstract":"In recent years, Generative Adversarial Network (GAN) has quickly become the most popular deep generative model framework, and it is also the most popular topic in the current deep learning research field. Although the generative adversarial network has achieved remarkable results from text description to image generation, when a complex image containing multiple objects, the position of each object will be blurred and overlapped, and the edges of the generated image will be blurred and local textures will be unclear. Usually given text description can generate the corresponding rough image, but there are still some problems in the image details. In order to solve the above problems, on the basis of Stack GAN, a scene graph-based stacked generative confrontation network model (Scene graph stack GAN, SGS-GAN) is proposed, which converts the text description into The scene graph uses the scene graph as the condition vector and inputs the random noise into the generator model to obtain the result image. The experimental results show that the Inception store of the SGS-GAN model on the Visual Genome and COCO data sets reached 6.64 and 6.52, respectively, which were increased by 0.212 and 0.219 compared to Sg2Im. This proves that the diversity and vividness of the generated samples and the sharpness of the image are obviously improved after the number of times of training and the input of the scene graph.","PeriodicalId":103309,"journal":{"name":"2020 2nd International Conference on Information Technology and Computer Application (ITCA)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 2nd International Conference on Information Technology and Computer Application (ITCA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ITCA52113.2020.00077","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In recent years, the Generative Adversarial Network (GAN) has quickly become the most popular deep generative model framework and one of the most actively studied topics in deep learning. Although GANs have achieved remarkable results in generating images from text descriptions, when a complex image containing multiple objects is generated, the positions of the objects tend to blur and overlap, the edges of the generated image are blurred, and local textures are unclear. A given text description can usually produce a corresponding rough image, but problems remain in the image details. To address these problems, a scene-graph-based stacked generative adversarial network model (Scene Graph Stack GAN, SGS-GAN) is proposed on the basis of StackGAN. The model converts the text description into a scene graph, uses the scene graph as the condition vector, and feeds it together with random noise into the generator to obtain the resulting image. Experimental results show that the Inception Score of the SGS-GAN model on the Visual Genome and COCO data sets reaches 6.64 and 6.52, respectively, improvements of 0.212 and 0.219 over Sg2Im. This indicates that, after training with the scene-graph input, the diversity and vividness of the generated samples and the sharpness of the images are clearly improved.
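The conditioning scheme the abstract describes can be illustrated with a minimal sketch: a scene-graph embedding serves as the condition vector and is concatenated with random noise before being fed to the generator. The paper itself does not include code, so everything below is hypothetical: the toy graph encoder, module names, and dimensions are illustrative assumptions, not SGS-GAN's actual architecture (which builds on StackGAN and Sg2Im).

```python
# Minimal sketch of scene-graph-conditioned generation (not the authors' code).
# Assumed names: SceneGraphEncoder, Generator, and all dimensions are illustrative.
import torch
import torch.nn as nn

class SceneGraphEncoder(nn.Module):
    """Toy encoder: averages learned embeddings of the scene graph's
    object and relation tokens into a single condition vector."""
    def __init__(self, vocab_size=1000, cond_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, cond_dim)

    def forward(self, token_ids):                    # (batch, num_tokens)
        return self.embed(token_ids).mean(dim=1)     # (batch, cond_dim)

class Generator(nn.Module):
    """Maps [noise ; condition] to a 64x64 RGB image, StackGAN stage-I style."""
    def __init__(self, noise_dim=100, cond_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + cond_dim, 4 * 4 * 256),
            nn.Unflatten(1, (256, 4, 4)),
            # Each transposed conv doubles spatial size: 4 -> 8 -> 16 -> 32 -> 64
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z, cond):
        # Scene-graph condition vector concatenated with the random noise
        return self.net(torch.cat([z, cond], dim=1))

encoder, generator = SceneGraphEncoder(), Generator()
graph_tokens = torch.randint(0, 1000, (2, 12))  # placeholder scene-graph tokens
cond = encoder(graph_tokens)                    # condition vector from the graph
z = torch.randn(2, 100)                         # random noise input
images = generator(z, cond)                     # (2, 3, 64, 64) generated images
```

Under this reading, the scene graph plays the role that the sentence embedding plays in StackGAN: it constrains object identities and relations so that generated objects are less likely to blur into one another, which is the failure mode the abstract identifies for multi-object scenes.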