{"title":"时尚gan:走向时尚插画一代","authors":"Xingxing Zou, W. Wong","doi":"10.1177/24723444221147972","DOIUrl":null,"url":null,"abstract":"In this article, we propose StylishGAN, a generative adversarial network that generates a fashion illustration sketch given an actual photo of a human model. The generated stylish sketches not only capture the image style from real photos to hand drawings with a cleaner background, but also adjust model’s body into a perfectly proportioned shape. StylishGAN learns proportional transformation and texture information through a proposed body-shaping attentional module. Furthermore, we introduce a contextual fashionable loss that augments the design details, especially the fabric texture, of the clothing. To implement our method, we prepare a new fashion dataset, namely, StylishU, that consists of 3578 paired photo–sketch images. In each pair, we have one real photo collected from the fashion show and one corresponding illustration sketch created by professional fashion illustrators. Extensive experiments show the performance of our method qualitatively and quantitatively.","PeriodicalId":6955,"journal":{"name":"AATCC Journal of Research","volume":" ","pages":""},"PeriodicalIF":0.6000,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"StylishGAN: Toward Fashion Illustration Generation\",\"authors\":\"Xingxing Zou, W. Wong\",\"doi\":\"10.1177/24723444221147972\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this article, we propose StylishGAN, a generative adversarial network that generates a fashion illustration sketch given an actual photo of a human model. The generated stylish sketches not only capture the image style from real photos to hand drawings with a cleaner background, but also adjust model’s body into a perfectly proportioned shape. StylishGAN learns proportional transformation and texture information through a proposed body-shaping attentional module. Furthermore, we introduce a contextual fashionable loss that augments the design details, especially the fabric texture, of the clothing. To implement our method, we prepare a new fashion dataset, namely, StylishU, that consists of 3578 paired photo–sketch images. In each pair, we have one real photo collected from the fashion show and one corresponding illustration sketch created by professional fashion illustrators. 
Extensive experiments show the performance of our method qualitatively and quantitatively.\",\"PeriodicalId\":6955,\"journal\":{\"name\":\"AATCC Journal of Research\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.6000,\"publicationDate\":\"2023-02-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"AATCC Journal of Research\",\"FirstCategoryId\":\"88\",\"ListUrlMain\":\"https://doi.org/10.1177/24723444221147972\",\"RegionNum\":4,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"MATERIALS SCIENCE, TEXTILES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"AATCC Journal of Research","FirstCategoryId":"88","ListUrlMain":"https://doi.org/10.1177/24723444221147972","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"MATERIALS SCIENCE, TEXTILES","Score":null,"Total":0}
In this article, we propose StylishGAN, a generative adversarial network that generates a fashion illustration sketch from an actual photo of a human model. The generated sketches not only transfer the image style from real photo to hand drawing with a cleaner background, but also adjust the model's body into a perfectly proportioned shape. StylishGAN learns the proportional transformation and texture information through a proposed body-shaping attentional module. Furthermore, we introduce a contextual fashionable loss that augments the design details of the clothing, especially the fabric texture. To support our method, we prepare a new fashion dataset, namely StylishU, which consists of 3578 paired photo–sketch images. Each pair comprises one real photo collected from a fashion show and one corresponding illustration sketch created by a professional fashion illustrator. Extensive experiments demonstrate the performance of our method both qualitatively and quantitatively.
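The abstract names three technical pieces: an image-to-image generator, a body-shaping attentional module, and a contextual fashionable loss aimed at fabric texture. Below is a minimal sketch, assuming PyTorch, of one plausible way such pieces could fit together. Every name in it (SketchGenerator, BodyShapingAttention, contextual_texture_loss) is a hypothetical stand-in, and the self-attention block and best-match cosine loss are generic techniques chosen for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of a photo-to-sketch GAN generator with an attention
# module and a contextual texture loss. Names and architecture are illustrative
# stand-ins for the components described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BodyShapingAttention(nn.Module):
    """Illustrative self-attention gate: reweights spatial features so the
    generator can emphasize (and re-proportion) body regions."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned blend weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c')
        k = self.key(x).flatten(2)                     # (b, c', hw)
        attn = F.softmax(q @ k, dim=-1)                # (b, hw, hw)
        v = self.value(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual connection


class SketchGenerator(nn.Module):
    """Tiny encoder-attention-decoder; a real model would be much deeper."""

    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.attend = BodyShapingAttention(128)
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, photo):
        return self.decode(self.attend(self.encode(photo)))


def contextual_texture_loss(feat_fake, feat_real, eps=1e-5):
    """Illustrative contextual-style loss: matches each generated feature to
    its most similar real feature by cosine similarity, so texture statistics
    are compared without requiring pixel-level alignment."""
    def normalize(f):
        f = f.flatten(2)                               # (b, c, hw)
        f = f - f.mean(dim=2, keepdim=True)
        return f / (f.norm(dim=1, keepdim=True) + eps)

    ff, fr = normalize(feat_fake), normalize(feat_real)
    sim = ff.transpose(1, 2) @ fr                      # (b, hw, hw) cosine sims
    return (1.0 - sim.max(dim=-1).values).mean()       # best-match distance


if __name__ == "__main__":
    g = SketchGenerator()
    photo = torch.randn(1, 3, 128, 128)                # stand-in for a show photo
    sketch = g(photo)
    # In practice a pretrained network (e.g., VGG) would extract the features;
    # here the toy encoder doubles as the feature extractor for brevity.
    loss = contextual_texture_loss(g.encode(sketch), g.encode(photo))
    print(sketch.shape, float(loss))
```

The gamma-gated residual self-attention and the best-match cosine comparison are common choices for, respectively, letting a generator reweight spatial regions and comparing texture statistics without alignment, which is why they serve as stand-ins here.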
Journal Introduction:
AATCC Journal of Research is a textile research journal with a broad scope: from advanced materials, fibers, and textile and polymer chemistry to color science, apparel design, and sustainability.
Now indexed by the Science Citation Index Expanded (SCIE) and discoverable in the Clarivate Analytics Web of Science Core Collection! The Journal's impact factor is available in Journal Citation Reports.