{"title":"UFS-Net: Unsupervised Network For Fashion Style Editing And Generation","authors":"Wanqing Wu, Aihua Mao, W. Yan, Qing Liu","doi":"10.1109/ICME55011.2023.00360","DOIUrl":null,"url":null,"abstract":"AI-aided fashion design has attracted growing interest because it eliminates tedious manual operations. However, existing methods are costly because they require abundant labeled data or paired images for training. In addition, they have low flexibility in attribute editing. To overcome these limitations, we propose UFS-Net, a new unsupervised network for fashion style editing and generation. Specifically, we initially design a coarse-to-fine embedding process to embed the user-defined sketch and the real clothing into the latent space of StyleGAN. Subsequently, we propose a feature fusion scheme to generate clothing with attributes provided by the sketch. In this way, our network requires neither labels nor sketches during the training but can perform flexible attribute editing and conditional generation. Extensive experiments reveal that our method significantly outperforms state-of-the-art approaches. In addition, we introduce a new dataset, Fashion-Top, to address the limitations in the existing fashion datasets.","PeriodicalId":321830,"journal":{"name":"2023 IEEE International Conference on Multimedia and Expo (ICME)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Conference on Multimedia and Expo (ICME)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICME55011.2023.00360","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
AI-aided fashion design has attracted growing interest because it eliminates tedious manual operations. However, existing methods are costly because they require abundant labeled data or paired images for training, and they offer little flexibility in attribute editing. To overcome these limitations, we propose UFS-Net, a new unsupervised network for fashion style editing and generation. Specifically, we first design a coarse-to-fine embedding process to embed the user-defined sketch and the real clothing into the latent space of StyleGAN. We then propose a feature fusion scheme to generate clothing with the attributes provided by the sketch. In this way, our network requires neither labels nor sketches during training, yet can perform flexible attribute editing and conditional generation. Extensive experiments show that our method significantly outperforms state-of-the-art approaches. In addition, we introduce a new dataset, Fashion-Top, to address the limitations of existing fashion datasets.
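The abstract does not specify UFS-Net's implementation details, but the two steps it names, embedding images into StyleGAN's latent space and fusing the resulting codes, follow a well-known pattern. The sketch below illustrates that pattern only: a two-stage (coarse-to-fine) optimization-based GAN inversion into the extended W+ space, followed by layer-wise latent mixing. All names here (`generator`, `embed_coarse_to_fine`, `fuse_latents`, the layer split) are hypothetical and not taken from the paper.

```python
# Illustrative sketch, NOT UFS-Net's actual method. Assumes a pretrained
# StyleGAN2-style `generator` that maps a W+ code of shape
# (batch, NUM_LAYERS, LATENT_DIM) to an image tensor.
import torch
import torch.nn.functional as F

NUM_LAYERS = 18   # W+ layers for a 1024x1024 StyleGAN2 generator
LATENT_DIM = 512

def embed_coarse_to_fine(generator, target, avg_w,
                         coarse_steps=300, fine_steps=700, lr=0.01):
    """Embed `target` (1, 3, H, W) into W+ in two stages:
    coarse = one shared w vector, fine = per-layer refinement."""
    # Coarse stage: optimize a single latent broadcast to all layers.
    w = avg_w.clone().requires_grad_(True)            # (1, LATENT_DIM)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(coarse_steps):
        img = generator(w.unsqueeze(1).repeat(1, NUM_LAYERS, 1))
        loss = F.mse_loss(img, target)                # a perceptual loss is
        opt.zero_grad(); loss.backward(); opt.step()  # typically added here

    # Fine stage: let each layer's code move independently.
    w_plus = (w.detach().unsqueeze(1)
               .repeat(1, NUM_LAYERS, 1).requires_grad_(True))
    opt = torch.optim.Adam([w_plus], lr=lr * 0.1)
    for _ in range(fine_steps):
        img = generator(w_plus)
        loss = F.mse_loss(img, target)
        opt.zero_grad(); loss.backward(); opt.step()
    return w_plus.detach()

def fuse_latents(w_sketch, w_cloth, structure_layers=range(0, 7)):
    """Layer-wise fusion: take structure/shape from the sketch code and
    appearance (texture, color) from the real-clothing code. The 0-6 /
    7-17 split is an assumption based on common StyleGAN editing practice."""
    fused = w_cloth.clone()
    for i in structure_layers:
        fused[:, i] = w_sketch[:, i]
    return fused
```

Under this reading, conditional generation needs no sketches at training time: the generator and inversion are trained or run on unlabeled clothing images alone, and a user sketch is only embedded and fused at inference, which matches the abstract's claim of requiring neither labels nor sketches during training.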