Title: Incorporating Human Body Shape Guidance for Cloth Warping in Model to Person Virtual Try-on Problems
Authors: Debapriya Roy, Sanchayan Santra, B. Chanda
DOI: 10.1109/IVCNZ51579.2020.9290603
Published in: 2020 35th International Conference on Image and Vision Computing New Zealand (IVCNZ)
Publication date: 2020-11-25
Citations: 2
Abstract
The world of retail has witnessed substantial change in the last few decades, and with a size of 2.4 trillion, the fashion industry is way ahead of others in this respect. With technology such as virtual try-on (vton), even online shoppers can now virtually try a product before buying. However, current image-based virtual try-on methods still have a long way to go when it comes to producing realistic outputs. In general, vton methods work in two stages: the first stage warps the source cloth, and the second stage merges the warped cloth with the person image to predict the final try-on output. While the second stage is comparatively easy to handle with neural networks, predicting an accurate warp is difficult, as replicating actual human body deformation is challenging. A fundamental issue in the vton domain is data. Although many cloth images are available on the internet, on social media and e-commerce websites, most of them show a human wearing the cloth, whereas existing approaches are constrained to take separate cloth images as the input source clothing. To address these problems, we propose a model-to-person cloth warping strategy, where the objective is to align the cloth segmented from the model image so that it fits the target person, thus alleviating the need for separate cloth images. Compared to existing warping approaches, our method shows improvement especially in the case of cloth with complex patterns. Rigorous experiments on various public-domain datasets establish the efficacy of this method compared to benchmark methods.
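The two-stage pipeline described in the abstract (warp the source cloth to the target body, then merge it with the person image) can be illustrated with a minimal, self-contained sketch. This is not the paper's method: the paper learns a body-shape-guided warp, whereas the stand-in below uses a crude bounding-box alignment between the segmented cloth mask and the target body mask purely to make the two stages concrete. All function names (`warp_cloth_to_body`, `tryon`, `bbox`) are hypothetical.

```python
import numpy as np

def bbox(mask):
    """Bounding box (y0, y1, x0, x1) of a boolean mask."""
    ys, xs = np.nonzero(mask)
    return ys.min(), ys.max(), xs.min(), xs.max()

def warp_cloth_to_body(cloth, cloth_mask, body_mask):
    # Stage 1 (crude stand-in for the learned, body-shape-guided warp):
    # scale/translate the segmented cloth so its bounding box covers the
    # target body region, via inverse mapping from target to source pixels.
    y0, y1, x0, x1 = bbox(cloth_mask)
    ty0, ty1, tx0, tx1 = bbox(body_mask)
    H, W = body_mask.shape
    warped = np.zeros((H, W) + cloth.shape[2:], dtype=cloth.dtype)
    warped_mask = np.zeros((H, W), dtype=bool)
    for ty in range(ty0, ty1 + 1):
        for tx in range(tx0, tx1 + 1):
            # For each target pixel, sample the corresponding source pixel.
            sy = int(round(y0 + (ty - ty0) * (y1 - y0) / max(ty1 - ty0, 1)))
            sx = int(round(x0 + (tx - tx0) * (x1 - x0) / max(tx1 - tx0, 1)))
            if cloth_mask[sy, sx]:
                warped[ty, tx] = cloth[sy, sx]
                warped_mask[ty, tx] = True
    return warped, warped_mask

def tryon(person, warped_cloth, warped_mask):
    # Stage 2: merge the warped cloth with the person image to form
    # the try-on output (here a hard composite; the paper uses a network).
    result = person.copy()
    result[warped_mask] = warped_cloth[warped_mask]
    return result
```

In the actual paper both stages are learned, and the warp is guided by the target person's body shape rather than a bounding box; the sketch only shows how the warped-cloth output of stage 1 feeds the compositing of stage 2.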