{"title":"High-quality 3D Clothing Reconstruction and Virtual-Try-On: Pants case","authors":"Thanh Tuan Thai, Youngsik Yun, Heejune Ahn","doi":"10.1109/MAPR56351.2022.9924990","DOIUrl":null,"url":null,"abstract":"Virtual try-on (VTON) is filling the gap between online and offline shopping. This paper extends Cloth3D, which uses top clothing only, and proposes a pipeline for high-resolution virtual try-on for pants based on 3D clothing reconstruction. In-shop pants image is first reconstructed into 3D by finding the SMPL body model fitted to the pants and building the clothing model. Then, the clothing model is reposed to the human reference image and projected to a 2D image to get 3D warped pants. These warped pants and the identities from the reference person image are going through the blending network to get the try-on. Moreover, a target segmentation is also estimated for control input for the blending (in-painting) network. Our experiments and evaluation on a new fashion dataset show natural VTON results for service.","PeriodicalId":138642,"journal":{"name":"2022 International Conference on Multimedia Analysis and Pattern Recognition (MAPR)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Multimedia Analysis and Pattern Recognition (MAPR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MAPR56351.2022.9924990","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Virtual try-on (VTON) bridges the gap between online and offline shopping. This paper extends Cloth3D, which handles top clothing only, and proposes a pipeline for high-resolution virtual try-on of pants based on 3D clothing reconstruction. The in-shop pants image is first reconstructed into 3D by fitting an SMPL body model to the pants and building the clothing model on top of it. The clothing model is then reposed to match the reference person image and projected to a 2D image, yielding 3D-warped pants. The warped pants and the identity features from the reference person image are passed through a blending network to produce the try-on result. In addition, a target segmentation is estimated as a control input for the blending (in-painting) network. Experiments and evaluation on a new fashion dataset show natural VTON results suitable for service.
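The pipeline described in the abstract can be sketched as a chain of stages. The sketch below is purely illustrative: every function name, tensor shape, and stage interface is a hypothetical placeholder (not the authors' implementation), and the geometric and network components are stubbed out to keep the structure visible.

```python
import numpy as np

# Hypothetical sketch of the described pants-VTON pipeline.
# Each stage is a stub standing in for the paper's actual components
# (SMPL fitting, 3D clothing reconstruction, reposing + 2D projection,
# target segmentation estimation, and the blending/in-painting network).

def fit_smpl_to_pants(pants_image):
    """Stub: estimate SMPL body parameters fitted to the in-shop pants."""
    pose = np.zeros(72)    # SMPL axis-angle pose parameters (24 joints x 3)
    shape = np.zeros(10)   # SMPL shape coefficients
    return pose, shape

def build_cloth_model(pants_image, pose, shape):
    """Stub: reconstruct a 3D clothing mesh from the image and fitted body."""
    return np.zeros((1000, 3))  # placeholder mesh vertices

def repose_and_project(cloth_vertices, target_pose):
    """Stub: repose the cloth to the reference pose and project to 2D."""
    return np.zeros((256, 192, 3))  # 3D-warped pants image

def estimate_target_segmentation(person_image, warped_pants):
    """Stub: predict the try-on segmentation used as blending control."""
    return np.zeros((256, 192), dtype=np.uint8)

def blend(person_image, warped_pants, segmentation):
    """Stub: blending (in-painting) network producing the try-on result."""
    return np.where(segmentation[..., None] > 0, warped_pants, person_image)

def virtual_try_on(pants_image, person_image, target_pose):
    pose, shape = fit_smpl_to_pants(pants_image)
    cloth = build_cloth_model(pants_image, pose, shape)
    warped = repose_and_project(cloth, target_pose)
    seg = estimate_target_segmentation(person_image, warped)
    return blend(person_image, warped, seg)

result = virtual_try_on(
    pants_image=np.zeros((256, 192, 3)),
    person_image=np.zeros((256, 192, 3)),
    target_pose=np.zeros(72),
)
print(result.shape)  # (256, 192, 3)
```

The key design point conveyed by the abstract is that warping happens in 3D (via the reposed clothing mesh) rather than by 2D image warping, and that the estimated target segmentation acts as an explicit control signal for the final blending network.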