Graphical Models, Volume 115, Article 101106. Pub Date: 2021-05-01. DOI: 10.1016/j.gmod.2021.101106
"Learning a shared deformation space for efficient design-preserving garment transfer"
Min Shi, Yukun Wei, Lan Chen, Dengming Zhu, Tianlu Mao, Zhaoqi Wang

Abstract: Garment transfer from a source mannequin to a shape-varying individual is a vital technique in computer graphics. Existing garment transfer methods are either time-consuming or lose designed details, especially for clothing with complex styles. In this paper, we propose a data-driven approach to efficiently transfer garments between two distinctive bodies while preserving the source design. Given two sets of simulated garments on a source body and a target body, we use deformation gradients as the representation. Since the garments in our dataset have various topologies, we embed the cloth deformation onto the body. For garment transfer, the deformation is decomposed into two aspects, namely style and shape. An encoder-decoder network is proposed to learn a shared space that is invariant to garment style but related to the deformation of human bodies. For a new garment in a different style worn by the source human, our method can efficiently transfer it to the target body via the shared shape deformation, while preserving the designed details. We qualitatively and quantitatively evaluate our method on a diverse set of 3D garments that showcase rich wrinkling patterns. Experiments show that the transferred garments preserve the source design even if the target body is quite different from the source one.
{"title":"Landmark Detection and 3D Face Reconstruction for Caricature using a Nonlinear Parametric Model","authors":"Hongrui Cai, Yudong Guo, Zhuang Peng, Juyong Zhang","doi":"10.1016/j.gmod.2021.101103","DOIUrl":"10.1016/j.gmod.2021.101103","url":null,"abstract":"<div><p><span><span>Caricature is an artistic abstraction of the human face by distorting or exaggerating certain facial features, while still retains a likeness with the given face. Due to the large diversity of geometric and texture variations, automatic landmark detection and 3D face reconstruction for caricature is a challenging problem and has rarely been studied before. In this paper, we propose the first automatic method for this task by a novel 3D approach. To this end, we first build a dataset with various styles of 2D caricatures and their corresponding </span>3D shapes<span><span>, and then build a parametric model on vertex based deformation space for 3D caricature face. Based on the constructed dataset and the nonlinear parametric model, we propose a </span>neural network<span> based method to regress the 3D face shape and orientation from the input 2D caricature image. Ablation studies and comparison with state-of-the-art methods demonstrate the effectiveness of our algorithm design. Extensive experimental results demonstrate that our method works well for various caricatures. Our constructed dataset, source code and trained model are available at </span></span></span><span>https://github.com/Juyong/CaricatureFace</span><svg><path></path></svg>.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"115 ","pages":"Article 101103"},"PeriodicalIF":1.7,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.gmod.2021.101103","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74961659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Graphical Models, Volume 115, Article 101107. Pub Date: 2021-05-01. DOI: 10.1016/j.gmod.2021.101107
"BPA-GAN: Human motion transfer using body-part-aware generative adversarial networks"
Jinfeng Jiang, Guiqing Li, Shihao Wu, Huiqian Zhang, Yongwei Nie

Abstract: Human motion transfer has many applications in human behavior analysis, training-data augmentation, and personalization in mixed reality. We propose a Body-Parts-Aware Generative Adversarial Network (BPA-GAN) for image-based human motion transfer. Our key idea is to exploit the human body with segmented parts, instead of the human skeleton used by most existing methods, to encode the human motion information. As a result, we improve the reconstruction quality, training efficiency, and temporal consistency by training multiple GANs in a local-to-global manner and by regularizing the source motion. Extensive experiments show that our method outperforms the baseline and state-of-the-art techniques in preserving the details of body parts.

Graphical Models, Volume 115, Article 101105. Pub Date: 2021-05-01. DOI: 10.1016/j.gmod.2021.101105
"Heterogeneous porous scaffold generation using trivariate B-spline solids and triply periodic minimal surfaces"
Chuanfeng Hu, Hongwei Lin

Abstract: A porous scaffold is a three-dimensional network structure composed of a large number of pores, and triply periodic minimal surfaces (TPMSs) are a conventional tool for designing porous scaffolds. However, discontinuity, incompleteness, and high storage requirements are the three main shortcomings of porous scaffold design using TPMSs. In this study, we developed an effective method for heterogeneous porous scaffold generation that overcomes these shortcomings. The input of the proposed method is a trivariate B-spline solid with a cubic parametric domain. The method first constructs a threshold distribution field (TDF) in the cubic parametric domain, and then produces a continuous and complete TPMS within it. Finally, by mapping the TPMS from the parametric domain to the trivariate B-spline solid, a continuous and complete porous scaffold is generated. Moreover, we defined a new space-saving file format based on the TDF to store porous scaffolds. The experimental results presented in this paper demonstrate the effectiveness and efficiency of the method using a trivariate B-spline solid, as well as the superior space savings of the proposed storage format.

Graphical Models, Volume 115, Article 101102. Pub Date: 2021-05-01. DOI: 10.1016/j.gmod.2021.101102
"Learning 3D face reconstruction from a single sketch"
Li Yang, Jing Wu, Jing Huo, Yu-Kun Lai, Yang Gao

Abstract: 3D face reconstruction from a single image is a classic computer vision problem with many applications. However, most works reconstruct from face photos, and little attention has been paid to reconstruction from other portrait forms. In this paper, we propose a learning-based approach to reconstruct a 3D face from a single face sketch. To overcome the lack of paired sketch/3D data for supervised learning, we introduce a photo-to-sketch synthesis technique to obtain paired training data, and propose a dual-path architecture that achieves synergistic 3D reconstruction from both sketches and photos. We further propose a novel line loss function that refines the reconstruction while preserving the characteristic details depicted by the sketch lines. Our method outperforms state-of-the-art 3D face reconstruction approaches in terms of reconstruction from face sketches. We also demonstrate the use of our method for easy editing of details on 3D face models.

Graphical Models, Volume 114, Article 101100. Pub Date: 2021-03-01. DOI: 10.1016/j.gmod.2021.101100
"Orthogonality of isometries in the conformal model of the 3D space"
Carlile Lavor, Michael Souza, José Luis Aragón

Abstract: Motivated by questions on the orthogonality of isometries, we present a new construction of the conformal model of 3D space using only elementary linear algebra. In addition to pictures that help readers understand the conformal model, our approach yields matrix representations of isometries that can be useful, for example, in applications of computational geometry, including computer graphics, robotics, and molecular geometry.

Graphical Models, Volume 114, Article 101098. Pub Date: 2021-03-01. DOI: 10.1016/j.gmod.2021.101098
"Hybrid function representation for heterogeneous objects"
A. Tereshin, A. Pasko, O. Fryazinov, V. Adzhiev

Abstract: Heterogeneous object modelling is an emerging area in which geometric shapes are considered in concert with their internal, physically based attributes. This paper describes a novel theoretical and practical framework for modelling volumetric heterogeneous objects on the basis of a unifying, functionally based hybrid representation called HFRep. This new representation yields a continuous, smooth distance field in Euclidean space and preserves the advantages of the conventional representations based on scalar fields of different kinds without their drawbacks. We systematically describe the mathematical and algorithmic basics of HFRep. The steps of the basic algorithm are presented in detail for both geometry and attributes. To address several problematic issues, we suggest practical solutions, including a new algorithm for solving the eikonal equation on hierarchical grids. Finally, we show the practicality of the approach by modelling several representative heterogeneous objects, including those of a time-variant nature.

Graphical Models, Volume 114, Article 101099. Pub Date: 2021-03-01. DOI: 10.1016/j.gmod.2021.101099
"Normal manipulation for bas-relief modeling"
Zhongping Ji, Xianfang Sun, Yu-Wei Zhang, Weiyin Ma, Mingqiang Wei

Abstract: We introduce a normal-based modeling framework for bas-relief generation and stylization, motivated by recent advances in this topic. Creating bas-reliefs from normal images has successfully facilitated bas-relief modeling in image space. However, the use of normal images in previous work is restricted to cut-and-paste or layer-blending operations, which simply treat a normal vector as a pixel of a general color image. This paper extends normal-based methods by processing the normal image from a geometric perspective. Our method can not only generate a new normal image by combining various frequencies of existing normal images and transferring details, but can also build bas-reliefs from a single RGB image and its edge-based sketch lines. In addition, we introduce an auxiliary function to represent a smooth base surface or to generate a layered global shape. To integrate the above considerations into our framework, we formulate bas-relief generation as a variational problem that can be solved with a screened Poisson equation. One important advantage of our method is that it can generate more styles than previous methods, thereby expanding the bas-relief shape space. We tested our method on a range of normal images, and it compares favorably to other popular classic and state-of-the-art methods.

Graphical Models, Volume 113, Article 101096. Pub Date: 2021-01-01. DOI: 10.1016/j.gmod.2021.101096
"Bas-relief generation from point clouds based on normal space compression with real-time adjustment on CPU"
Jianhui Nie, Wenkai Shi, Ye Liu, Hao Gao, Feng Xu, Zhaochen Zhang

Abstract: This paper presents an algorithm that generates bas-reliefs directly from scattered point clouds. Compared with the popular gradient-domain methods for mesh surfaces, this algorithm operates on normal vectors, making it independent of topological connectivity and thus more suitable for point clouds and easier to implement. By constructing linear equations in the bas-relief height and using a subspace solution strategy, the algorithm can adjust the bas-relief effect in real time, relying only on the computing power of a consumer CPU. In addition, we propose an iterative solution for generating a bas-relief model of a specified height. The experimental results indicate that our algorithm provides a unified solution for generating different types of bas-reliefs with good saturation and rich details.

Graphical Models, Volume 112, Article 101093. Pub Date: 2020-11-01. DOI: 10.1016/j.gmod.2020.101093
"Surface-based computation of the Euler characteristic in the cubical grid"
Lidija Čomić, Paola Magillo

Abstract: For well-composed (manifold) objects in the 3D cubical grid, the Euler characteristic is equal to half of the Euler characteristic of the object boundary, which in turn is equal to the number of boundary vertices minus the number of boundary faces. We extend this formula to arbitrary objects, not necessarily well-composed, by adjusting the count of boundary cells both for vertex- and for face-adjacency. We prove the correctness of our approach by constructing two well-composed polyhedral complexes homotopy equivalent to the given object with the two adjacencies. The proposed formulas for the computation of the Euler characteristic are simple, easy to implement, and efficient. Experiments show that our formulas are faster to evaluate than the volume-based ones on realistic inputs, and faster than the classical surface-based formulas.