{"title":"RegionSketch: Interactive and Rapid Creation of 3D Models with Rich Details","authors":"Shuairen Liu, Fei Hou, A. Hao, Hong Qin","doi":"10.2312/PG.20191331","DOIUrl":"https://doi.org/10.2312/PG.20191331","url":null,"abstract":"In this paper, we articulate a new approach to interactive generation of 3D models with rich details by way of sketching sparse 2D strokes. Our novel method is a natural extension of Poisson vector graphics (PVG). We design new algorithms that distinguish themselves from other existing sketch-based design systems with three unique features: (1) A novel sketch metaphor to create freeform surface based on Poisson’s equation, which is simple, intuitive, and free of ambiguity; (2) Convenient and flexible user interface that affords the user to add rich details to the surface with simple sketch input; and (3) Rapid model creation with sparse strokes, which enables novice users to enjoy the utilities of our system to create expected 3D models. We validate the proposed method through a large repository of interactively sketched examples. Our experiments and produced results confirm that our new method is a simple yet efficient design tool for modeling free-form shapes with simple and intuitive 2D sketches input. CCS Concepts • Computing methodologies → Sketch-based Modeling;","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"60 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86953724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"InspireMePosing: Learn Pose and Composition from Portrait Examples","authors":"Bin Sheng, Yuxi Jin, Ping Li, Wenxiao Wang, Hongbo Fu, E. Wu","doi":"10.2312/PG.20181274","DOIUrl":"https://doi.org/10.2312/PG.20181274","url":null,"abstract":"Since people tend to build relationship with others by personal photography, capturing high quality photographs on mobile device has become a strong demand. We propose a portrait photography guidance system to guide user's photographing. We consider current scene image as our input and find professional photograph examples with similar aesthetic features for it. Deep residual network is introduced to gather scene classification information and represent common photograph rules by features, and random forest is adopted to establishing mapping relations between extracted features and examples. Besides, we implement our guidance system on a camera application and evaluate it by user study.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"29 1","pages":"33-35"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78677719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shape Interpolation via Multiple Curves","authors":"Y. Sahillioğlu, M. Aydinlilar","doi":"10.2312/PG.20181292","DOIUrl":"https://doi.org/10.2312/PG.20181292","url":null,"abstract":"We present a method that interpolates new shapes between a given pair of source and target shapes. To this end, we utilize a database of related shapes that is used to replace the direct transition from the source to the target by a composition of small transitions. This so-called data-driven interpolation scheme proved useful as long as the database is sufficiently large. We advance this idea one step further by processing the database shapes part by part, which in turn enables realistic interpolations with relatively small databases. We obtain promising preliminary results and point out potential improvements that we intend to address in our future work.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"8 1","pages":"9-11"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90969233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Deep Learned Method for Video Indexing and Retrieval","authors":"X. Men, F. Zhou, Xiaoyong Li","doi":"10.2312/PG.20181287","DOIUrl":"https://doi.org/10.2312/PG.20181287","url":null,"abstract":"In this paper, we proposed a deep neural network based method for content based video retrieval. Our approach leveraged the deep neural network to generate the semantic information and introduced the graph-based storage structure to establish the video indices. We devised the Inception-Single Shot Multibox Detector (ISSD) and RI3D model to extract spatial semantic information (objects) and extract temporal semantic information (actions). Our ISSD model achieved a mAP of 26.7% on MS COCO dataset, increasing 3.2% over the original SSD model, while the RI3D model achieved a top-1 accuracy of 97.7% on dataset UCF-101. And we also introduced the graph structure to build the video index with the temporal and spatial semantic information. Our experiment results showed that the deep learned semantic information is highly effective for video indexing and retrieval.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"44 1","pages":"85-88"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83025012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling Detailed Cloud Scene from Multi-source Images","authors":"Yunchi Cen, Xiaohui Liang, Junping Chen, Bailin Yang, Frederick W. B. Li","doi":"10.2312/PG.20181278","DOIUrl":"https://doi.org/10.2312/PG.20181278","url":null,"abstract":"Realistic cloud is essential for enhancing the quality of computer graphics applications, such as flight simulation. Data-driven method is an effective way in cloud modeling, but existing methods typically only utilize one data source as input. For example, natural images are usually used to model small-scale cloud with details, and satellite images and WRF data are used to model large scale cloud without details. To construct large-scale cloud scene with details, we propose a novel method to extract relevant cloud information from both satellite and natural images. Experiments show our method can produce more detailed cloud scene comparing with existing methods.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"84 1","pages":"49-52"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88113495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Skeleton-based Generalized Cylinder Deformation under the Relative Curvature Condition","authors":"Ruibin Ma, Qingyu Zhao, Rui Wang, J. Damon, J. Rosenman, S. Pizer","doi":"10.2312/PG.20181275","DOIUrl":"https://doi.org/10.2312/PG.20181275","url":null,"abstract":"Deformation of a generalized cylinder with a parameterized shape change of its centerline is a non-trivial task when the surface is represented as a high-resolution triangle mesh, particularly when self-intersection and local distortion are to be avoided. We introduce a deformation approach that satisfies these properties based on the skeleton (densely sampled centerline and cross sections) of a generalized cylinder. Our approach uses the relative curvature condition to extract a reasonable centerline for a generalized cylinder whose orthogonal cross sections will not intersect. Given the desired centerline shape as a parametric curve, the displacements on the cross sections are determined while controlling for twisting effects, and under this constraint a vertex-wise displacement field is calculated by minimizing a quadratic surface bending energy. The method is tested on complicated generalized cylindrical objects. In particular, we discuss one application of the method for human colon (large intestine) visualization.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"17 12","pages":"37-40"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72374045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"StretchDenoise: Parametric Curve Reconstruction with Guarantees by Separating Connectivity from Residual Uncertainty of Samples","authors":"S. Ohrhallinger, M. Wimmer","doi":"10.2312/pg.20181266","DOIUrl":"https://doi.org/10.2312/pg.20181266","url":null,"abstract":"We reconstruct a closed denoised curve from an unstructured and highly noisy 2D point cloud. Our proposed method uses a two- pass approach: Previously recovered manifold connectivity is used for ordering noisy samples along this manifold and express these as residuals in order to enable parametric denoising. This separates recovering low-frequency features from denoising high frequencies, which avoids over-smoothing. The noise probability density functions (PDFs) at samples are either taken from sensor noise models or from estimates of the connectivity recovered in the first pass. The output curve balances the signed distances (inside/outside) to the samples. Additionally, the angles between edges of the polygon representing the connectivity become minimized in the least-square sense. The movement of the polygon's vertices is restricted to their noise extent, i.e., a cut-off distance corresponding to a maximum variance of the PDFs. We approximate the resulting optimization model, which consists of higher-order functions, by a linear model with good correspondence. Our algorithm is parameter-free and operates fast on the local neighborhoods determined by the connectivity. We augment a least-squares solver constrained by a linear system to also handle bounds. This enables us to guarantee stochastic error bounds for sampled curves corrupted by noise, e.g., silhouettes from sensed data, and we improve on the reconstruction error from ground truth. Open source to reproduce figures and tables in this paper is available at: this https URL","PeriodicalId":88304,"journal":{"name":"Proceedings. 
Pacific Conference on Computer Graphics and Applications","volume":"13 1","pages":"1-4"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73101146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Japanese Kanji Font Style Transfer based on GAN with Unpaired Training","authors":"Hiroki Sakai, D. Niino, Takashi Ijiri","doi":"10.2312/pg.20181290","DOIUrl":"https://doi.org/10.2312/pg.20181290","url":null,"abstract":"To design a whole package of Japanese font is labor consuming, since it usually contains about 30k kanji characters. To support an efficient design process, this poster attempts to adopt a style transfer algorithm for font package completion. Given two font packages where one contains all characters and the other lacks a large part, we train CycleGAN to perform style transfer between the two packages and transfer the style from the former to the latter. To illustrate the feasibility of our technique, we performed style transfer experiments and achieved visually plausible results by using a relatively small training data set. CCS Concepts •Computing methodologies → Image processing;","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"35 1","pages":"5-6"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74152589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Direct Limit Volumes: Constant-Time Limit Evaluation for Catmull-Clark Solids","authors":"C. Altenhofen, J. Müller, D. Weber, A. Stork, D. Fellner","doi":"10.2312/PG.20181285","DOIUrl":"https://doi.org/10.2312/PG.20181285","url":null,"abstract":"","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"47 1","pages":"77-80"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74587363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust Material Graphs for Volume Rendering","authors":"Ojaswa Sharma, Tushar Arora, Apoorv Khattar","doi":"10.2312/PG.20181282","DOIUrl":"https://doi.org/10.2312/PG.20181282","url":null,"abstract":"A good transfer function in volume rendering requires careful consideration of the materials present in a volume. In this work we propose a graph based method that considerably reduces manual effort required in designing a transfer function and provides an easy interface for interaction with the volume. Our novel contribution is in proposing an algorithm for robust deduction of a material graph from a set of disconnected edges. Since we compute material topology of the objects, an enhanced rendering is possible with our method. This also allows us to selectively render objects and depict adjacent materials in a volume. CCS Concepts •Computing methodologies → Machine learning approaches; Rendering; Image segmentation; Volumetric models;","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"10 1","pages":"65-68"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82074300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}