{"title":"Illumination Space: A Feature Space for Radiance Maps","authors":"Andrew Chalmers, Todd Zickler, Taehyun Rhee","doi":"10.2312/pg.20201223","DOIUrl":"https://doi.org/10.2312/pg.20201223","url":null,"abstract":"From red sunsets to blue skies, the natural world contains breathtaking scenery with complex lighting which many computer graphics applications strive to emulate. Achieving such realism is a computationally challenging task and requires proficiency with rendering software. To aid in this process, radiance maps (RM) are a convenient storage structure for representing the real-world. In this form, it can be used to realistically illuminate synthetic objects or for backdrop replacement in chroma key compositing. An artist can also freely change a RM to another that better matches their desired lighting or background conditions. This motivates the need for a large collection of RMs such that an artist has a range of environmental conditions to choose from. Due to the practicality of RMs, databases of RMs have continually grown since its inception. However, a comprehensive collection of RMs is not useful without a method for searching through the collection. This thesis defines a semantic feature space that allows an artist to interactively browse through databases of RMs, with applications for both lighting and backdrop replacement in mind. The set of features are automatically extracted from the RMs in an offline pre-processing step, and are queried in real-time for browsing. Illumination features are defined to concisely describe lighting properties of a RM, allowing an artist to find a RM to illuminate their target scene. Texture features are used to describe visual elements of a RM, allowing an artist to search the database for reflective or backdrop properties for their target scene. A combination of the two sets of features allows an artist to search for RMs with desirable illumination effects which match the background environment.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"8 1","pages":"7-12"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82384358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Anisotropic Spectral Manifold Wavelet Descriptor for Deformable Shape Analysis and Matching","authors":"Qinsong Li, Shengjun Liu, Ling Hu, Xinru Liu","doi":"10.2312/PG.20181276","DOIUrl":"https://doi.org/10.2312/PG.20181276","url":null,"abstract":"In this paper, we present a novel framework termed Anisotropic Spectral Manifold Wavelet Transform (ASMWT) for shape analysis. ASMWT comprehensively analyzes the signals from multiple directions on local manifold regions of the shape with a series of low-pass and band-pass frequency filters in each direction. Using the ASMWT coefficients of a very simple function, we efficiently construct a localizable and discriminative multiscale point descriptor, named as the Anisotropic Spectral Manifold Wavelet Descriptor (ASMWD). Since the filters used in our descriptor are direction-sensitive and able to robustly reconstruct the signals with a finite number of scales, it makes our descriptor be intrinsic-symmetry unambiguous, compact as well as efficient. The extensive experimental results demonstrate that our method achieves significant performance than several state-of-the-art methods when applied in vertex-wise shape matching.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"3 1","pages":"41-44"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84216909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient Metropolis Path Sampling for Material Editing and Re-rendering","authors":"Tomoya Yamaguchi, Tatsuya Yatagawa, S. Morishima","doi":"10.2312/pg.20181271","DOIUrl":"https://doi.org/10.2312/pg.20181271","url":null,"abstract":"This paper proposes efficient path sampling for re-rendering scenes after material editing. The proposed sampling method is based on Metropolis light transport (MLT) and distributes more path samples to pixels whose values have been changed significantly by editing. First, we calculate the difference between images before and after editing to estimate the changes in pixel values. In this step, we render the difference image directly rather than calculating the difference in the images by separately rendering the images before and after editing. Then, we sample more paths for pixels with larger difference values and render the scene after editing by reducing variances of Monte Carlo estimators using the control variates. Thus, we can obtain rendering results with a small amount of noise using only a small number of path samples. We examine the proposed sampling method with a range of scenes and demonstrate that it achieves lower estimation errors and variances over the state-of-the-art methods.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"865 1","pages":"21-24"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86544851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bottom-up/Top-down Geometric Object Reconstruction with CNN Classification for Mobile Education","authors":"Ting Guo, Rundong Cui, Xiaoran Qin, Yongtao Wang, Zhi Tang","doi":"10.2312/pg.20181269","DOIUrl":"https://doi.org/10.2312/pg.20181269","url":null,"abstract":"Geometric objects in educational materials are often illustrated as 2D line drawings, which results in the loss of depth information. To alleviate the problem of fully understanding the 3D structure of geometric objects, we propose a novel method to reconstruct the 3D shape of a geometric object illustrated in a line drawing image. In contrast to most existing methods, ours directly take a single line drawing image as input and generate a valid sketch for reconstruction. Given a single input line drawing image, we first classify the geometric object in the image with convolution neural network (CNN). More specifically, we pre-train the model with simulated images to alleviate the problems of data collection and unbalanced distribution among different classes. Then, we generate the sketch of the geometric object with our proposed bottom-up and top-down scheme. Finally, we finish reconstruction by minimizing an objective function of reconstruction error. Extensive experimental results demonstrate that our method performs significantly better in both accuracy and efficiency compared with the existing methods.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"21 1","pages":"13-16"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79462508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GPU-based Real-time Cloth Simulation for Virtual Try-on","authors":"Tongkui Su, Yan Zhang, Yu Zhou, Yao Yu, S. Du","doi":"10.2312/PG.20181288","DOIUrl":"https://doi.org/10.2312/PG.20181288","url":null,"abstract":"We present a novel real-time approach for dynamic detailed clothing simulation on a moving body. The most distinctive feature of our method is that it divides dynamic simulation into two parts: local driving and static cloth simulation. In local driving, feature points of clothing will be handled between two consecutive frames. And then we apply static cloth simulation for a specific frame. Both parts are ecxuted in an entire parallel way. In practice, our system achieves real-time virtual try-on using a depth camera to capture the moving body model and meanwhile, keeps high-fidelity. Experimental results indicate that our method has significant speedups over prior related techniques.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"27 1","pages":"1-2"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78674702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gauss-Seidel Progressive Iterative Approximation (GS-PIA) for Loop Surface Interpolation","authors":"Zhihao Wang, Yajuan Li, Weiyin Ma, Chongyang Deng","doi":"10.2312/pg.20181284","DOIUrl":"https://doi.org/10.2312/pg.20181284","url":null,"abstract":"We propose a Gauss-Seidel progressive iterative approximation (GS-PIA) method for Loop subdivision surface interpolation by combining classical Gauss-Seidel iterative method for linear system and progressive iterative approximation (PIA) for data interpolation. We prove that GS-PIA is convergent by applying matrix theory. GS-PIA algorithm retains the good features of the classical PIA method, such as the resemblance with the given mesh and the advantages of both a local method and a global method. Compared with some existed interpolation methods of subdivision surfaces, GS-PIA algorithm has advantages in three aspects. First, it has a faster convergence rate compared with the PIA and WPIA algorithms. Second, compared with WPIA algorithm, GS-PIA algorithm need not to choose weights. Third, GS-PIA need not to modify the mesh topology compared with other methods with fairness measures. Numerical examples for Loop subdivision surfaces interpolation illustrated in this paper show the efficiency and effectiveness of GS-PIA algorithm. CCS Concepts •Computing methodologies → Parametric curve and surface models;","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"9 1","pages":"73-76"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73319049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recovering 3D Indoor Floor Plans by Exploiting Low-cost Spherical Photography","authors":"G. Pintore, F. Ganovelli, R. Pintus, Roberto Scopigno, E. Gobbetti","doi":"10.2312/PG.20181277","DOIUrl":"https://doi.org/10.2312/PG.20181277","url":null,"abstract":"We present a novel approach to automatically recover, from a small set of partially overlapping panoramic images, an indoor structure representation in terms of a 3D floor plan registered with a set of 3D environment maps. Our improvements over previous approaches include a new method for geometric context extraction based on a 3D facets representation, which combines color distribution analysis of individual images with sparse multi-view clues, as well as an efficient method to combine the facets from different point-of-view in the same world space, considering the reliability of the facets contribution. The resulting capture and reconstruction pipeline automatically generates 3D multi-room environments where most of the other previous approaches fail, such as in presence of hidden corners, large clutter and sloped ceilings, even without involving additional dense 3D data or tools. We demonstrate the effectiveness and performance of our approach on different real-world indoor scenes.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"1 1","pages":"45-48"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88153245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust and Efficient SPH Simulation for High-speed Fluids with the Dynamic Particle Partitioning Method","authors":"Z. Zheng, Yang Gao, Shuai Li, Hong Qin, A. Hao","doi":"10.2312/pg.20181268","DOIUrl":"https://doi.org/10.2312/pg.20181268","url":null,"abstract":"In this paper, our research efforts are devoted to the efficiency issue of the SPH simulation when the ratio of velocities among fluid particles is large. Specifically, we introduce a k-means clustering method into the SPH framework to dynamically partition fluid particles into two disjoint groups based on their velocities, we then use a two-scale time step scheme for these two types of particles. The smaller time steps are for particles with higher speed in order to preserve temporal details and guarantee the numerical stability. In contrast, the larger time steps are used for particles with smaller speeds to reduce the computational expense, and both types of particles are tightly coupled in the simulation. We conduct various experiments which have manifested the advantages of our methods over the conventional SPH technique and its new variants in terms of efficiency and stability. CCS Concepts •Computing methodologies → Animation; Physical simulation;","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"5 4 1","pages":"9-12"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90243075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extreme Feature Regions for Image Matching","authors":"Baijiang Fan, Yunbo Rao, J. Pu, Jianhua Deng","doi":"10.2312/PG.20181286","DOIUrl":"https://doi.org/10.2312/PG.20181286","url":null,"abstract":"Extreme feature regions are increasingly critical for many image matching applications on affine image-pairs. In this paper, we focus on the time-consumption and accuracy of using extreme feature regions to do the affine-invariant image matching. Specifically, we proposed novel image matching algorithm using three types of critical points in Morse theory to calculate precise extreme feature regions. Furthermore, Random Sample Consensus (RANSAC) method is used to eliminate the features of complex background, and improve the accuracy of the extreme feature regions. Moreover, the saddle regions is used to calculate the covariance matrix for image matching. Extensive experiments on several benchmark image matching databases validate the superiority of the proposed approaches over many recently proposed affine-invariant SIFT algorithms. CCS Concepts •Computing methodologies → Image processing; image-matching; random sample consensus; affine invariant;","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"66 1","pages":"81-84"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81105985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of Surface Anisotropy on Perception of Car Body Attractiveness","authors":"Jirí Filip, M. Kolafová","doi":"10.2312/pg.20181270","DOIUrl":"https://doi.org/10.2312/pg.20181270","url":null,"abstract":"In the automotive industry effect coatings are used to introduce customized product design, visually communicating the unique impression of a car. Industrial effect coatings systems achieve primarily a globally isotropic appearance, i.e., surface appearance that does not change when material rotates around its normal. To the contrary, anisotropic appearance exhibits variable behavior due to oriented structural elements. This paper studies to what extent anisotropic appearance improves a visual impression of a car body beyond a standard isotropic one. We ran several psychophysical studies identifying the proper alignment of an anisotropic axis over a car body, showing that regardless of the illumination conditions, subjects always preferred an anisotropy axis orthogonal to car body orientation. The majority of subjects also found the anisotropic appearance more visually appealing than the isotropic one.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"129 1","pages":"17-20"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75842074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}