{"title":"Measuring microstructures using confocal laser scanning microscopy for estimating surface roughness","authors":"Y. Dobashi, Takashi Ijiri, Hideki Todo, Kei Iwasaki, Makoto Okabe, S. Nishimura","doi":"10.1145/2945078.2945106","DOIUrl":"https://doi.org/10.1145/2945078.2945106","url":null,"abstract":"Realistic image synthesis is an important research goal in computer graphics. One important factor in achieving this goal is the bidirectional reflectance distribution function (BRDF), which largely governs the appearance of an object. Many BRDF models have therefore been developed. A physically-based BRDF based on microfacet theory [Cook and Torrance 1982] is widely used in many applications since it can produce highly realistic images. The microfacet-based BRDF consists of three terms: a Fresnel term, a normal distribution function, and a geometric term. There are many analytical and approximate models for each of these terms.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115293695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
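As a rough illustration of the three terms this abstract names (not code from the poster), a minimal Cook-Torrance-style evaluation with common choices for each term — Schlick's Fresnel approximation, the GGX normal distribution, and a Smith geometric term — might look like:

```python
import math

def fresnel_schlick(cos_i, f0):
    # Schlick's approximation to the Fresnel term
    return f0 + (1.0 - f0) * (1.0 - cos_i) ** 5

def ggx_ndf(cos_h, alpha):
    # GGX (Trowbridge-Reitz) normal distribution function
    a2 = alpha * alpha
    denom = cos_h * cos_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

def smith_g1(cos_v, alpha):
    # Smith masking term for GGX, for one direction
    a2 = alpha * alpha
    return 2.0 * cos_v / (cos_v + math.sqrt(a2 + (1.0 - a2) * cos_v * cos_v))

def microfacet_brdf(cos_i, cos_o, cos_h, f0=0.04, alpha=0.3):
    # f = D * F * G / (4 cos_i cos_o): the product of the three terms
    d = ggx_ndf(cos_h, alpha)
    f = fresnel_schlick(cos_i, f0)
    g = smith_g1(cos_i, alpha) * smith_g1(cos_o, alpha)
    return d * f * g / (4.0 * cos_i * cos_o)
```

Each factor here is one of the "many analytical and approximate models" the abstract refers to; the measured microstructures would feed the roughness parameter `alpha`.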
{"title":"A method for realistic 3D projection mapping using multiple projectors","authors":"Bilal Ahmed, Jong Hun Lee, Yong Yi Lee, Junho Choi, Yong Hwi Kim, M. Son, M. Joo, Kwan H. Lee","doi":"10.1145/2945078.2945154","DOIUrl":"https://doi.org/10.1145/2945078.2945154","url":null,"abstract":"Recently, researchers have shown much interest in 3D projection mapping systems, but relatively little work has been done on making the content look realistic. Much work exists on multi-projector blending, 3D projection mapping, and multi-projector-based large displays, but existing color-compensation-based systems still suffer from contrast compression, color inconsistencies, and inappropriate luminance over the three-dimensional projection surface, giving rise to an unappealing appearance. Achieving a realistic result with projection mapping on 3D objects, when compared with a similar original object, remains a challenge. In this paper, we present a framework that optimizes projected images using multiple projectors in order to achieve an appearance close to that of a real object whose appearance is being regenerated by projection mapping.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"14 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114093527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Camera calibration by recovering projected centers of circle pairs","authors":"Qian Chen, Haiyuan Wu, Shinichi Higashino, R. Sakamoto","doi":"10.1145/2945078.2945117","DOIUrl":"https://doi.org/10.1145/2945078.2945117","url":null,"abstract":"In this paper, we present a convenient method for camera calibration with arbitrary co-planar circle-pairs from one image. This method is based on the accurate recovery of the projected centers of the circle pairs using a closed-form algorithm.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124206586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real time 360° video stitching and streaming","authors":"Rodrigo Marques Almeida da Silva, B. Feijó, Pablo B. Gomes, Thiago Frensh, Daniel Monteiro","doi":"10.1145/2945078.2945148","DOIUrl":"https://doi.org/10.1145/2945078.2945148","url":null,"abstract":"In this paper we propose a real-time 360° video stitching and streaming methodology focused on the GPU. The solution scales to large resolutions, such as 4K and 8K per camera, and supports broadcasting with cloud architectures. The methodology uses a group of deformable meshes, processed using OpenGL (GLSL), and the final image combines the inputs using a robust pixel shader. Moreover, the result can be streamed to a cloud service using H.264 encoding with NVENC GPU encoding. Finally, we present some results.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121705267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time rendering of high-quality effects using multi-frame sampling","authors":"Daniel Limberger, J. Döllner","doi":"10.1145/2945078.2945157","DOIUrl":"https://doi.org/10.1145/2945078.2945157","url":null,"abstract":"In a rendering environment of comparatively sparse interaction, e.g., digital production tools, image synthesis and its quality do not have to be constrained to single frames. This work analyzes strategies for highly economical rendering of state-of-the-art effects using progressive multi-frame sampling in real time. By distributing and accumulating samples of sampling-based rendering techniques (e.g., anti-aliasing, order-independent transparency, physically-based depth-of-field and shadowing, ambient occlusion, reflections) over multiple frames, images of very high quality can be synthesized with unequaled resource efficiency.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124756286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
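The core of the multi-frame sampling this abstract describes is incremental averaging of cheap per-frame samples. A minimal sketch (my illustration, not the authors' implementation) of progressively accumulating jittered frames into a running average:

```python
import random

def accumulate(render_frame, n_frames):
    """Average n_frames noisy frames progressively, the way a
    multi-frame sampler accumulates into a framebuffer."""
    avg = 0.0
    for n in range(1, n_frames + 1):
        sample = render_frame()       # one cheap, jittered frame
        avg += (sample - avg) / n     # running mean; no frame history needed
    return avg

# Example: each "frame" is a noisy estimate of a true pixel value 0.5
# (standing in for one jittered anti-aliasing or shadow sample).
random.seed(7)
estimate = accumulate(lambda: 0.5 + random.uniform(-0.2, 0.2), 1024)
```

The running-mean update means only the current average is stored per pixel, which is what makes distributing samples over many frames so resource-efficient.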
{"title":"A data-driven BSDF framework","authors":"Murat Kurt, G. Ward, Nicolas Bonneel","doi":"10.1145/2945078.2945109","DOIUrl":"https://doi.org/10.1145/2945078.2945109","url":null,"abstract":"We present a data-driven Bidirectional Scattering Distribution Function (BSDF) representation and a model-free technique that preserves the integrity of the original data and interpolates reflection as well as transmission functions for arbitrary materials. Our interpolation technique employs Radial Basis Functions (RBFs), Radial Basis Systems (RBSs) and displacement techniques to track peaks in the distribution. The proposed data-driven BSDF representation can be used to render arbitrary BSDFs and includes an efficient Monte Carlo importance sampling scheme. We show that our data-driven BSDF framework can be used to represent measured BSDFs that are visually plausible and demonstrably accurate.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115121393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
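As a generic illustration of RBF-based scattered-data interpolation — the building block named in this abstract, not the paper's actual BSDF scheme — one can solve for weights so that a sum of Gaussian kernels passes exactly through the measured samples:

```python
import math

def rbf_fit(xs, ys, eps=1.0):
    """Solve A w = y with A[i][j] = exp(-(eps*(xs[i]-xs[j]))**2),
    using plain Gaussian elimination (fine for small systems)."""
    n = len(xs)
    a = [[math.exp(-(eps * (xs[i] - xs[j])) ** 2) for j in range(n)] + [ys[i]]
         for i in range(n)]
    for col in range(n):                       # forward elimination, partial pivot
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= f * a[col][c]
    w = [0.0] * n                              # back substitution
    for i in reversed(range(n)):
        w[i] = (a[i][n] - sum(a[i][j] * w[j] for j in range(i + 1, n))) / a[i][i]
    return w

def rbf_eval(x, xs, w, eps=1.0):
    return sum(wi * math.exp(-(eps * (x - xi)) ** 2) for wi, xi in zip(w, xs))

xs = [0.0, 1.0, 2.0, 3.0]      # sample positions (1D stand-in for angles)
ys = [0.0, 0.8, 0.2, 1.0]      # measured values
w = rbf_fit(xs, ys)
```

The interpolant reproduces the data exactly at the sample points, which is the "preserves the integrity of the original data" property; the paper's displacement technique for tracking distribution peaks is beyond this sketch.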
{"title":"Optimal LED selection for multispectral lighting reproduction","authors":"Chloe LeGendre, Xueming Yu, P. Debevec","doi":"10.1145/2945078.2945150","DOIUrl":"https://doi.org/10.1145/2945078.2945150","url":null,"abstract":"We demonstrate the sufficiency of using as few as five LEDs of distinct spectra for multispectral lighting reproduction and solve for the optimal set of five from 11 such commercially available LEDs. We leverage published spectral reflectance, illuminant, and camera spectral sensitivity datasets to show that two approaches of lighting reproduction, matching illuminant spectra directly and matching material color appearance observed by one or more cameras or a human observer, yield the same LED selections. Our proposed optimal set of five LEDs includes red, green, and blue with narrow emission spectra, along with white and amber with broader spectra.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129558720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
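Matching a target illuminant spectrum with a weighted mix of LED spectra is, at its core, a least-squares problem. A toy sketch with made-up two-LED spectra (the spectra, bin count, and LED set here are illustrative, not the datasets or the eleven LEDs used in the poster):

```python
def fit_two_leds(s1, s2, target):
    """Least-squares weights w1, w2 minimizing ||w1*s1 + w2*s2 - target||^2,
    via the 2x2 normal equations in closed form."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    a11, a12, a22 = dot(s1, s1), dot(s1, s2), dot(s2, s2)
    b1, b2 = dot(s1, target), dot(s2, target)
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det

# Illustrative 5-bin "spectra": a blue-ish LED, a red-ish LED, a warm target.
blue = [1.0, 0.6, 0.2, 0.1, 0.0]
red = [0.0, 0.1, 0.2, 0.6, 1.0]
target = [0.5, 0.35, 0.2, 0.35, 0.5]
w_blue, w_red = fit_two_leds(blue, red, target)
```

Selecting the optimal five of eleven LEDs then amounts to running such a fit for every candidate subset and keeping the one with the lowest residual over the reflectance and illuminant datasets.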
{"title":"3D facial geometry reconstruction using patch database","authors":"Tsukasa Nozawa, Takuya Kato, Pavel A. Savkin, N. Nozawa, S. Morishima","doi":"10.1145/2945078.2945102","DOIUrl":"https://doi.org/10.1145/2945078.2945102","url":null,"abstract":"3D facial shape reconstruction in in-the-wild environments is an important research task in the fields of CG and CV, because it can be applied to many products, such as 3DCG video games and face recognition. One of the most popular techniques is the 3D model-based approach, which approximates a facial shape using a 3D face model computed by principal component analysis. [Blanz and Vetter 1999] performed 3D facial reconstruction by fitting facial feature points detected in a single input image to vertices of a template 3D face model named the 3D Morphable Model. This method can reconstruct a facial shape from a variety of images with different lighting and face orientations, as long as facial feature points can be detected. However, the representation quality of the result depends on the resolution of the 3D model.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"8 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129711362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
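The PCA-based face model this abstract refers to is a linear combination of a mean shape and basis shapes; schematically (an illustration of the general 3D Morphable Model idea, not the poster's patch-based method):

```python
def morphable_shape(mean, basis, coeffs):
    """Reconstruct a face shape as mean + sum_i coeffs[i] * basis[i],
    where each shape is a flat list of vertex coordinates and the
    basis vectors come from PCA over a set of scanned faces."""
    shape = list(mean)
    for c, b in zip(coeffs, basis):
        for k in range(len(shape)):
            shape[k] += c * b[k]
    return shape

# Tiny example: a 1-vertex "face" with two PCA basis directions.
mean = [0.0, 0.0, 0.0]
basis = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
shape = morphable_shape(mean, basis, [0.5, -0.25])
```

Fitting then searches for the coefficients that best reproject onto the detected facial feature points; the abstract's point is that the reconstruction can never be finer than the vertex resolution of this model.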
{"title":"Intuitive editing of material appearance","authors":"A. Serrano, D. Gutierrez, K. Myszkowski, H. Seidel, B. Masiá","doi":"10.1145/2945078.2945141","DOIUrl":"https://doi.org/10.1145/2945078.2945141","url":null,"abstract":"Many different techniques for measuring material appearance have been proposed in the last few years. These have produced large public datasets, which have been used for accurate, data-driven appearance modeling. However, although these datasets have allowed us to reach an unprecedented level of realism in visual appearance, editing the captured data remains a challenge. In this work, we develop a novel methodology for intuitive and predictable editing of captured BRDF data, which allows for artistic creation of plausible material appearances, bypassing the difficulty of acquiring novel samples. We synthesize novel materials, and extend the existing MERL dataset [Matusik et al. 2003] up to 400 mathematically valid BRDFs. We design a large-scale experiment with 400 participants, gathering 56000 ratings about the perceptual attributes that best describe our extended dataset of materials. Using these ratings, we build and train networks of radial basis functions to act as functionals that map the high-level perceptual attributes to an underlying PCA-based representation of BRDFs. We show how our approach allows for intuitive edits of a wide range of visual properties, and demonstrate through a user study that our functionals are excellent predictors of the perceived attributes of appearance, enabling predictable editing with our framework.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"135 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132610848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A fiber-level model for predictive cloth rendering","authors":"Carlos Aliaga, C. Castillo, D. Gutierrez, M. Otaduy, Jorge López-Moreno, A. Jarabo","doi":"10.1145/2945078.2945144","DOIUrl":"https://doi.org/10.1145/2945078.2945144","url":null,"abstract":"Rendering realistic fabrics is an active research area with many applications in computer graphics and other fields such as textile design. Reproducing the appearance of cloth remains challenging due to the micro-structures found in textiles and the complex light scattering patterns exhibited at such scales. Recent approaches have reached very realistic results, either by directly modeling the arrangement of the fibers [Schröder et al. 2011] or by capturing the structure of small pieces of cloth using computed tomography (CT) scanners [Zhao et al. 2011]. However, there is still a need for predictive modeling of cloth appearance; existing methods either rely on manually set parameter values or use photographs of real pieces of cloth to guide appearance-matching algorithms, often assuming simplifications, such as circular or elliptical cross sections or a homogeneous volume density, that lead to very different appearances.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133875686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}