{"title":"Face texture synthesis from multiple images via sparse and dense correspondence","authors":"Shugo Yamaguchi, S. Morishima","doi":"10.1145/3005358.3005386","DOIUrl":"https://doi.org/10.1145/3005358.3005386","url":null,"abstract":"We have a desire to edit images for various purposes such as art, entertainment, and film production so texture synthesis methods have been proposed. Especially, PatchMatch algorithm [Barnes et al. 2009] enabled us to easily use many image editing tools. However, these tools are applied to one image. If we can automatically synthesize from various examples, we can create new and higher quality images. Visio-lization [Mohammed et al. 2009] generated average face by synthesis of face image database. However, the synthesis was applied block-wise so there were artifacts on the result and free form features of source images such as wrinkles could not be preserved. We proposed a new synthesis method for multiple images. We applied sparse and dense nearest neighbor search so that we can preserve both input and source database image features. Our method allows us to create a novel image from a number of examples.","PeriodicalId":242138,"journal":{"name":"SIGGRAPH ASIA 2016 Technical Briefs","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129843368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient surface diffraction renderings with Chebyshev approximations","authors":"D. S. Dhillon, A. Ghosh","doi":"10.1145/3005358.3005376","DOIUrl":"https://doi.org/10.1145/3005358.3005376","url":null,"abstract":"We propose an efficient method for reproducing diffraction colours on natural surfaces with complex nanostructures that can be represented as height-fields. Our method employs Chebyshev approximations to accurately model view-dependent iridescences for such a surface into its spectral bidirectional reflectance distribution function (BRDF). As main contribution, our method significantly reduces the runtime memory footprint from precomputed lookup tables without compromising photorealism. Our accuracy is comparable with current state-of-the-art methods and better at equal memory usage. Furthermore, a Chebyshev polynomial basis set with its near-best approximation properties allow for scalable memory-vs-performance trade-offs. We show realistic diffraction effects with just two lookup textures for natural, quasi-periodic surface nanostructures. Performance intensive applications like games and VR can benefit from our method, especially for low-end GPU or mobile platforms.","PeriodicalId":242138,"journal":{"name":"SIGGRAPH ASIA 2016 Technical Briefs","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131096186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi bounding volume hierarchies for ray tracing pipelines","authors":"T. Viitanen, M. Koskela, P. Jääskeläinen, J. Takala","doi":"10.1145/3005358.3005384","DOIUrl":"https://doi.org/10.1145/3005358.3005384","url":null,"abstract":"High-performance ray tracing on CPU is now largely based on Multi Bounding Volume Hierarchy (MBVH) trees. We apply MBVH to a fixed-function ray tracing accelerator architecture. According to cycle-level simulations and power analysis, MBVH reduces energy per frame by an average of 24% and improves performance per area by 19% in scenes with incoherent rays, due to its compact memory layout which reduces DRAM traffic. With primary rays, energy efficiency improves by 15% and performance per area by 20%.","PeriodicalId":242138,"journal":{"name":"SIGGRAPH ASIA 2016 Technical Briefs","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133729050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A low cost holographic display","authors":"Mark W. Green","doi":"10.1145/3005358.3005373","DOIUrl":"https://doi.org/10.1145/3005358.3005373","url":null,"abstract":"Holography can be viewed as the ultimate display technology since it correctly duplicates all the cues used by our visual system. In the graphics community this technology has largely been ignored in the past due to its computational cost, but this is changing as more powerful parallel processors are becoming available. One of the main challenges in this area is the lack of a commercially available display device at a reasonable cost that can be used for testing and evaluating algorithms. This paper describes a low cost holographic display device that can easily be constructed from standard parts as a solution to this problem. This paper discusses the design considerations for such a device, its construction and an overview of how holograms can be computed for it. It is our hope that this device will stimulate further research on holography within the computer graphics community.","PeriodicalId":242138,"journal":{"name":"SIGGRAPH ASIA 2016 Technical Briefs","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129863929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Video stitching for handheld inputs via combined video stabilization","authors":"T. Su, Yongwei Nie, Zhensong Zhang, Hanqiu Sun, Guiqing Li","doi":"10.1145/3005358.3005383","DOIUrl":"https://doi.org/10.1145/3005358.3005383","url":null,"abstract":"Stitching videos captured by handheld devices is very useful, but also very challenging due to the heavy and independent shakiness in the videos. In this paper, we propose a hand-taken video stitching method which combines the techniques of video stitching and stabilization together into a unified optimization framework. In this way, our method can compute the most optimal stabilization and stitching results with respect to each other, which outperforms previous methods that take stabilization and stitching as separate operations. Our method is based on the framework of bundled camera paths [Liu et al. 2013]. We present a novel unified camera paths optimization formulation which consists of two stabilization terms and one stitching term. We also present a corresponding iterative solver that finds best stitching and stabilization solutions numerically. We compare our method with previous methods, and the experiments demonstrate the effectiveness of our method.","PeriodicalId":242138,"journal":{"name":"SIGGRAPH ASIA 2016 Technical Briefs","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129233498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Physics based boiling bubble simulation","authors":"Yi Gu, Herbert Yang","doi":"10.1145/3005358.3005385","DOIUrl":"https://doi.org/10.1145/3005358.3005385","url":null,"abstract":"Boiling is a daily observed phenomenon. From the physics point of view, boiling is due to the rapid vaporization of liquid like water. In fact, it consists of several complex processes. First, energy transfer, which mainly includes thermal conduction and convection, occurs between the heat source and water, between water molecules and bubbles, and between water molecules and water molecules. Second, there are multiple phases of boiling, each of which has different characteristics, and hence, makes boiling a complicated physical process. In this paper, we propose a new physics based method for simulating bubbles from boiling water with a Smoothed Particle Hydrodynamics (SPH) solver. The proposed work handles \"vapor\" bubbles rather than \"air\" bubbles. The most important difference is that the former can condense and merge with the surrounding water while the latter cannot. With our model, 5 interesting phenomena of boiling bubbles are introduced which have not been fully addressed in previous works.","PeriodicalId":242138,"journal":{"name":"SIGGRAPH ASIA 2016 Technical Briefs","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124367656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Z2 traversal order for VR stereo rendering on tile-based mobile GPUs","authors":"Jae-Ho Nah, Yeongkyu Lim, Sunho Ki, Chulho Shin","doi":"10.1145/3005358.3005374","DOIUrl":"https://doi.org/10.1145/3005358.3005374","url":null,"abstract":"With increasing demands of virtual reality (VR) applications, efficient VR rendering techniques are becoming essential because VR stereo rendering requires increased computational costs to separately render views for the left and right eyes. To reduce the rendering cost in VR applications, we present a novel traversal order for tile-based mobile GPU architectures, called the Z2 traversal order. In tile-based mobile GPU architectures, a tile traversal order that maximizes spatial locality can increase the GPU cache efficiency. For VR applications, our approach improves the traditional Z-curve order; we render two screen tiles in the left and right views by turns or simultaneously, as a result, we can exploit spatial locality between the two tiles. To evaluate our approach, we conducted a trace-driven hardware simulation using Mesa and a hardware simulator. The experimental results show that the Z2 traversal order can reduce external memory bandwidth requirements and can increase rendering performance.","PeriodicalId":242138,"journal":{"name":"SIGGRAPH ASIA 2016 Technical Briefs","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122545404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rendering kaleidoscopic scenes using orbifold theory","authors":"Francis Williams, E. Zhang","doi":"10.1145/3005358.3005368","DOIUrl":"https://doi.org/10.1145/3005358.3005368","url":null,"abstract":"Kaleidoscopes create fascinating visual effects due to the presence of multiple mirrors placed at carefully designed distances and angles. These effects, such as infinite repeating copies of a single object, are difficult to capture. Moreover, lighting and shadow effects in kaleidoscopic scenes are highly impacted by the interaction between lights and mirrors. Such effects pose challenges to existing rendering techniques such as ray tracing and photon mapping. In this paper, we present a unified framework to render scenes from the perspective of a viewer inside a kaleidoscope based on Orbifold theory, which provides the mathematical foundation to describe the position and orientation of reflected objects (including light sources). Our framework is able to accurately capture the global illumination effects inside a kaleidoscope. We demonstrate the power of our technique with the rendering of a number of scenes including animation.","PeriodicalId":242138,"journal":{"name":"SIGGRAPH ASIA 2016 Technical Briefs","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129881632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Anisotropic surface based deformation","authors":"Matteo Colaianni, C. Siegl, J. Süßmuth, F. Bauer, G. Greiner","doi":"10.1145/3005358.3005361","DOIUrl":"https://doi.org/10.1145/3005358.3005361","url":null,"abstract":"We present a novel approach to mesh deformation that enables simple context sensitive manipulation of 3D geometry. The method is based on locally anisotropic scaling. This allows an intuitive directional modeling within an easy to implement framework. The proposed method ideally complements current intuitive sculpting paradigms by further possibilities of surface based editing without the need of additional host geometries. We also show the anisotropy to be seamlessly transferable to free boundary parameterization methods, which allows to solve the hard problem of flattening compressive garments in the domain of apparel development.","PeriodicalId":242138,"journal":{"name":"SIGGRAPH ASIA 2016 Technical Briefs","volume":"135 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133471254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantically-aware blendshape rigs from facial performance measurements","authors":"Wan-Chun Ma, M. Lamarre, Etienne Danvoye, Chongyang Ma, Manny Ko, Javier von der Pahlen, Cyrus A. Wilson","doi":"10.1145/3005358.3005378","DOIUrl":"https://doi.org/10.1145/3005358.3005378","url":null,"abstract":"We present a framework for automatically generating personalized blendshapes from actor performance measurements, while preserving the semantics of a template facial animation rig. Firstly, we capture various poses from the subject with our photogrammetry apparatus. The 3D reconstruction from each pose is then corresponded by an image-based tracking algorithm. The core of our framework is an optimization algorithm which iteratively refines the initial estimation of the blendshapes such that they can fit the performance measurements better. This framework facilitates creation of an ensemble of realistic digital-double face rigs for each individual with consistent behavior across the character set.","PeriodicalId":242138,"journal":{"name":"SIGGRAPH ASIA 2016 Technical Briefs","volume":"10 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114018470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}