{"title":"Peripheral Vision in Simulated Driving: Comparing CAVE and Head-mounted Display","authors":"Tana Tanoi, N. Dodgson","doi":"10.2312/PG.20211392","DOIUrl":"https://doi.org/10.2312/PG.20211392","url":null,"abstract":"","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"515 1","pages":"67-68"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86867063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"View-Dependent Impostors for Architectural Shape Grammars","authors":"Chao Jia, Moritz Roth, B. Kerbl, M. Wimmer","doi":"10.2312/pg.20211390","DOIUrl":"https://doi.org/10.2312/pg.20211390","url":null,"abstract":"Procedural generation has become a key component in satisfying a growing demand for ever-larger, highly detailed geometry in realistic, open-world games and simulations. In this paper, we present our work towards a new level-of-detail mechanism for procedural geometry shape grammars. Our approach automatically identifies and adds suitable surrogate rules to a shape grammar’s derivation tree. Opportunities for surrogates are detected in a dedicated pre-processing stage. Where suitable, textured impostors are then used for rendering based on the current viewpoint at runtime. Our proposed methods generate simplified geometry with superior visual quality to the state-of-the-art and roughly the same rendering performance.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"188 1","pages":"63-64"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83052156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CSLF: Cube Surface Light Field and Its Sampling, Compression, Real-Time Rendering","authors":"Xiao Ai, Yigang Wang, Simin Kou","doi":"10.2312/pg.20211381","DOIUrl":"https://doi.org/10.2312/pg.20211381","url":null,"abstract":"Light field is gaining both research and commercial interests since it has the potential to produce view-dependent and photorealistic effects for virtual and augmented reality. In this paper, we further explore the light field and presents a novel parameterization that permits 1) effectively sampling the light field of an object with unknown geometry, 2) efficiently compressing and 3) real-time rendering from arbitrary viewpoints. A novel, key element in our parameterization is that we use the intersections of the light rays and a general cube surface to parameterize the four-dimensional light field, constructing the cube surface light field (CSLF). We resolve the huge data amount problem in CSLF by uniformly decimating the viewpoint space to form a set of key views which are then converted into a pseudo video sequence and compressed using the high efficiency video coding encoder. To render the CSLF, we employ a ray casting approach and draw a polygonal mesh, enabling real-time generating arbitrary views from the outside of the cube surface. We build the CSLF datasets and extensively evaluate our parameterization from the sampling, compression and rendering. Results show that the cube surface parameterization can simultaneously achieve the above three characteristics, indicating the potentiality in practical virtual and augmented reality. CCS Concepts • Computing methodologies → Image-based rendering; Ray tracing; Image compression;","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"62 1","pages":"13-18"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79650445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User-centred Depth Estimation Benchmarking for VR Content Creation from Single Images","authors":"Anthony Dickson, Alistair Knott, S. Zollmann","doi":"10.2312/pg.20211394","DOIUrl":"https://doi.org/10.2312/pg.20211394","url":null,"abstract":"The capture and creation of 3D content from a device equipped with just a single RGB camera has a wide range of applications ranging from 3D photographs and panoramas to 3D video. Many of these methods rely on depth estimation models to provide the necessary 3D data, mainly neural network models. However, the metrics used to evaluate these models can be difficult to interpret and to relate to the quality of 3D/VR content derived from these models. In this work, we explore the relationship between the widely used depth estimation metrics, image similarly metrics applied to synthesised novel viewpoints, and user perception of quality and similarity on these novel viewpoints. Our results indicate that the standard metrics are indeed a good indicator of 3D quality, and that they correlate with human judgements and other metrics that are designed to follow human judgements.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"10 1","pages":"71-72"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83508082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hierarchical Link and Code: Efficient Similarity Search for Billion-Scale Image Sets","authors":"Kaixiang Yang, Hongya Wang, Ming-han Du, Zhizheng Wang, Zongyuan Tan, Yingyuan Xiao","doi":"10.2312/PG.20211397","DOIUrl":"https://doi.org/10.2312/PG.20211397","url":null,"abstract":"Similarity search is an indispensable component in many computer vision applications. To index billions of images on a single commodity server, Douze et al. introduced L&C that works on operating points considering 64–128 bytes per vector. While the idea is inspiring, we observe that L&C still suffers the accuracy saturation problem, which it is aimed to solve. To this end, we propose a simple yet effective two-layer graph index structure, together with dual residual encoding, to attain higher accuracy. Particularly, we partition vectors into multiple clusters and build the top-layer graph using the corresponding centroids. For each cluster, a subgraph is created with compact codes of the first-level vector residuals. Such an index structure provides better graph search precision as well as saves quite a few bytes for compression. We employ the second-level residual quantization to re-rank the candidates obtained through graph traversal, which is more efficient than regression-from-neighbors adopted by L&C. Comprehensive experiments show that our proposal obtains over 30% higher recall@1 than the state-of-thearts, and achieves up to 7.7x and 6.1x speedup over L&C on Deep1B and Sift1B, respectively. CCS Concepts • Information systems → Top-k retrieval in databases;","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"76 1","pages":"81-86"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89688741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Temporally Stable Content-Adaptive and Spatio-Temporal Shading Rate Assignment for Real-Time Applications","authors":"S. Stappen, Johannes Unterguggenberger, B. Kerbl, M. Wimmer","doi":"10.2312/PG.20211391","DOIUrl":"https://doi.org/10.2312/PG.20211391","url":null,"abstract":"We propose two novel methods to improve the efficiency and quality of real-time rendering applications: Texel differential-based content-adaptive shading (TDCAS) and spatio-temporally filtered adaptive shading (STeFAS). Utilizing Variable Rate Shading (VRS)—a hardware feature introduced with NVIDIA’s Turing micro-architecture—and properties derived during rendering or Temporal Anti-Aliasing (TAA), our techniques adapt the resolution to improve the performance and quality of real-time applications. VRS enables different shading resolution for different regions of the screen during a single render pass. In contrast to other techniques, TDCAS and STeFAS have very little overhead for computing the shading rate. STeFAS enables up to 4x higher rendering resolutions for similar frame rates, or a performance increase of 4 × at the same resolution.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"34 1","pages":"65-66"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88463712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neural Screen Space Rendering of Direct Illumination","authors":"C. Suppan, Andrew Chalmers, Junhong Zhao, A. Doronin, Taehyun Rhee","doi":"10.2312/pg.20211385","DOIUrl":"https://doi.org/10.2312/pg.20211385","url":null,"abstract":"Neural rendering is a class of methods that use deep learning to produce novel images of scenes from more limited information than traditional rendering methods. This is useful for information scarce applications like mixed reality or semantic photo synthesis but comes at the cost of control over the final appearance. We introduce the Neural Direct-illumination Renderer (NDR), a neural screen space renderer capable of rendering direct-illumination images of any geometry, with opaque materials, under distant illuminant. The NDR uses screen space buffers describing material, geometry, and illumination as inputs to provide direct control over the output. We introduce the use of intrinsic image decomposition to allow a Convolutional Neural Network (CNN) to learn a mapping from a large number of pixel buffers to rendered images. The NDR predicts shading maps, which are subsequently combined with albedo maps to create a rendered image. We show that the NDR produces plausible images that can be edited by modifying the input maps and marginally outperforms the state of the art while also providing more functionality. CCS Concepts • Computing methodologies → Rendering; Neural networks; Supervised learning by regression;","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"22 1","pages":"37-42"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90461582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SDALIE-GAN: Structure and Detail Aware GAN for Low-light Image Enhancement","authors":"Youxin Pang, Mengke Yuan, Yuchun Chang, Dong‐Ming Yan","doi":"10.2312/PG.20211393","DOIUrl":"https://doi.org/10.2312/PG.20211393","url":null,"abstract":"We present a GAN-based network architecture for low-light image enhancement, called Structure and Detail Aware Low-light Image Enhancement GAN (SDALIE-GAN), which is trained with unpaired low/normal-light images. Specifically, complementary Structure Aware Generator (SAG) and Detail Aware Generator (DAG) are designed respectively to generate an enhanced low-light image. Besides, intermediate features from SAG and DAG are integrated through guided map supervised feature attention fusion module, and regularizes the generated samples with an appended intensity adjusting module. We demonstrate the advantages of the proposed approach by comparing it with state-of-the-art low-light image enhancement methods. CCS Concepts • Computing methodologies → Computational photography;","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"55 1","pages":"69-70"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85308515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Maximum-Clearance Planar Motion Planning Based on Recent Developments in Computing Minkowski Sums and Voronoi Diagrams","authors":"M. Jung, Myung-Soo Kim","doi":"10.2312/PG.20211382","DOIUrl":"https://doi.org/10.2312/PG.20211382","url":null,"abstract":"We present a maximum-clearance motion planning algorithm for planar geometric models with three degrees of freedom (translation and rotation). This work is based on recent developments in real-time algorithms for computing the Minkowski sums and Voronoi diagrams of planar geometric models bounded by G1-continuous sequences of circular arcs. Compared with their counterparts using polygons with no G1-continuity at vertices, the circle-based approach greatly simplifies the Voronoi structure of the collision-free space for the motion planning in a plane with three degrees of freedom. We demonstrate the effectiveness of the proposed approach by test sets of maximum-clearance motion planning through narrow passages in a plane. CCS Concepts • Computing methodologies → Motion planning; planar geometric models; circle-based algorithm; maximum-clearance; Minkowski sum; Voronoi diagram; medial axis;","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"31 1","pages":"19-24"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78114043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Constraint Synthesis for Parametric CAD","authors":"A. Mathur, D. Zufferey","doi":"10.2312/PG.20211396","DOIUrl":"https://doi.org/10.2312/PG.20211396","url":null,"abstract":"Parametric CAD, in conjunction with 3D-printing, is democratizing design and production pipelines. End-users can easily change parameters of publicly available designs, and 3D-print the customized objects. In research and industry, parametric designs are being used to find optimal, or unique final objects. Unfortunately, for most designs, many combinations of parameter values are invalid. Restricting the parameter space of designs to only the valid configurations is a difficult problem. Most publicly available designs do not contain this information. Using ideas from program analysis, we synthesize constraints on parameters of parametric designs. Some constraints are synthesized statically, by exploiting implicit assumptions of the design process. Several others are inferred by evaluating the design on many different samples, and then constructing and solving hypotheses. Our approach is effective at finding constraints on parameter values for a wide variety of parametric designs, with a very small runtime cost, in the order of seconds. CCS Concepts • Computing methodologies → Shape analysis;","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"919 1","pages":"75-80"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77908509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}