Proceedings. Pacific Conference on Computer Graphics and Applications — Latest Publications

Peripheral Vision in Simulated Driving: Comparing CAVE and Head-mounted Display
Proceedings. Pacific Conference on Computer Graphics and Applications, Pub Date: 2021-01-01, DOI: 10.2312/PG.20211392
Tana Tanoi, N. Dodgson
Pages: 67-68
Citations: 0
View-Dependent Impostors for Architectural Shape Grammars
Proceedings. Pacific Conference on Computer Graphics and Applications, Pub Date: 2021-01-01, DOI: 10.2312/pg.20211390
Chao Jia, Moritz Roth, B. Kerbl, M. Wimmer
Abstract: Procedural generation has become a key component in satisfying a growing demand for ever-larger, highly detailed geometry in realistic, open-world games and simulations. In this paper, we present our work towards a new level-of-detail mechanism for procedural geometry shape grammars. Our approach automatically identifies and adds suitable surrogate rules to a shape grammar's derivation tree. Opportunities for surrogates are detected in a dedicated pre-processing stage. Where suitable, textured impostors are then used for rendering based on the current viewpoint at runtime. Our proposed methods generate simplified geometry with superior visual quality to the state of the art and roughly the same rendering performance.
Pages: 63-64
Citations: 1
CSLF: Cube Surface Light Field and Its Sampling, Compression, Real-Time Rendering
Proceedings. Pacific Conference on Computer Graphics and Applications, Pub Date: 2021-01-01, DOI: 10.2312/pg.20211381
Xiao Ai, Yigang Wang, Simin Kou
Abstract: The light field is gaining both research and commercial interest since it has the potential to produce view-dependent and photorealistic effects for virtual and augmented reality. In this paper, we further explore the light field and present a novel parameterization that permits 1) effective sampling of the light field of an object with unknown geometry, 2) efficient compression, and 3) real-time rendering from arbitrary viewpoints. A novel, key element in our parameterization is that we use the intersections of the light rays with a general cube surface to parameterize the four-dimensional light field, constructing the cube surface light field (CSLF). We resolve the huge data volume problem in CSLF by uniformly decimating the viewpoint space to form a set of key views, which are then converted into a pseudo video sequence and compressed using a High Efficiency Video Coding encoder. To render the CSLF, we employ a ray casting approach and draw a polygonal mesh, enabling real-time generation of arbitrary views from outside the cube surface. We build CSLF datasets and extensively evaluate our parameterization in terms of sampling, compression and rendering. Results show that the cube surface parameterization can simultaneously achieve the above three characteristics, indicating its potential for practical virtual and augmented reality. CCS Concepts: • Computing methodologies → Image-based rendering; Ray tracing; Image compression
Pages: 13-18
Citations: 0
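The CSLF abstract parameterizes each light ray by its intersection with a cube surface. As an illustration only — not the authors' code, and with all function names hypothetical — a minimal sketch of such a ray–cube mapping could look like this:

```python
import numpy as np

def ray_cube_intersection(origin, direction, half=1.0):
    """Slab-method intersection of a ray with the axis-aligned cube
    [-half, half]^3; returns the entry point, or None on a miss."""
    origin = np.asarray(origin, float)
    direction = np.asarray(direction, float)
    inv = 1.0 / direction  # assumes no exactly-zero components, for brevity
    t1 = (-half - origin) * inv
    t2 = (half - origin) * inv
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    if t_near > t_far or t_far < 0:
        return None
    return origin + t_near * direction

def cube_surface_params(point, half=1.0):
    """Map a point on the cube surface to (face, u, v): face encodes the
    dominant axis and its sign (0..5); (u, v) are the two remaining
    coordinates, normalized to [-1, 1] within that face."""
    point = np.asarray(point, float)
    axis = int(np.argmax(np.abs(point)))
    face = 2 * axis + (0 if point[axis] > 0 else 1)
    uv = np.delete(point, axis) / half
    return face, float(uv[0]), float(uv[1])
```

A viewpoint outside the cube can then index the 4D light field by the (face, u, v) of each viewing ray's entry point plus the ray direction.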
User-centred Depth Estimation Benchmarking for VR Content Creation from Single Images
Proceedings. Pacific Conference on Computer Graphics and Applications, Pub Date: 2021-01-01, DOI: 10.2312/pg.20211394
Anthony Dickson, Alistair Knott, S. Zollmann
Abstract: The capture and creation of 3D content from a device equipped with just a single RGB camera has a wide range of applications, ranging from 3D photographs and panoramas to 3D video. Many of these methods rely on depth estimation models, mainly neural network models, to provide the necessary 3D data. However, the metrics used to evaluate these models can be difficult to interpret and to relate to the quality of 3D/VR content derived from them. In this work, we explore the relationship between the widely used depth estimation metrics, image similarity metrics applied to synthesised novel viewpoints, and user perception of quality and similarity on these novel viewpoints. Our results indicate that the standard metrics are indeed a good indicator of 3D quality, and that they correlate with human judgements and with other metrics that are designed to follow human judgements.
Pages: 71-72
Citations: 0
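The "widely used depth estimation metrics" mentioned in the abstract conventionally include absolute relative error, RMSE, and threshold accuracy. A minimal sketch using those standard definitions (an assumption about which metrics the paper evaluates, not code from it):

```python
import numpy as np

def depth_metrics(pred, gt):
    """Common monocular depth-estimation metrics over valid pixels:
    absolute relative error, RMSE, and threshold accuracy delta < 1.25."""
    pred = np.asarray(pred, float)
    gt = np.asarray(gt, float)
    abs_rel = float(np.mean(np.abs(pred - gt) / gt))          # scale-relative error
    rmse = float(np.sqrt(np.mean((pred - gt) ** 2)))          # absolute error
    ratio = np.maximum(pred / gt, gt / pred)                  # symmetric ratio
    delta1 = float(np.mean(ratio < 1.25))                     # fraction "close enough"
    return {"abs_rel": abs_rel, "rmse": rmse, "delta1": delta1}
```

Perfect predictions give abs_rel = rmse = 0 and delta1 = 1; the paper's contribution is relating such numbers to perceived novel-view quality.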
Hierarchical Link and Code: Efficient Similarity Search for Billion-Scale Image Sets
Proceedings. Pacific Conference on Computer Graphics and Applications, Pub Date: 2021-01-01, DOI: 10.2312/PG.20211397
Kaixiang Yang, Hongya Wang, Ming-han Du, Zhizheng Wang, Zongyuan Tan, Yingyuan Xiao
Abstract: Similarity search is an indispensable component in many computer vision applications. To index billions of images on a single commodity server, Douze et al. introduced L&C, which works at operating points of 64–128 bytes per vector. While the idea is inspiring, we observe that L&C still suffers from the accuracy saturation problem it was designed to solve. To this end, we propose a simple yet effective two-layer graph index structure, together with dual residual encoding, to attain higher accuracy. In particular, we partition vectors into multiple clusters and build the top-layer graph using the corresponding centroids. For each cluster, a subgraph is created with compact codes of the first-level vector residuals. Such an index structure provides better graph search precision and saves quite a few bytes for compression. We employ second-level residual quantization to re-rank the candidates obtained through graph traversal, which is more efficient than the regression-from-neighbors adopted by L&C. Comprehensive experiments show that our proposal obtains over 30% higher recall@1 than the state of the art, and achieves up to 7.7x and 6.1x speedup over L&C on Deep1B and Sift1B, respectively. CCS Concepts: • Information systems → Top-k retrieval in databases
Pages: 81-86
Citations: 1
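The dual residual encoding the abstract describes can be illustrated with a toy two-level codebook: a vector is stored as the id of its nearest coarse centroid plus a quantized first-level residual. This is a hedged sketch of the general idea only — the paper's actual index adds graph structure and far more compact codes:

```python
import numpy as np

def nearest(codebook, x):
    """Index of the codeword closest to x (exhaustive, for illustration)."""
    return int(np.argmin(np.linalg.norm(codebook - x, axis=1)))

def encode(x, coarse, fine):
    """Two-level residual encoding: coarse centroid id, plus the id of the
    fine codeword closest to the first-level residual x - centroid."""
    c = nearest(coarse, x)
    r = x - coarse[c]
    f = nearest(fine, r)
    return c, f

def decode(c, f, coarse, fine):
    """Approximate reconstruction: centroid plus quantized residual."""
    return coarse[c] + fine[f]
```

Re-ranking with a second-level residual quantizer refines exactly this kind of approximation after the graph traversal has produced candidates.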
Temporally Stable Content-Adaptive and Spatio-Temporal Shading Rate Assignment for Real-Time Applications
Proceedings. Pacific Conference on Computer Graphics and Applications, Pub Date: 2021-01-01, DOI: 10.2312/PG.20211391
S. Stappen, Johannes Unterguggenberger, B. Kerbl, M. Wimmer
Abstract: We propose two novel methods to improve the efficiency and quality of real-time rendering applications: texel-differential-based content-adaptive shading (TDCAS) and spatio-temporally filtered adaptive shading (STeFAS). Utilizing Variable Rate Shading (VRS) — a hardware feature introduced with NVIDIA's Turing micro-architecture — and properties derived during rendering or Temporal Anti-Aliasing (TAA), our techniques adapt the resolution to improve the performance and quality of real-time applications. VRS enables different shading resolutions for different regions of the screen during a single render pass. In contrast to other techniques, TDCAS and STeFAS incur very little overhead for computing the shading rate. STeFAS enables up to 4x higher rendering resolutions at similar frame rates, or a performance increase of 4x at the same resolution.
Pages: 65-66
Citations: 0
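Content-adaptive rate assignment of the kind the abstract names can be caricatured as picking a coarser VRS rate for screen tiles where local luminance variation is low. The sketch below is hypothetical — the thresholds and the decision rule are made up for illustration and are not the authors' method:

```python
import numpy as np

def shading_rate(tile_luma, thresholds=(0.02, 0.08)):
    """Pick a VRS-style shading rate for one screen tile from its maximum
    neighbouring-texel luminance difference: more detail -> finer rate."""
    dx = float(np.abs(np.diff(tile_luma, axis=1)).max()) if tile_luma.shape[1] > 1 else 0.0
    dy = float(np.abs(np.diff(tile_luma, axis=0)).max()) if tile_luma.shape[0] > 1 else 0.0
    d = max(dx, dy)
    if d > thresholds[1]:
        return "1x1"   # full rate: high-frequency content
    if d > thresholds[0]:
        return "2x2"   # one shade per 2x2 texels
    return "4x4"       # coarse rate: flat region
```

A temporally stable variant would additionally filter `d` across frames (as STeFAS does with TAA-derived data) so the chosen rate does not flicker.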
Neural Screen Space Rendering of Direct Illumination
Proceedings. Pacific Conference on Computer Graphics and Applications, Pub Date: 2021-01-01, DOI: 10.2312/pg.20211385
C. Suppan, Andrew Chalmers, Junhong Zhao, A. Doronin, Taehyun Rhee
Abstract: Neural rendering is a class of methods that use deep learning to produce novel images of scenes from more limited information than traditional rendering methods require. This is useful for information-scarce applications like mixed reality or semantic photo synthesis, but comes at the cost of control over the final appearance. We introduce the Neural Direct-illumination Renderer (NDR), a neural screen space renderer capable of rendering direct-illumination images of any geometry with opaque materials under distant illumination. The NDR uses screen space buffers describing material, geometry, and illumination as inputs, providing direct control over the output. We introduce the use of intrinsic image decomposition to allow a Convolutional Neural Network (CNN) to learn a mapping from a large number of pixel buffers to rendered images. The NDR predicts shading maps, which are subsequently combined with albedo maps to create a rendered image. We show that the NDR produces plausible images that can be edited by modifying the input maps, and that it marginally outperforms the state of the art while providing more functionality. CCS Concepts: • Computing methodologies → Rendering; Neural networks; Supervised learning by regression
Pages: 37-42
Citations: 2
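The recombination step the abstract describes — predicted shading maps combined with albedo maps — amounts to a per-pixel intrinsic-image product. A minimal sketch of that step (the function name and clipping choice are assumptions, not the paper's code):

```python
import numpy as np

def compose_image(albedo, shading):
    """Recombine an intrinsic decomposition: per-pixel product of the
    albedo map and a (predicted) shading map, clipped to [0, 1]."""
    return np.clip(np.asarray(albedo, float) * np.asarray(shading, float), 0.0, 1.0)
```

Editing either input map before recombination is what gives the NDR its controllability: changing albedo recolours surfaces, changing shading relights them.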
SDALIE-GAN: Structure and Detail Aware GAN for Low-light Image Enhancement
Proceedings. Pacific Conference on Computer Graphics and Applications, Pub Date: 2021-01-01, DOI: 10.2312/PG.20211393
Youxin Pang, Mengke Yuan, Yuchun Chang, Dong‐Ming Yan
Abstract: We present a GAN-based network architecture for low-light image enhancement, called the Structure and Detail Aware Low-light Image Enhancement GAN (SDALIE-GAN), which is trained with unpaired low/normal-light images. Specifically, a complementary Structure Aware Generator (SAG) and Detail Aware Generator (DAG) are designed to jointly generate an enhanced low-light image. Intermediate features from the SAG and DAG are integrated through a guided-map-supervised feature attention fusion module, and an appended intensity-adjusting module regularizes the generated samples. We demonstrate the advantages of the proposed approach by comparing it with state-of-the-art low-light image enhancement methods. CCS Concepts: • Computing methodologies → Computational photography
Pages: 69-70
Citations: 0
Maximum-Clearance Planar Motion Planning Based on Recent Developments in Computing Minkowski Sums and Voronoi Diagrams
Proceedings. Pacific Conference on Computer Graphics and Applications, Pub Date: 2021-01-01, DOI: 10.2312/PG.20211382
M. Jung, Myung-Soo Kim
Abstract: We present a maximum-clearance motion planning algorithm for planar geometric models with three degrees of freedom (translation and rotation). This work is based on recent developments in real-time algorithms for computing the Minkowski sums and Voronoi diagrams of planar geometric models bounded by G1-continuous sequences of circular arcs. Compared with their counterparts using polygons with no G1-continuity at vertices, the circle-based approach greatly simplifies the Voronoi structure of the collision-free space for motion planning in a plane with three degrees of freedom. We demonstrate the effectiveness of the proposed approach on test sets of maximum-clearance motion planning through narrow passages in a plane. CCS Concepts: • Computing methodologies → Motion planning; planar geometric models; circle-based algorithm; maximum-clearance; Minkowski sum; Voronoi diagram; medial axis
Pages: 19-24
Citations: 0
Constraint Synthesis for Parametric CAD
Proceedings. Pacific Conference on Computer Graphics and Applications, Pub Date: 2021-01-01, DOI: 10.2312/PG.20211396
A. Mathur, D. Zufferey
Abstract: Parametric CAD, in conjunction with 3D printing, is democratizing design and production pipelines. End-users can easily change the parameters of publicly available designs and 3D-print the customized objects. In research and industry, parametric designs are used to find optimal or unique final objects. Unfortunately, for most designs, many combinations of parameter values are invalid, and restricting the parameter space to only the valid configurations is a difficult problem. Most publicly available designs do not contain this information. Using ideas from program analysis, we synthesize constraints on the parameters of parametric designs. Some constraints are synthesized statically, by exploiting implicit assumptions of the design process. Several others are inferred by evaluating the design on many different samples, and then constructing and solving hypotheses. Our approach is effective at finding constraints on parameter values for a wide variety of parametric designs, with a very small runtime cost, on the order of seconds. CCS Concepts: • Computing methodologies → Shape analysis
Pages: 75-80
Citations: 1
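The sampling-and-hypothesis stage the abstract sketches can be illustrated by testing candidate constraints against the valid samples. The toy below only hypothesizes pairwise `p_i <= p_j` constraints — an assumption for illustration; the actual system handles much richer constraint forms:

```python
import itertools

def synthesize_constraints(samples, is_valid, params):
    """Hypothesize simple pairwise constraints (p_i <= p_j) and keep those
    that hold for every valid sample of the design's parameter space."""
    valid = [s for s in samples if is_valid(s)]
    constraints = []
    for a, b in itertools.permutations(params, 2):
        if valid and all(s[a] <= s[b] for s in valid):
            constraints.append(f"{a} <= {b}")
    return constraints
```

Here `is_valid` stands in for actually evaluating the CAD design (e.g., checking that the geometry is well-formed); surviving hypotheses become the synthesized parameter constraints.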