Proceedings. Pacific Conference on Computer Graphics and Applications: Latest Publications

Aesthetic Enhancement via Color Area and Location Awareness
Proceedings. Pacific Conference on Computer Graphics and Applications. Pub Date: 2022-01-01. DOI: 10.2312/pg.20221247
Bailin Yang, Qingxu Wang, Frederick W. B. Li, Xiaohui Liang, T. Wei, Changrui Zhu
{"title":"Aesthetic Enhancement via Color Area and Location Awareness","authors":"Bailin Yang, Qingxu Wang, Frederick W. B. Li, Xiaohui Liang, T. Wei, Changrui Zhu","doi":"10.2312/pg.20221247","DOIUrl":"https://doi.org/10.2312/pg.20221247","url":null,"abstract":"Choosing a suitable color palette can typically improve image aesthetic, where a naive way is choosing harmonious colors from some pre-defined color combinations in color wheels. However, color palettes only consider the usage of color types without specifying their amount in an image. Also, it is still challenging to automatically assign individual palette colors to suitable image regions for maximizing image aesthetic quality. Motivated by these, we propose to construct a contribution-aware color palette from images with high aesthetic quality, enabling color transfer by matching the coloring and regional characteristics of an input image. We hence exploit public image datasets, extracting color composition and embedded color contribution features from aesthetic images to generate our proposed color palettes. We consider both image area ratio and image location as the color contribution features to extract. We have conducted quantitative experiments to demonstrate that our method outperforms existing methods through SSIM (Structural SIMilarity) and PSNR (Peak Signal to Noise Ratio) for objective image quality measurement and no-reference image assessment (NIMA) for image aesthetic scoring.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"30 1","pages":"51-56"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79168362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
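The listing ships no code; the following is a minimal sketch of how the two contribution features named in the abstract (image area ratio and image location) might be attached to a palette, assuming a plain k-means color quantization. The function name, the choice of k, and the clustering method are illustrative assumptions, not the authors' method.

```python
import numpy as np

def palette_with_contributions(img, k=5, iters=20, seed=0):
    """Cluster pixel colors and attach the two contribution features:
    area ratio (fraction of pixels) and mean normalized (x, y) location.
    Illustrative sketch; the paper's actual extraction is not specified."""
    h, w, _ = img.shape
    pixels = img.reshape(-1, 3).astype(np.float64)

    # Plain k-means in RGB space.
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            members = pixels[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)

    # Contribution features per palette color.
    ys, xs = np.divmod(np.arange(h * w), w)
    palette = []
    for c in range(k):
        mask = labels == c
        loc = (xs[mask].mean() / w, ys[mask].mean() / h) if mask.any() else (0.5, 0.5)
        palette.append({"color": centers[c], "area": mask.mean(), "location": loc})
    return palette

# Usage on a random test image:
img = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)
for entry in palette_with_contributions(img):
    print(np.round(entry["color"]), round(entry["area"], 3), np.round(entry["location"], 2))
```

Each palette entry then carries enough information to match a palette color to an input region of similar size and position during color transfer.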
Improving View Independent Rendering for Multiview Effects
Proceedings. Pacific Conference on Computer Graphics and Applications. Pub Date: 2022-01-01. DOI: 10.2312/pg.20221244
Ajinkya Gavane, B. Watson
{"title":"Improving View Independent Rendering for Multiview Effects","authors":"Ajinkya Gavane, B. Watson","doi":"10.2312/pg.20221244","DOIUrl":"https://doi.org/10.2312/pg.20221244","url":null,"abstract":"This paper describes improvements to view independent rendering (VIR) that make it much more useful for multiview effects. Improved VIR’s (iVIR’s) soft shadows are nearly identical in quality to VIR’s and produced with comparable speed (several times faster than multipass rendering), even when using a simpler bufferless implementation that does not risk overflow. iVIR’s omnidirectional shadow results are still better, often nearly twice as fast as VIR’s, even when bufferless. Most impressively, iVIR enables complex environment mapping in real time, producing high-quality reflections up to an order of magnitude faster than VIR, and 2-4 times faster than multipass rendering. CCS Concepts • Computing methodologies → Rendering; Graphics processors; Point-based models;","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"252 1","pages":"35-41"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83494668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
An Interactive Modeling System of Japanese Castles with Decorative Objects
Proceedings. Pacific Conference on Computer Graphics and Applications. Pub Date: 2022-01-01. DOI: 10.2312/pg.20221240
S. Umeyama, Y. Dobashi
{"title":"An Interactive Modeling System of Japanese Castles with Decorative Objects","authors":"S. Umeyama, Y. Dobashi","doi":"10.2312/pg.20221240","DOIUrl":"https://doi.org/10.2312/pg.20221240","url":null,"abstract":"We present an interactive modeling system for Japanese castles. We develop an user interface that can generate the fundamental structure of the castle tower consisting of stone walls, turrets, and roofs. By clicking on the screen with a mouse, relevant parameters for the fundamental structure are automatically calculated to generate 3D models of Japanese-style castles. We use characteristic curves that often appear in ancient Japanese architecture for the realistic modeling of the castles.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"400 1","pages":"15-16"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84846114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
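The abstract does not give the characteristic curves mathematically. As one plausible reading, this sketch samples a concave quadratic Bezier between an eave point and a ridge point, with a hypothetical "sag" parameter pulling the profile below the straight chord, roughly in the spirit of traditional Japanese roof lines; the curve family and parameter are assumptions, not the paper's formulation.

```python
import numpy as np

def roof_profile(p0, p2, sag=0.35, n=32):
    """Sample a concave roof-edge curve from eave point p0 to ridge point p2
    as a quadratic Bezier; 'sag' drops the control point below the chord."""
    p0, p2 = np.asarray(p0, float), np.asarray(p2, float)
    p1 = 0.5 * (p0 + p2) - np.array([0.0, sag * np.linalg.norm(p2 - p0)])
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

# One roof edge from eave (0, 0) to ridge (4, 3), sampled at 5 points:
print(roof_profile((0.0, 0.0), (4.0, 3.0), sag=0.3, n=5))
```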
DARC: A Visual Analytics System for Multivariate Applicant Data Aggregation, Reasoning and Comparison
Proceedings. Pacific Conference on Computer Graphics and Applications. Pub Date: 2022-01-01. DOI: 10.2312/pg.20221248
Yihan Hou, Yu Liu, Heming Wang, Zhichao Zhang, Yue-shan Li, Hai-Ning Liang, Lingyun Yu
{"title":"DARC: A Visual Analytics System for Multivariate Applicant Data Aggregation, Reasoning and Comparison","authors":"Yihan Hou, Yu Liu, Heming Wang, Zhichao Zhang, Yue-shan Li, Hai-Ning Liang, Lingyun Yu","doi":"10.2312/pg.20221248","DOIUrl":"https://doi.org/10.2312/pg.20221248","url":null,"abstract":"People often make decisions based on their comprehensive understanding of various materials, judgement of reasons, and comparison among choices. For instance, when hiring committees review multivariate applicant data, they need to consider and compare different aspects of the applicants’ materials. However, the amount and complexity of multivariate data increase the difficulty to analyze the data, extract the most salient information, and then rapidly form opinions based on the extracted information. Thus, a fast and comprehensive understanding of multivariate data sets is a pressing need in many fields, such as business and education. In this work, we had in-depth interviews with stakeholders and characterized user requirements involved in data-driven decision making in reviewing school applications. Based on these requirements, we propose DARC, a visual analytics system for facilitating decision making on multivariate applicant data. Through the system, users are supported to gain insights of the multivariate data, picture an overview of all data cases, and retrieve original data in a quick and intuitive manner. The effectiveness of DARC is validated through observational user evaluations and interviews.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"12 4 1","pages":"57-62"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83811422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Real-time Content Projection onto a Tunnel from a Moving Subway Train
Proceedings. Pacific Conference on Computer Graphics and Applications. Pub Date: 2021-01-01. DOI: 10.2312/PG.20211398
Jaedong Kim, Haegwang Eom, Jihwan Kim, Younghui Kim, Jun-yong Noh
{"title":"Real-time Content Projection onto a Tunnel from a Moving Subway Train","authors":"Jaedong Kim, Haegwang Eom, Jihwan Kim, Younghui Kim, Jun-yong Noh","doi":"10.2312/PG.20211398","DOIUrl":"https://doi.org/10.2312/PG.20211398","url":null,"abstract":"In this study, we present the first actual working system that can project content onto a tunnel wall from a moving subway train so that passengers can enjoy the display of digital content through a train window. To effectively estimate the position of the train in a tunnel, we propose counting sleepers, which are installed at regular interval along the railway, using a distance sensor. The tunnel profile is constructed using pointclouds captured by a depth camera installed next to the projector. The tunnel profile is used to identify projectable sections that will not contain too much interference by possible occluders. The tunnel profile is also used to retrieve the depth at a specific location so that a properly warped content can be projected for viewing by passengers through the window when the train is moving at runtime. Here, we show that the proposed system can operate on an actual train. CCS Concepts • Computing methodologies → Mixed / augmented reality;","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"46 1","pages":"87-91"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83243427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
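A minimal sketch of the sleeper-counting idea: each sleeper appears as a dip in a downward-facing distance sensor's readings, and the running count times the regular sleeper spacing gives the train's position along the track. The thresholds, spacing value, and hysteresis scheme below are assumptions for illustration, not the system's calibrated values.

```python
import numpy as np

SLEEPER_SPACING_M = 0.6   # assumed regular interval between sleepers
NEAR, FAR = 0.18, 0.22    # hysteresis thresholds on sensor distance (m)

def track_position(samples):
    """Count sleepers in a stream of distance readings, with hysteresis so
    noise near a single threshold cannot double-count one sleeper."""
    count, over_sleeper = 0, False
    for d in samples:
        if not over_sleeper and d < NEAR:   # sleeper sits closer than ballast
            over_sleeper, count = True, count + 1
        elif over_sleeper and d > FAR:
            over_sleeper = False
    return count * SLEEPER_SPACING_M

# Synthetic signal: ballast at ~0.25 m with dips to 0.15 m at three sleepers.
sig = np.full(500, 0.25) + 0.005 * np.random.randn(500)
sig[50:60] = sig[200:210] = sig[350:360] = 0.15
print(track_position(sig), "m travelled")   # 3 sleepers -> 1.8 m
```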
GANST: Gradient-aware Arbitrary Neural Style Transfer
Proceedings. Pacific Conference on Computer Graphics and Applications. Pub Date: 2021-01-01. DOI: 10.2312/PG.20211399
Haichao Zhu
{"title":"GANST: Gradient-aware Arbitrary Neural Style Transfer","authors":"Haichao Zhu","doi":"10.2312/PG.20211399","DOIUrl":"https://doi.org/10.2312/PG.20211399","url":null,"abstract":"","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"29 1","pages":"93-98"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89830249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Volumetric Video Streaming Data Reduction Method Using Front-mesh 3D Data
Proceedings. Pacific Conference on Computer Graphics and Applications. Pub Date: 2021-01-01. DOI: 10.2312/pg.20211395
X. Zhao, T. Okuyama
{"title":"Volumetric Video Streaming Data Reduction Method Using Front-mesh 3D Data","authors":"X. Zhao, T. Okuyama","doi":"10.2312/pg.20211395","DOIUrl":"https://doi.org/10.2312/pg.20211395","url":null,"abstract":"Volumetric video contents are attracting much attention across various industries for their six-degrees-of-freedom (6DoF) viewing experience. However, in terms of streaming, volumetric video contents still present challenges such as high data volume and bandwidth consumption, which results in high stress on the network. To solve this issue, we propose a method using frontmesh 3D data to reduce the data size without affecting the visual quality much from a user’s perspective. The proposed method also reduces decoding and import time on the client side, which enables faster playback of 3D data. We evaluated our method in terms of data reduction and computation complexity and conducted a qualitative analysis by comparing rendering results with reference data at different diagonal angles. Our method successfully reduces data volume and computation complexity with minimal visual quality loss. CCS Concepts • Information systems → Multimedia streaming; • Computing methodologies → Image compression;","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"65 1","pages":"73-74"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79413383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
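The abstract does not define how the front mesh is constructed. One plausible reading is that only viewer-facing triangles are kept, i.e., back-face culling applied once at encode time, which is what this sketch implements; it is not necessarily the authors' construction.

```python
import numpy as np

def front_mesh(vertices, faces, view_dir):
    """Keep only triangles whose geometric normal faces the viewer.
    vertices: (V, 3) floats; faces: (F, 3) vertex indices;
    view_dir: world-space direction the camera looks along."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    normals = np.cross(v1 - v0, v2 - v0)                # unnormalized face normals
    facing = normals @ np.asarray(view_dir, float) < 0  # opposes the view ray
    return faces[facing]

# A cube viewed along -z keeps only the +z triangles a viewer at +z can see.
verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
faces = np.array([[0, 2, 6], [0, 6, 4],    # two triangles on the z=0 face
                  [1, 5, 7], [1, 7, 3]])   # two triangles on the z=1 face
print(front_mesh(verts, faces, view_dir=[0, 0, -1]))    # -> [[1 5 7] [1 7 3]]
```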
Neural Proxy: Empowering Neural Volume Rendering for Animation
Proceedings. Pacific Conference on Computer Graphics and Applications. Pub Date: 2021-01-01. DOI: 10.2312/pg.20211384
Zackary P. T. Sin, P. H. F. Ng, H. Leong
{"title":"Neural Proxy: Empowering Neural Volume Rendering for Animation","authors":"Zackary P. T. Sin, P. H. F. Ng, H. Leong","doi":"10.2312/pg.20211384","DOIUrl":"https://doi.org/10.2312/pg.20211384","url":null,"abstract":"","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"25 1","pages":"31-36"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85547911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
3D-CariNet: End-to-end 3D Caricature Generation from Natural Face Images with Differentiable Renderer
Proceedings. Pacific Conference on Computer Graphics and Applications. Pub Date: 2021-01-01. DOI: 10.2312/PG.20211387
Meijia Huang, Ju Dai, Junjun Pan, Junxuan Bai, Hong Qin
{"title":"3D-CariNet: End-to-end 3D Caricature Generation from Natural Face Images with Differentiable Renderer","authors":"Meijia Huang, Ju Dai, Junjun Pan, Junxuan Bai, Hong Qin","doi":"10.2312/PG.20211387","DOIUrl":"https://doi.org/10.2312/PG.20211387","url":null,"abstract":"Caricatures are an artistic representation of human faces to express satire and humor. Caricature generation of human faces is a hotspot in CG research. Previous work mainly focuses on 2D caricatures generation from face photos or 3D caricature reconstruction from caricature images. In this paper, we propose a novel end-to-end method to directly generate personalized 3D caricatures from a single natural face image. It can create not only exaggerated geometric shapes, but also heterogeneous texture styles. Firstly, we construct a synthetic dataset containing matched data pairs composed of face photos, caricature images, and 3D caricatures. Then, we design a graph convolutional autoencoder to build a non-linear colored mesh model to learn the shape and texture of 3D caricatures. To make the network end-to-end trainable, we incorporate a differentiable renderer to render 3D caricatures into caricature images inversely. Experiments demonstrate that our method can achieve 3D caricature generation with various texture styles from face images while maintaining personality characteristics.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"267 1","pages":"49-54"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79816767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
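The abstract names a graph convolutional autoencoder over a colored mesh. As background, here is a minimal sketch of its standard building block, a Kipf & Welling-style graph convolution H' = ReLU(D^(-1/2) (A + I) D^(-1/2) H W) applied to per-vertex features; the feature layout and layer size are illustrative, not the paper's architecture.

```python
import numpy as np

def gcn_layer(feats, adj, weight):
    """One graph convolution over mesh vertices.
    feats:  (V, C_in) per-vertex features (e.g., xyz + rgb),
    adj:    (V, V) 0/1 vertex adjacency from the mesh edges,
    weight: (C_in, C_out) learned projection."""
    a_hat = adj + np.eye(len(adj))                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))  # D^(-1/2) diagonal
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(norm @ feats @ weight, 0.0)  # ReLU activation

# Tiny example: one triangle (3 mutually adjacent vertices), xyz + rgb features.
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
feats = np.random.rand(3, 6)
w = np.random.randn(6, 16) * 0.1
print(gcn_layer(feats, adj, w).shape)   # (3, 16)
```

Stacking such layers with mesh down- and up-sampling would give the encoder and decoder halves of an autoencoder of this kind.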
Fast and Lightweight Path Guiding Algorithm on GPU
Proceedings. Pacific Conference on Computer Graphics and Applications. Pub Date: 2021-01-01. DOI: 10.2312/pg.20211379
Juhyeon Kim, Y. Kim
{"title":"Fast and Lightweight Path Guiding Algorithm on GPU","authors":"Juhyeon Kim, Y. Kim","doi":"10.2312/pg.20211379","DOIUrl":"https://doi.org/10.2312/pg.20211379","url":null,"abstract":"We propose a simple, yet practical path guiding algorithm that runs on GPU. Path guiding renders photo-realistic images by simulating the iterative bounces of rays, which are sampled from the radiance distribution. The radiance distribution is often learned by serially updating the hierarchical data structure to represent complex scene geometry, which is not easily implemented with GPU. In contrast, we employ a regular data structure and allow fast updates by processing a significant number of rays with GPU. We further increase the efficiency of radiance learning by employing SARSA [SB18] used in reinforcement learning. SARSA does not include aggregation of incident radiance from all directions nor storing all of the previous paths. The learned distribution is then sampled with an optimized rejection sampling, which adapts the current surface normal to reflect finer geometry than the grid resolution. All of the algorithms have been implemented on GPU using megakernal architecture with NVIDIA OptiX [PBD*10]. Through numerous experiments on complex scenes, we demonstrate that our proposed path guiding algorithm works efficiently on GPU, drastically reducing the number of wasted paths. CCS Concepts • Computing methodologies → Ray tracing; Reinforcement learning; Massively parallel algorithms;","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"104 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80677378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
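A minimal sketch of the SARSA-style update the abstract describes: each (spatial cell, direction bin) entry moves toward the emitted radiance plus the value of the single next sampled direction, so no aggregation over all incident directions and no stored path history is needed. The table resolution, learning rate, and exploration floor are illustrative; the paper presumably runs the equivalent per path vertex inside its OptiX megakernel.

```python
import numpy as np

N_CELLS, N_DIRS = 1024, 64       # spatial cells x direction bins (illustrative)
ALPHA = 0.1                      # learning rate
Q = np.zeros((N_CELLS, N_DIRS))  # learned incident-radiance table

def sarsa_update(cell, d, emitted, bsdf_cos_over_pdf, next_cell, next_d):
    """On-policy step for one path segment: the target uses only the one
    next sampled direction, not an integral over all incident directions."""
    target = emitted + bsdf_cos_over_pdf * Q[next_cell, next_d]
    Q[cell, d] += ALPHA * (target - Q[cell, d])

def sample_dir(cell, rng):
    """Sample a direction bin proportionally to learned radiance, with a
    small floor so unexplored bins keep nonzero probability."""
    p = Q[cell] + 1e-3
    return rng.choice(N_DIRS, p=p / p.sum())

# One simulated bounce: hit cell 7, scatter, continue into cell 12.
rng = np.random.default_rng(1)
d = sample_dir(7, rng)
sarsa_update(7, d, emitted=0.5, bsdf_cos_over_pdf=0.8,
             next_cell=12, next_d=sample_dir(12, rng))
print(Q[7, d])   # 0.05 after the first update
```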