Proceedings. Pacific Conference on Computer Graphics and Applications: Latest Publications

An Energy-Conserving Hair Shading Model Based on Neural Style Transfer
Zhi Qiao, T. Kanai
Proceedings. Pacific Conference on Computer Graphics and Applications, 2020-01-01, pp. 1-6. DOI: 10.2312/pg.20201222
Abstract: We present a novel approach for shading photorealistic hair animation, an essential visual element for depicting the realistic hair of virtual characters. Our model shades high-quality hair quickly by extending conditional Generative Adversarial Networks. Furthermore, our method is much faster than previous, onerous rendering algorithms and produces fewer artifacts than other neural image-translation methods. In this work, we provide a novel energy-conserving hair shading model, which retains the vast majority of the semi-transparent appearance and accurately reproduces the interaction with the lights of the scene. Our method is easy to implement, faster, and computationally more efficient than previous algorithms.
CCS Concepts: • Computing methodologies → Image-based rendering; Neural networks
Citations: 0
Interactive Video Completion with SiamMask
Satsuki Tsubota, Makoto Okabe
Proceedings. Pacific Conference on Computer Graphics and Applications, 2020-01-01, pp. 43-44. DOI: 10.2312/pg.20201229
Citations: 0
Reconstructing Monte Carlo Errors as a Blue-noise in Screen Space
Hongli Liu, Honglei Han
Proceedings. Pacific Conference on Computer Graphics and Applications, 2020-01-01, pp. 45-46. DOI: 10.2312/pg.20201230
Citations: 0
LPaintB: Learning to Paint from Self-Supervision
Biao Jia, Jonathan Brandt, R. Mech, Byungmoon Kim, Dinesh Manocha
Proceedings. Pacific Conference on Computer Graphics and Applications, 2019-06-01, pp. 33-39. DOI: 10.2312/pg.20191336
Abstract: We present a novel reinforcement-learning-based natural media painting algorithm. Our goal is to reproduce a reference image using brush strokes, and we encode the objective through observations. Our formulation takes into account that the distribution of the reward in the action space is sparse and that training a reinforcement learning algorithm from scratch can be difficult. We present an approach that combines self-supervised learning and reinforcement learning to effectively transfer negative samples into positive ones and change the reward distribution. We demonstrate the benefits of our painting agent in reproducing reference images with brush strokes. The training phase takes about one hour, and the runtime algorithm takes about 30 seconds on a GTX 1080 GPU to reproduce a 1000x800 image with 20,000 strokes.
Citations: 10
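The self-supervision idea in the LPaintB abstract, transferring negative samples into positive ones, can be illustrated with a minimal sketch: reward a stroke by how much it reduces the distance to the reference, and relabel a rollout by treating its own final canvas as the reference, so every stroke that produced it earns positive reward. The function names and the L2 reward are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def stroke_reward(canvas_before, canvas_after, reference):
    """Reward = reduction in L2 distance to the reference image
    achieved by one brush stroke."""
    d_before = np.linalg.norm(canvas_before - reference)
    d_after = np.linalg.norm(canvas_after - reference)
    return d_before - d_after

def relabel_rollout(canvases):
    """Self-supervised relabeling: treat the rollout's own final canvas
    as the reference, so the strokes that produced it become positive
    samples regardless of the original target."""
    reference = canvases[-1]
    return [stroke_reward(canvases[i], canvases[i + 1], reference)
            for i in range(len(canvases) - 1)]
```

Relabeling densifies an otherwise sparse reward signal, which is the difficulty the abstract points to when training from scratch.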
Gaze Attention and Flow Visualization using the Smudge Effect
Sangbong Yoo, Seongmin Jeong, Seokyeon Kim, Yun Jang
Proceedings. Pacific Conference on Computer Graphics and Applications, 2019-01-01, pp. 21-26. DOI: 10.2312/pg.20191334
Abstract: Many advanced gaze visualization techniques have been developed on top of fundamental gaze visualizations such as scatter plots, attention maps, and scanpaths. However, few techniques resolve the limitations of these conventional gaze visualizations. In this paper, we therefore propose a novel visualization that applies the smudge technique to the attention map. The proposed visualization intuitively shows an observer's gaze flow and AoIs (Areas of Interest). In addition, it provides fixation, saccade, and micro-movement information, which allows us to address various analytical goals within a single visualization. Finally, we provide two case studies showing the effectiveness of our technique.
CCS Concepts: • Human-centered computing → Visualization techniques; Heat maps; Human computer interaction (HCI)
Citations: 2
External Forces Guided Fluid Surface and Volume Reconstruction from Monocular Video
Xiaoying Nie, Yong Hu, Zhiyuan Su, Xukun Shen
Proceedings. Pacific Conference on Computer Graphics and Applications, 2019-01-01, pp. 41-46. DOI: 10.2312/pg.20191337
Abstract: We propose the first method to reconstruct a fluid's volume movement and surface details from just a monocular video. Although many monocular video-based reconstruction methods have been developed, their reconstructed results are merely a single layer of surface geometry and lack physically correct volume-particle attributes and movement. To reconstruct the 3D fluid volume, we define two kinds of particles: target particles and fluid particles. The target particles are extracted from the height field of the water surface, which is recovered by a Shape from Shading (SFS) method. The fluid particles represent the discrete form of the 3D fluid volume and conform to hydrodynamic flow properties. The target particles are used to guide the physical simulation of the fluid particles based on the Smoothed Particle Hydrodynamics (SPH) model. To formulate this guidance, a new external force scheme is designed based on the distance and relative motion between target particles and fluid particles. Additionally, to integrate and maintain geometric and physical features simultaneously, we adopt a two-scale decomposition strategy for the height field: only the low-frequency coarse-scale component is applied to estimate the volumetric motion of the liquid, while the high-frequency fine-scale component serves as noise to preserve fluid surface details in the rendering stage. Our experimental results compare favorably to the state of the art in terms of global fluid-volume motion features and fluid surface details, and demonstrate that our approach can achieve desirable and pleasing effects.
CCS Concepts: • Computing methodologies → Physical simulation; Volumetric models
Citations: 2
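The abstract above describes an external force built from the distance and relative motion between a target particle and a fluid particle. The paper's exact force is not given here; a common way to realize such guidance is a spring-damper pair, sketched below with placeholder gains `k_dist` and `k_damp`.

```python
import numpy as np

def guidance_force(fluid_pos, fluid_vel, target_pos, target_vel,
                   k_dist=50.0, k_damp=5.0):
    """Hypothetical guidance force added to the SPH momentum equation:
    a distance term pulls the fluid particle toward its matched target
    particle, and a damping term penalizes their relative velocity.
    Gains are illustrative, not values from the paper."""
    return (k_dist * (target_pos - fluid_pos)
            + k_damp * (target_vel - fluid_vel))
```

When a fluid particle coincides with its target and moves with it, the force vanishes, so the guidance only acts where simulation and observation disagree.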
Towards Biomechanically and Visually Plausible Volumetric Cutting Simulation of Deformable Bodies
Yinling Qian, Wen-shinn Huang, Weixin Si, Xiangyun Liao, Qiong Wang, P. Heng
Proceedings. Pacific Conference on Computer Graphics and Applications, 2019-01-01, pp. 27-32. DOI: 10.2312/pg.20191335
Abstract: Owing to its simplicity and high efficiency, virtual cutting based on the composite finite element method (CFEM) has attracted much attention in the field of virtual surgery in recent years. Although great progress has been made in volumetric cutting of deformable bodies, several open problems still restrict its application in practical surgical simulators. First among them is modeling the cutting fracture. Recent methods produce a cutting surface immediately after an intersection between the cutting plane and the object, but in real cutting, biological tissue first deforms under the external force induced by the scalpel, and fracture occurs only when the stress exceeds a threshold. Second, it is computationally intensive to reconstruct a cutting surface highly consistent with the scalpel trajectory, since the reconstructed cutting surface in CFEM-based virtual cutting is grid-dependent and its accuracy is proportional to the grid resolution. This paper proposes a CFEM-based virtual cutting method that can effectively simulate cutting fracture in a biomechanically and visually plausible way and generate a cutting surface consistent with the scalpel trajectory on a low-resolution finite element grid. We model realistic cutting as a repeating deformation-fracture process: in the deformation stage, the object deforms along with the scalpel motion, while in the fracture stage cutting happens and a cutting surface is generated from the scalpel trajectory. A delayed fracturing criterion is proposed to determine when and how the cutting fracture occurs, and an influence-domain adaptation method is employed to generate an accurate cutting surface in both the deformation and fracture procedures. Experiments show that our method can realistically simulate volumetric cutting of deformable bodies and efficiently generate accurate cutting surfaces, thus facilitating interactive applications.
CCS Concepts: • Human-centered computing → Virtual reality; • Computing methodologies → Physical simulation; Shape modeling
Citations: 1
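The deformation-fracture loop in the abstract can be reduced to a tiny state test: stay in the deformation phase until some stress measure exceeds a threshold, then switch to fracture. The stress measure, threshold, and function name below are placeholders; the paper's delayed fracturing criterion is more involved than this sketch.

```python
def cutting_phase(element_stresses, threshold):
    """Hypothetical delayed fracturing test: the body keeps deforming
    under the scalpel until the maximum element stress exceeds the
    threshold, at which point the fracture phase begins and a cutting
    surface is generated along the scalpel trajectory."""
    return "fracture" if max(element_stresses) > threshold else "deform"
```

The point of the delay is that the cut does not appear at first contact, matching the real behavior of tissue under a scalpel.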
Feature Curve Network Extraction via Quadric Surface Fitting
Z. Li, Jianwei Guo, Jun Xiao, Ying Wang, Xiaopeng Zhang, Dong‐Ming Yan
Proceedings. Pacific Conference on Computer Graphics and Applications, 2019-01-01, pp. 47-52. DOI: 10.2312/PG.20191338
Abstract: Feature curves on 3D shapes provide a high-dimensional representation of the geometry and reveal its underlying structure. In this paper, we present an automatic approach for extracting complete feature curve networks from 3D models, as well as generating a high-quality patch layout. Starting from an initial collection of noisy and fragmented feature curves, we first filter out non-salient or noisy feature curves by utilizing a quadric surface fitting technique. We then handle curve intersections and missing curves by conducting a feature-extension step to form a closed feature curve network. Finally, we generate a patch layout that reveals a highly structured representation of the input surfaces. Experimental results demonstrate that our algorithm is robust in extracting complete feature curve networks from complex input meshes and achieves superior patch-layout quality compared with state-of-the-art approaches.
CCS Concepts: • Computing methodologies → Shape analysis; Mesh models
Citations: 1
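Quadric surface fitting, the filtering tool named in the abstract above, is in its simplest form a linear least-squares problem. The sketch below fits a graph quadric z = ax^2 + bxy + cy^2 + dx + ey + f to a point set and reports the RMS residual, which could then be thresholded to reject non-salient curves; this parameterization and the thresholding use are assumptions for illustration, since the paper may fit general implicit quadrics.

```python
import numpy as np

def fit_quadric(points):
    """Least-squares fit of z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f
    to an (n, 3) array of points; returns (coefficients, RMS residual)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residual = np.sqrt(np.mean((A @ coeffs - z) ** 2))
    return coeffs, residual
```

A small residual means the neighborhood is well explained by a smooth quadric, so a curve lying inside it is unlikely to be a true sharp feature.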
A Psychophysical Analysis of Fabricated Anisotropic Appearance
J. Filip, M. Kolafová, R. Vávra
Proceedings. Pacific Conference on Computer Graphics and Applications, 2019-01-01, pp. 15-20. DOI: 10.2312/pg.20191333
Abstract: Many materials change surface appearance when observed under fixed viewing and lighting directions while being rotated around their normal. Such distinct anisotropic behavior manifests itself as changes in textural color and intensity. These effects are due to structural elements introducing azimuthally dependent behavior. However, each material and finishing technique has unique anisotropic properties that are often difficult to control. To avoid this problem, we study controlled anisotropic appearance introduced by means of 3D printing. Our work aims to link the perception of directionality with the perception of the anisotropic reflectance effect it causes. We simulate two types of structure-based anisotropic effects, which are related to directional principles found in real-world materials. For each type, we create a set of test surfaces by controlling the printed anisotropy level and assess them in a psychophysical study to identify a perceptual scale of anisotropy. The generality of these scales is then verified by capturing the appearance of the anisotropic surfaces as bidirectional texture functions and analyzing them on 3D objects. Finally, we relate the perceptual scale of anisotropy to a computational feature obtained directly from anisotropic highlights observed in the captured reflectance data. The feature is validated using a psychophysical study analyzing the visibility of anisotropic reflectance effects.
CCS Concepts: • Computing methodologies → Perception; Reflectance modeling; Texturing
Citations: 1
Connectivity-preserving Smooth Surface Filling with Sharp Features
Thibault Lescoat, Pooran Memari, Jean-Marc Thiery, M. Ovsjanikov, T. Boubekeur
Proceedings. Pacific Conference on Computer Graphics and Applications, 2019-01-01, pp. 7-13. DOI: 10.2312/pg.20191332
Abstract: We present a method for constructing a surface mesh filling gaps between the boundaries of multiple disconnected input components. Unlike previous works, our method pays special attention to preserving both the connectivity and large-scale geometric features of input parts, while maintaining efficiency and scalability w.r.t. mesh complexity. Starting from an implicit surface reconstruction matching the parts' boundaries, we first introduce a modified dual contouring algorithm which stitches a meshed contour to the input components while preserving their connectivity. We then show how to deform the reconstructed mesh to respect the boundary geometry and preserve sharp feature lines, smoothly blending them when necessary. As a result, our reconstructed surface is smooth and propagates the feature lines of the input. We demonstrate on a wide variety of input shapes that our method is scalable to large input complexity and results in superior mesh quality compared to existing techniques.
CCS Concepts: • Computing methodologies → Shape modeling; Mesh models; Mesh geometry models
Citations: 1