Proceedings. Pacific Conference on Computer Graphics and Applications: Latest Publications

Cloud-Assisted Hybrid Rendering for Thin-Client Games and VR Applications
Proceedings. Pacific Conference on Computer Graphics and Applications. Pub Date: 2022-10-11. DOI: 10.2312/pg.20211389. Pages: 61-62
Yuzao Tan, Louiz Kim-Chan, Anthony Halim, A. Bhojan
Abstract: We introduce a novel distributed rendering approach to generate high-quality graphics in thin-client games and VR applications. Many mobile devices lack the computational power to perform ray tracing in real time. In remote rendering, hardware-accelerated cloud servers perform the ray tracing instead and stream their output to the clients. Applying the approach of distributed hybrid rendering, we leverage the computational capabilities of both the thin client and the powerful server by performing rasterization locally while offloading ray tracing to the server. With advancements in 5G technology, the server and client can communicate effectively over the network and work together to produce high-quality output at interactive frame rates. Our approach achieves better visuals than local rendering and faster performance than remote rendering.
Citations: 1
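The division of labor described in the abstract ends with a client-side composite of the locally rasterized frame and the server's streamed ray-traced contribution. The paper does not spell this step out; the sketch below is a minimal guess at it, and the function name, array layout, and blend mask are all assumptions made for illustration.

```python
import numpy as np

def composite_hybrid_frame(local_raster: np.ndarray,
                           remote_rt: np.ndarray,
                           rt_mask: np.ndarray) -> np.ndarray:
    """Combine a locally rasterized frame with server ray-traced shading.

    local_raster : (H, W, 3) float image rasterized on the thin client.
    remote_rt    : (H, W, 3) float image of ray-traced effects decoded
                   from the server stream (e.g., reflections, GI).
    rt_mask      : (H, W, 1) float in [0, 1] controlling where the
                   ray-traced contribution blends over raster shading.
    """
    return local_raster * (1.0 - rt_mask) + remote_rt * rt_mask

# Toy usage: blend a streamed ray-traced layer into the raster frame.
h, w = 2, 2
raster = np.full((h, w, 3), 0.5)
rt = np.full((h, w, 3), 0.9)
mask = np.full((h, w, 1), 0.3)
frame = composite_hybrid_frame(raster, rt, mask)
```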
Shadow Removal via Cascade Large Mask Inpainting
Proceedings. Pacific Conference on Computer Graphics and Applications. Pub Date: 2022-01-01. DOI: 10.2312/pg.20221246. Pages: 49-50
Juwan Kim, Seung-Heon Kim, I. Jang
Abstract: We present a novel shadow removal framework based on the image inpainting approach. The proposed method consists of two cascaded Large Mask Inpainting (LaMa) networks, one for shadow inpainting and one for edge inpainting. Experiments on the ISTD and adjusted ISTD datasets show that our method achieves shadow removal results competitive with state-of-the-art methods. We also show that shadows are well removed from images with complex and large shadows, such as urban aerial images.
Citations: 0
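The cascade the abstract describes can be pictured as the wrapper below: a first inpainting pass fills the shadow region, then a second pass refines the boundary. This is a structural sketch only; both stages are placeholders standing in for pretrained LaMa networks, and the input packing, the boundary-band heuristic, and all names are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class CascadeShadowRemoval(nn.Module):
    """Two-stage cascade in the spirit of the paper. Each stage is any
    inpainting network taking (masked image, mask) as 4 channels and
    returning a 3-channel image."""

    def __init__(self, shadow_inpainter: nn.Module, edge_inpainter: nn.Module):
        super().__init__()
        self.shadow_inpainter = shadow_inpainter
        self.edge_inpainter = edge_inpainter

    def forward(self, image: torch.Tensor, shadow_mask: torch.Tensor):
        # image: (B, 3, H, W); shadow_mask: (B, 1, H, W), 1 inside shadow.
        # Stage 1: treat the shadow as a large hole and inpaint it.
        coarse = self.shadow_inpainter(
            torch.cat([image * (1 - shadow_mask), shadow_mask], dim=1))
        # Keep the original pixels outside the mask.
        coarse = image * (1 - shadow_mask) + coarse * shadow_mask
        # Stage 2: inpaint a thin band around the mask boundary to fix seams.
        edge_band = self._boundary_band(shadow_mask)
        refined = self.edge_inpainter(
            torch.cat([coarse * (1 - edge_band), edge_band], dim=1))
        return coarse * (1 - edge_band) + refined * edge_band

    @staticmethod
    def _boundary_band(mask: torch.Tensor, k: int = 5) -> torch.Tensor:
        # Dilation minus erosion via max-pooling yields a band at the edge.
        dil = nn.functional.max_pool2d(mask, k, stride=1, padding=k // 2)
        ero = 1 - nn.functional.max_pool2d(1 - mask, k, stride=1, padding=k // 2)
        return (dil - ero).clamp(0, 1)
```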
DFGA: Digital Human Faces Generation and Animation from the RGB Video using Modern Deep Learning Technology
Proceedings. Pacific Conference on Computer Graphics and Applications. Pub Date: 2022-01-01. DOI: 10.2312/pg.20221249. Pages: 63-64
Diqiong Jiang, Li You, Jian Chang, Ruofeng Tong
Abstract: High-quality and personalized digital human faces have been widely used in media and entertainment, from film and game production to virtual reality. However, existing technology for generating digital faces requires extremely intensive labor, which prevents its large-scale adoption. To tackle this problem, the proposed research will investigate deep learning-based facial modeling and animation technologies to (1) create personalized face geometry from a single image, including a recognizable neutral face shape and believable personalized blendshapes; (2) generate personalized production-level facial skin textures from a video or image sequence; and (3) automatically drive and animate a 3D target avatar with an actor's 2D facial video or audio. Our innovation is to achieve these tasks both efficiently and precisely using an end-to-end framework built on modern deep learning technology (StyleGAN, Transformer, NeRF).
Citations: 0
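For readers unfamiliar with the term, the "personalized blendshapes" of goal (1) feed the standard linear blendshape model: an animated face is the neutral shape plus a weighted sum of expression offsets, with the weights regressed per frame from the actor's video or audio in goal (3). A minimal sketch, assuming a vertex-array representation (the abstract itself does not specify one):

```python
import numpy as np

def blendshape_face(neutral: np.ndarray,
                    blendshapes: np.ndarray,
                    weights: np.ndarray) -> np.ndarray:
    """Linear blendshape model: face = neutral + sum_i w_i * (B_i - neutral).

    neutral     : (V, 3) neutral face vertices.
    blendshapes : (K, V, 3) personalized expression shapes.
    weights     : (K,) per-frame animation weights, typically in [0, 1].
    """
    deltas = blendshapes - neutral[None, :, :]        # (K, V, 3)
    return neutral + np.tensordot(weights, deltas, axes=1)
```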
Multi-instance Referring Image Segmentation of Scene Sketches based on Global Reference Mechanism
Proceedings. Pacific Conference on Computer Graphics and Applications. Pub Date: 2022-01-01. DOI: 10.2312/pg.20221238. Pages: 7-12
Pengyang Ling, Haoran Mo, Chengying Gao
Abstract: Scene sketch segmentation based on referring expressions plays an important role in sketch editing for the anime industry. While most existing referring image segmentation approaches are designed for the standard task of generating a binary segmentation mask for a single target or a group of targets, we argue that such models should also be capable of multi-instance segmentation. To this end, we propose GRM-Net, a one-stage framework tailored for multi-instance referring image segmentation of scene sketches. We extract language features from the expression and fuse them into a conventional instance segmentation pipeline, filtering out undesired instances in a coarse-to-fine manner while keeping the matched ones. To model the relative arrangement of the objects and the relationships among them from a global view, we propose a global reference mechanism (GRM) that assigns references to each detected candidate to identify its position. We compare with existing methods designed for multi-instance referring image segmentation of scene sketches and for the standard referring image segmentation task, and the results demonstrate the effectiveness and superiority of our approach.
Citations: 0
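The filtering step, scoring each detected instance against the expression and keeping the matches, might look like the sketch below. It omits the paper's global reference mechanism entirely, and the similarity measure, threshold, and all names are assumptions, not the authors' implementation. Because a threshold rather than an argmax decides what survives, several instances can be returned for one expression, which is the multi-instance behavior the paper targets.

```python
import torch

def filter_instances_by_expression(inst_feats: torch.Tensor,
                                   text_feat: torch.Tensor,
                                   masks: torch.Tensor,
                                   threshold: float = 0.5):
    """Keep the candidate instance masks that match a referring expression.

    inst_feats : (N, D) per-instance features from the segmentation pipeline.
    text_feat  : (D,) pooled language feature of the expression.
    masks      : (N, H, W) candidate instance masks.
    """
    sim = torch.cosine_similarity(inst_feats, text_feat.unsqueeze(0), dim=1)
    scores = (sim + 1) / 2            # map cosine range [-1, 1] to [0, 1]
    keep = scores > threshold         # boolean (N,); several may survive
    return masks[keep], scores[keep]
```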
Human Face Modeling based on Deep Learning through Line-drawing
Proceedings. Pacific Conference on Computer Graphics and Applications. Pub Date: 2022-01-01. DOI: 10.2312/pg.20221239. Pages: 13-14
Bin Deng, Y. Kawanaka, S. Sato, K. Sakurai, Shang Gao, Z. Tang
Abstract: This paper presents a deep learning-based method for creating 3D human face models. In recent years, several sketch-based shape modeling methods have been proposed. These methods allow the user to easily model various shapes, including animals, buildings, and vehicles. However, few methods have been proposed for human face models. If we can create 3D human face models via line-drawing, models of cartoon or fantasy characters can be created easily. To achieve this, we propose a sketch-based face modeling method: when a single line-drawing image is input to our system, a corresponding 3D face model is generated. Our system is based on deep learning; many human face models and corresponding images rendered as line-drawings are prepared, and a network is trained on these datasets. For the network, we adopt a previous method for reconstructing human bodies from real images and propose some extensions to enhance learning accuracy. Several examples demonstrate the usefulness of our system.
Citations: 0
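The supervised setup the abstract outlines, line-drawing renderings paired with 3D face models, reduces to a regression problem. A toy skeleton under that reading is shown below; the vertex-regressing network is a placeholder (the authors instead adapt a prior body-reconstruction network, which is not reproduced here), and all names are assumptions.

```python
import torch
import torch.nn as nn

class SketchToFace(nn.Module):
    """Hypothetical regressor: a line-drawing image in, face vertices out."""

    def __init__(self, num_vertices: int):
        super().__init__()
        self.num_vertices = num_vertices
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_vertices * 3)

    def forward(self, sketch: torch.Tensor) -> torch.Tensor:
        # sketch: (B, 1, H, W) rendered or user-drawn line drawing.
        return self.head(self.encoder(sketch)).view(-1, self.num_vertices, 3)

def train_step(model, optimizer, sketch, gt_vertices):
    # Supervision pairs: line-drawing renderings of prepared face models
    # and the corresponding mesh vertices gt_vertices of shape (B, V, 3).
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(sketch), gt_vertices)
    loss.backward()
    optimizer.step()
    return loss.item()
```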
Reconstructing Bounding Volume Hierarchies from Memory Traces of Ray Tracers
Proceedings. Pacific Conference on Computer Graphics and Applications. Pub Date: 2022-01-01. DOI: 10.2312/pg.20221243. Pages: 29-34
Max von Bülow, Tobias Stensbeck, V. Knauthe, S. Guthe, D. Fellner
Abstract: The ongoing race to improve computer graphics leads to more complex GPU hardware and ray tracing techniques whose internal functionality is sometimes hidden from the user. Bounding volume hierarchies (BVHs) and their construction are an important performance aspect of such ray tracing implementations. We propose a novel approach that uses binary instrumentation to collect memory traces and then extracts the BVH by analyzing access patterns. Our reconstruction can combine memory traces captured independently from multiple ray tracing views, improving the reconstruction quality. It reaches accuracies of 30% to 45% against the ground-truth BVH when ray tracing a single view of a simple scene with one object; with multiple views it is possible to reconstruct the whole BVH, and we already achieve 98% with just seven views. Because our approach is largely independent of the data structures used internally, these accurate reconstructions serve as a first step toward estimating the unknown construction techniques of ray tracing implementations.
Citations: 0
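As a toy illustration of "extracting the BVH by analyzing access patterns", the sketch below assumes each ray's memory trace lists node addresses in pure parent-to-child order and votes for the two most frequent successors of each node. Real GPU traversals pop from a stack and interleave rays, which is exactly the complexity the paper's analysis must handle; this is not the authors' algorithm, and every name here is assumed.

```python
from collections import Counter, defaultdict

def reconstruct_bvh(ray_traces):
    """Toy BVH reconstruction from per-ray node-address sequences.

    ray_traces : list of address lists, one per ray, in access order.
    Assumes (simplistically) that consecutive accesses within one ray
    go from a parent node to one of its children.
    """
    successors = defaultdict(Counter)
    for trace in ray_traces:
        for parent, child in zip(trace, trace[1:]):
            successors[parent][child] += 1
    # For a binary BVH, keep the two most frequent successors as children.
    return {node: [c for c, _ in counts.most_common(2)]
            for node, counts in successors.items()}

# Example: two rays traversing a tiny tree rooted at address 0x100.
tree = reconstruct_bvh([[0x100, 0x140, 0x1c0], [0x100, 0x180]])
```

Combining traces from multiple views, as the paper does, would simply add more vote evidence per node before the children are picked.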
Intersection Distance Field Collision for GPU
Proceedings. Pacific Conference on Computer Graphics and Applications. Pub Date: 2022-01-01. DOI: 10.2312/pg.20221242. Pages: 23-28
Bastian Krayer, Rebekka Görge, Stefan Müller
Abstract: We present a framework for finding collision points between objects represented by signed distance fields. Particles are used to sample the region where intersections can occur, and the distance field representation is used to project the particles onto the surface of the intersection of both objects. From this information, collision normals and intersection depth can be extracted. This allows various types of objects to be handled in a unified way, and the particle-based approach makes the algorithm well suited to the GPU.
Citations: 0
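The core idea as described is concrete enough to sketch: the signed distance of the intersection of two objects is the max of their individual SDFs, and a sample particle is projected onto that surface by stepping along the field's gradient, which also yields the collision normal. Below is a CPU toy with finite-difference gradients and assumed sphere SDFs; the paper runs this per particle on the GPU, and the step scheme here is a generic choice, not necessarily the authors'.

```python
import numpy as np

def sdf_sphere(p, center, radius):
    return np.linalg.norm(p - center) - radius

def intersection_sdf(p):
    # The intersection (overlap) of two objects has signed distance
    # max(dA, dB): negative only where the point is inside both.
    a = sdf_sphere(p, np.array([0.0, 0.0, 0.0]), 1.0)
    b = sdf_sphere(p, np.array([1.5, 0.0, 0.0]), 1.0)
    return max(a, b)

def gradient(f, p, eps=1e-4):
    # Central finite differences; analytic gradients work equally well.
    g = np.zeros(3)
    for i in range(3):
        d = np.zeros(3); d[i] = eps
        g[i] = (f(p + d) - f(p - d)) / (2 * eps)
    return g

def project_to_surface(p, f=intersection_sdf, steps=32):
    # Walk the particle along the gradient until f(p) is ~0; the final
    # gradient direction doubles as the collision normal.
    for _ in range(steps):
        d = f(p)
        if abs(d) < 1e-5:
            break
        p = p - d * gradient(f, p)
    n = gradient(f, p)
    return p, n / np.linalg.norm(n)

# Sample a particle inside the overlap region and project it out.
point, normal = project_to_surface(np.array([0.75, 0.1, 0.0]))
```

One natural reading of the "intersection depth" the abstract mentions is -f(p) at the particle's starting position, i.e., how deep inside the overlap the sample lies.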
Adaptive and Dynamic Regularization for Rolling Guidance Image Filtering
Proceedings. Pacific Conference on Computer Graphics and Applications. Pub Date: 2022-01-01. DOI: 10.2312/pg.20221245. Pages: 43-48
M. Fukatsu, S. Yoshizawa, H. Takemura, H. Yokota
Abstract: Separating the shapes and textures of digital images at different scales is useful in computer graphics. The Rolling Guidance (RG) filter, which removes structures smaller than a specified scale while preserving salient edges, has attracted considerable attention. Conventional RG-based filters have some drawbacks, including smoothness/sharpness quality that depends on scale and non-uniform convergence. This paper proposes a novel RG-based image filter with more stable filtering quality across varying scales. Our approach applies adaptive and dynamic regularization to a recursive regression model in the RG framework to produce stronger edge saliency and appropriate scale convergence. Our numerical experiments demonstrate filtering results with uniform convergence and high accuracy across varying scales.
Citations: 0
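For context, the baseline RG framework the paper builds on is short enough to sketch: a Gaussian blur first removes structures below the chosen scale, then a joint bilateral filter is iterated with the previous result as the guidance image. The sketch below is that standard iteration, assuming a grayscale float image (np.roll wraps at boundaries, which a production filter would handle properly); it is not the authors' adaptive, dynamic regularization, which replaces the fixed smoothing inside this loop.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def joint_bilateral(src, guide, sigma_s, sigma_r):
    """Smooth src with weights from spatial distance and guide similarity."""
    r = int(2 * sigma_s)
    acc = np.zeros_like(src)
    wsum = np.zeros_like(src)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted_src = np.roll(src, (dy, dx), axis=(0, 1))
            shifted_gui = np.roll(guide, (dy, dx), axis=(0, 1))
            w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                       - (shifted_gui - guide) ** 2 / (2 * sigma_r ** 2))
            acc += w * shifted_src
            wsum += w
    return acc / wsum

def rolling_guidance(img, sigma_s=4.0, sigma_r=0.1, iterations=4):
    # Step 1: remove small structures entirely (Gaussian blur).
    g = gaussian_filter(img, sigma_s)
    # Step 2: iteratively recover large-scale edges, using the previous
    # result as the guidance image (the "rolling" part).
    for _ in range(iterations):
        g = joint_bilateral(img, g, sigma_s, sigma_r)
    return g
```

The drawbacks the paper targets live in this loop: the fixed sigma pair ties smoothing quality to the chosen scale, and the iteration does not converge uniformly across image regions.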
Learning a Style Space for Interactive Line Drawing Synthesis from Animated 3D Models
Proceedings. Pacific Conference on Computer Graphics and Applications. Pub Date: 2022-01-01. DOI: 10.2312/pg.20221237. Pages: 1-6
Zeyu Wang, Tuanfeng Y. Wang, Julie Dorsey
Abstract: Most non-photorealistic rendering (NPR) methods for line drawing synthesis operate on a static shape. They are not tailored to process animated 3D models due to the extensive per-frame parameter tuning needed to achieve the intended look and natural transitions. This paper introduces a framework for interactive line drawing synthesis from animated 3D models based on a learned style space for drawing representation and interpolation. We refer to style as the relationship between stroke placement in a line drawing and its corresponding geometric properties. Starting from a given sequence of an animated 3D character, a user creates drawings for a set of keyframes. Our system embeds the raster drawings into a latent style space after they are disentangled from the underlying geometry. By traversing the latent space, our system enables a smooth transition between the input keyframes. The user may also edit, add, or remove keyframes interactively, similar to a typical keyframe-based workflow. We implement our system with deep neural networks trained on synthetic line drawings produced by a combination of NPR methods. Our drawing-specific supervision and optimization-based embedding mechanism allow generalization from NPR line drawings to user-created drawings at run time. Experiments show that our approach generates high-quality line drawing animations while allowing interactive control of the drawing style across frames.
Citations: 0
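The keyframe workflow implies an interpolation in the learned style space: embed each keyframe drawing to a latent code, blend codes for the in-between frames, and decode strokes against each frame's geometry. A minimal sketch of that traversal follows, with all shapes and names assumed; the paper's embedding and decoder are neural networks not reproduced here, and simple linear blending stands in for whatever traversal the learned space supports.

```python
import numpy as np

def interpolate_style(z_keys, key_times, t):
    """Linearly interpolate latent style codes between keyframes.

    z_keys    : (K, D) latent codes of the user's keyframe drawings.
    key_times : (K,) sorted frame indices of those keyframes.
    t         : query frame index.
    """
    i = np.searchsorted(key_times, t, side="right") - 1
    i = int(np.clip(i, 0, len(key_times) - 2))
    t0, t1 = key_times[i], key_times[i + 1]
    a = np.clip((t - t0) / (t1 - t0), 0.0, 1.0)
    return (1 - a) * z_keys[i] + a * z_keys[i + 1]

# Per animation frame: z = interpolate_style(z_keys, key_times, t), then a
# decoder renders strokes from z plus that frame's geometric features.
```

Editing, adding, or removing a keyframe in this picture just changes z_keys and key_times, which is why the workflow stays interactive.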
Interactive Deformable Image Registration with Dual Cursor
Proceedings. Pacific Conference on Computer Graphics and Applications. Pub Date: 2022-01-01. DOI: 10.2312/pg.20221241. Pages: 17-21
Bin Deng, T. Igarashi, Tsukasa Koike, Taichi Kin
Abstract: Deformable image registration is the process of deforming a target image to match corresponding features of a reference image. Fully automatic registration remains difficult; thus, manual registration is dominant in practice. In manual registration, an expert user specifies a set of paired landmarks on the two images; subsequently, the system deforms the target image to match each landmark with its counterpart as a batch process. However, the deformation results are difficult for the user to predict, and moving the cursor back and forth between the two images is time-consuming. To improve the efficiency of this manual process, we propose an interactive method wherein the deformation results are continuously displayed as the user clicks and drags each landmark. Additionally, the system displays two cursors, one on the target image and the other on the reference image, to reduce the amount of mouse movement required. The results of a user study reveal that the proposed interactive method achieves higher accuracy and faster task completion compared to traditional batch landmark placement.
Citations: 0
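Continuously displaying the result requires a warp cheap enough to re-solve on every mouse-drag event. The paper does not name its deformation model in this abstract; the sketch below uses Gaussian RBF interpolation of landmark displacements as a plausible stand-in, with the function names and kernel width assumed.

```python
import numpy as np

def rbf_warp(landmarks_src, landmarks_dst, points, sigma=50.0):
    """Displace points so each source landmark maps to its counterpart.

    landmarks_src : (K, 2) landmark positions on the target image.
    landmarks_dst : (K, 2) matched positions on the reference image.
    points        : (N, 2) pixel coordinates to deform.
    K is small, so re-solving on every drag event is cheap, which is
    what makes the continuous feedback of the paper feasible.
    """
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    K = kernel(landmarks_src, landmarks_src)               # (K, K)
    disp = landmarks_dst - landmarks_src                   # (K, 2)
    w = np.linalg.solve(K + 1e-8 * np.eye(len(K)), disp)   # RBF weights
    return points + kernel(points, landmarks_src) @ w

# On each drag: update one row of landmarks_dst, recompute the warp, redraw.
```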