Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games: Latest Publications

On Ray Reordering Techniques for Faster GPU Ray Tracing
Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games Pub Date : 2020-05-04 DOI: 10.1145/3384382.3384534
Daniel Meister, Jakub Boksanský, M. Guthe, Jiří Bittner
{"title":"On Ray Reordering Techniques for Faster GPU Ray Tracing","authors":"Daniel Meister, Jakub Boksanský, M. Guthe, Jiří Bittner","doi":"10.1145/3384382.3384534","DOIUrl":"https://doi.org/10.1145/3384382.3384534","url":null,"abstract":"We study ray reordering as a tool for increasing the performance of existing GPU ray tracing implementations. We focus on ray reordering that is fully agnostic to the particular trace kernel. We summarize the existing methods for computing the ray sorting keys and discuss their properties. We propose a novel modification of a previously proposed method using the termination point estimation that is well-suited to tracing secondary rays. We evaluate the ray reordering techniques in the context of the wavefront path tracing using the RTX trace kernels. We show that ray reordering yields significantly higher trace speed on recent GPUs (1.3 − 2.0 ×), but to recover the reordering overhead in the hardware-accelerated trace phase is problematic.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"159 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89630519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
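
A minimal sketch of one common ray sorting key (the abstract summarizes several existing keys and does not prescribe this one): quantize each ray origin into a grid over the scene bounds, interleave the bits into a Morton code, and sort rays by that key so spatially coherent rays are traced consecutively. The grid resolution and key layout below are illustrative assumptions, not the paper's method.

```python
# Illustrative ray-reordering sketch: sort rays by a Morton code of their
# quantized origins so that nearby rays are handed to the trace kernel together.
# This is one of several possible sorting keys, not the paper's exact method.

def part1by2(x: int) -> int:
    """Spread the lower 10 bits of x so there are two zero bits between each bit."""
    x &= 0x3FF
    x = (x | (x << 16)) & 0xFF0000FF
    x = (x | (x << 8))  & 0x0300F00F
    x = (x | (x << 4))  & 0x030C30C3
    x = (x | (x << 2))  & 0x09249249
    return x

def morton3d(ix: int, iy: int, iz: int) -> int:
    """Interleave three 10-bit integer coordinates into a 30-bit Morton code."""
    return part1by2(ix) | (part1by2(iy) << 1) | (part1by2(iz) << 2)

def ray_sort_keys(origins, scene_min, scene_max, grid=1024):
    """Quantize ray origins into a grid over the scene bounds and return Morton keys."""
    keys = []
    for ox, oy, oz in origins:
        ix = min(grid - 1, int((ox - scene_min[0]) / (scene_max[0] - scene_min[0]) * grid))
        iy = min(grid - 1, int((oy - scene_min[1]) / (scene_max[1] - scene_min[1]) * grid))
        iz = min(grid - 1, int((oz - scene_min[2]) / (scene_max[2] - scene_min[2]) * grid))
        keys.append(morton3d(ix, iy, iz))
    return keys

# Reorder (origin, direction) rays before handing them to the trace kernel.
rays = [((0.1, 0.2, 0.3), (0.0, 0.0, 1.0)), ((0.9, 0.8, 0.1), (0.0, 1.0, 0.0))]
keys = ray_sort_keys([r[0] for r in rays], scene_min=(0, 0, 0), scene_max=(1, 1, 1))
reordered = [r for _, r in sorted(zip(keys, rays), key=lambda kr: kr[0])]
```
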
Repurposing a Relighting Network for Realistic Compositions of Captured Scenes
Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games Pub Date : 2020-05-04 DOI: 10.1145/3384382.3384523
Baptiste Nicolet, J. Philip, G. Drettakis
{"title":"Repurposing a Relighting Network for Realistic Compositions of Captured Scenes","authors":"Baptiste Nicolet, J. Philip, G. Drettakis","doi":"10.1145/3384382.3384523","DOIUrl":"https://doi.org/10.1145/3384382.3384523","url":null,"abstract":"Multi-view stereo can be used to rapidly create realistic virtual content, such as textured meshes or a geometric proxy for free-viewpoint Image-Based Rendering (IBR). These solutions greatly simplify the content creation process compared to traditional methods, but it is difficult to modify the content of the scene. We propose a novel approach to create scenes by composing (parts of) multiple captured scenes. The main difficulty of such compositions is that lighting conditions in each captured scene are different; to obtain a realistic composition we need to make lighting coherent. We propose a two-pass solution, by adapting a multi-view relighting network. We first match the lighting conditions of each scene separately and then synthesize shadows between scenes in a subsequent pass. We also improve the realism of the composition by estimating the change in ambient occlusion in contact areas between parts and compensate for the color balance of the different cameras used for capture. We illustrate our method with results on multiple compositions of outdoor scenes and show its application to multi-view image composition, IBR and textured mesh creation.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"86 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72800460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
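
The abstract mentions compensating for the color balance of the different capture cameras; the paper's actual procedure is not given here, so the snippet below is only a generic illustration of that kind of step: matching the per-channel mean and standard deviation of a composited part to the receiving scene in linear RGB. All function and variable names are invented for the example.

```python
import numpy as np

def match_color_balance(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Shift and scale each channel of `source` so its mean/std match `reference`.

    Both images are float arrays of shape (H, W, 3) in linear RGB. This is a
    generic stand-in for a color-balance compensation step, not the paper's method.
    """
    out = source.astype(np.float64).copy()
    ref = reference.astype(np.float64)
    for c in range(3):
        s_mean, s_std = out[..., c].mean(), out[..., c].std() + 1e-8
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (out[..., c] - s_mean) / s_std * r_std + r_mean
    return np.clip(out, 0.0, 1.0)

# Example: harmonize a composited part against the scene it is pasted into.
part = np.random.rand(64, 64, 3)
scene = np.random.rand(64, 64, 3) * 0.8
harmonized_part = match_color_balance(part, scene)
```
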
Contour-based 3D Modeling through Joint Embedding of Shapes and Contours
Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games Pub Date : 2020-05-04 DOI: 10.1145/3384382.3384518
Aobo Jin, Q. Fu, Z. Deng
{"title":"Contour-based 3D Modeling through Joint Embedding of Shapes and Contours","authors":"Aobo Jin, Q. Fu, Z. Deng","doi":"10.1145/3384382.3384518","DOIUrl":"https://doi.org/10.1145/3384382.3384518","url":null,"abstract":"In this paper, we propose a novel space that jointly embeds both 2D occluding contours and 3D shapes via a variational autoencoder (VAE) and a volumetric autoencoder. Given a dataset of 3D shapes, we extract their occluding contours via projections from random views and use the occluding contours to train the VAE. Then, the obtained continuous embedding space, where each point is a latent vector that represents an occluding contour, can be used to measure the similarity between occluding contours. After that, the volumetric autoencoder is trained to first map 3D shapes onto the embedding space through a supervised learning process and then decode the merged latent vectors of three occluding contours (from three different views) of a 3D shape to its 3D voxel representation. We conduct various experiments and comparisons to demonstrate the usefulness and effectiveness of our method for sketch-based 3D modeling and shape manipulation applications.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"91 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89384206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
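
As a toy illustration of the decoding path described above (latents of three occluding contours merged and decoded to a voxel grid), here is a minimal PyTorch sketch. The layer sizes, latent dimension, and voxel resolution are invented for the example and do not come from the paper; the real system uses a trained contour VAE and a volumetric autoencoder rather than these tiny MLPs.

```python
import torch
import torch.nn as nn

LATENT_DIM = 64   # illustrative, not the paper's value
VOXEL_RES = 32    # illustrative voxel grid resolution

class ContourEncoder(nn.Module):
    """Maps a 1x128x128 occluding-contour image to a latent vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 128, 256), nn.ReLU(),
            nn.Linear(256, LATENT_DIM),
        )
    def forward(self, contour):
        return self.net(contour)

class VoxelDecoder(nn.Module):
    """Decodes the merged latents of three views into a voxel occupancy grid."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 * LATENT_DIM, 512), nn.ReLU(),
            nn.Linear(512, VOXEL_RES ** 3), nn.Sigmoid(),
        )
    def forward(self, merged_latent):
        out = self.net(merged_latent)
        return out.view(-1, VOXEL_RES, VOXEL_RES, VOXEL_RES)

encoder, decoder = ContourEncoder(), VoxelDecoder()
views = [torch.rand(1, 1, 128, 128) for _ in range(3)]   # three contour images
merged = torch.cat([encoder(v) for v in views], dim=1)   # merge the three latents
voxels = decoder(merged)                                 # (1, 32, 32, 32) occupancy
```
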
The Role of the Field Dependence-independence Construct on the Flow-performance Link in Virtual Reality
Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games Pub Date : 2020-05-04 DOI: 10.1145/3384382.3384529
Yulong Bian, Chao Zhou, Yeqing Chen, Yanshuai Zhao, Juan Liu, Chenglei Yang
{"title":"The Role of the Field Dependence-independence Construct on the Flow-performance Link in Virtual Reality","authors":"Yulong Bian, Chao Zhou, Yeqing Chen, Yanshuai Zhao, Juan Liu, Chenglei Yang","doi":"10.1145/3384382.3384529","DOIUrl":"https://doi.org/10.1145/3384382.3384529","url":null,"abstract":"The flow experience-performance link is commonly found weak in virtual environments (VEs). The weak association model (WAM) suggests that distraction caused by disjointed features may be associated with the weak association. People characterized by field independent (FI) or field dependent (FD) cognitive style have different abilities in handling sustained attention, thus they may perform differently in the flow-performance link. To explore the role of the field dependence-independence (FDI) construct on the flow-performance link in virtual reality (VR), we developed a VR experimental environment, based on which two empirical studies were performed. Study 1 revealed FD individuals have higher dispersion degree of fixations and showed a weaker flow-performance link. Next, we provide visual cues that utilize distractors to achieve more task-oriented attention. Study 2 found it helps strengthen the task performance, as well as the flow-performance link of FD individuals without increasing distraction. This paper helps draw conclusions on the effects of human diversity on the flow-performance link in VEs and found ways to design a VR system according to individual characteristics.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"145 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88443797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
Real-time Muscle-based Facial Animation using Shell Elements and Force Decomposition
Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games Pub Date : 2020-05-04 DOI: 10.1145/3384382.3384531
Jungmin Kim, M. Choi, Young J. Kim
{"title":"Real-time Muscle-based Facial Animation using Shell Elements and Force Decomposition","authors":"Jungmin Kim, M. Choi, Young J. Kim","doi":"10.1145/3384382.3384531","DOIUrl":"https://doi.org/10.1145/3384382.3384531","url":null,"abstract":"We present a novel algorithm for physics-based real-time facial animation driven by muscle deformation. Unlike the previous works using 3D finite elements, we use a 2D shell element to avoid inefficient or undesired tessellation due to the thin structure of facial muscles. To simplify the analysis and achieve real-time performance, we adopt real-time thin shell simulation of [Choi et al. 2007]. Our facial system is composed of four layers of skin, subcutaneous layer, muscles, and skull, based on human facial anatomy. Skin and muscles are composed of shell elements, subcutaneous fatty tissue is assumed as a uniform elastic body, and the fixed part of facial muscles is handled by static position constraint. We control muscles to have stretch deformation using modal analysis and apply mass-spring force to skin mesh which is triggered by the muscle deformation. In our system, only the region of interest for skin can be affected by the muscle. To handle the coupled result of facial animation, we decouple the system according to the type of external forces applied to the skin. We show a series of real-time facial animation caused by selected major muscles that are relevant to expressive skin deformation. Our system has generality for importing new types of muscles and skin mesh when their shape or positions are changed.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"58 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72705436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
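
To make the mass-spring coupling mentioned in the abstract concrete, here is a minimal sketch of a damped Hooke spring force pulling a skin vertex toward a displaced muscle attachment point. The constants and the explicit Euler update are illustrative assumptions, not the paper's formulation or parameters.

```python
import numpy as np

def spring_force(x_skin, x_muscle, v_skin, rest_len, k=50.0, damping=0.5):
    """Damped Hooke spring between a skin vertex and a muscle attachment point.

    x_skin, x_muscle, v_skin: 3D position/velocity arrays.
    Stiffness k and damping are illustrative values only.
    """
    d = x_muscle - x_skin
    length = np.linalg.norm(d) + 1e-9
    direction = d / length
    f_spring = k * (length - rest_len) * direction   # Hooke term
    f_damp = -damping * v_skin                       # velocity damping
    return f_spring + f_damp

# One skin vertex driven by a muscle point displaced by stretch deformation,
# integrated with a simple explicit Euler step.
x_skin, v_skin = np.array([0.0, 0.0, 0.0]), np.zeros(3)
x_muscle = np.array([0.0, 0.0, 0.02])
mass, dt = 0.01, 1e-3
for _ in range(100):
    f = spring_force(x_skin, x_muscle, v_skin, rest_len=0.01)
    v_skin += dt * f / mass
    x_skin += dt * v_skin
```
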
Real-time Face Video Swapping From A Single Portrait
Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games Pub Date : 2020-05-04 DOI: 10.1145/3384382.3384519
Luming Ma, Z. Deng
{"title":"Real-time Face Video Swapping From A Single Portrait","authors":"Luming Ma, Z. Deng","doi":"10.1145/3384382.3384519","DOIUrl":"https://doi.org/10.1145/3384382.3384519","url":null,"abstract":"We present a novel high-fidelity real-time method to replace the face in a target video clip by the face from a single source portrait image. Specifically, we first reconstruct the illumination, albedo, camera parameters, and wrinkle-level geometric details from both the source image and the target video. Then, the albedo of the source face is modified by a novel harmonization method to match the target face. Finally, the source face is re-rendered and blended into the target video using the lighting and camera parameters from the target video. Our method runs fully automatically and at real-time rate on any target face captured by cameras or from legacy video. More importantly, unlike existing deep learning based methods, our method does not need to pre-train any models, i.e., pre-collecting a large image/video dataset of the source or target face for model training is not needed. We demonstrate that a high level of video-realism can be achieved by our method on a variety of human faces with different identities, ethnicities, skin colors, and expressions.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90071095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Procedural band patterns
Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games Pub Date : 2020-03-03 DOI: 10.1145/3384382.3384522
Jimmy Etienne, S. Lefebvre
{"title":"Procedural band patterns","authors":"Jimmy Etienne, S. Lefebvre","doi":"10.1145/3384382.3384522","DOIUrl":"https://doi.org/10.1145/3384382.3384522","url":null,"abstract":"We seek to cover a parametric domain with a set of evenly spaced bands which number and width varies according to a density field. We propose an implicit procedural algorithm, that generates the band pattern from a pixel shader and adapts to changes to the control fields in real time. Each band is uniquely identified by an integer. This allows a wide range of texturing effects, including specifying a different appearance in each individual bands. Our technique also affords for progressive gradations of scales, avoiding the abrupt doubling of the number of lines of typical subdivision approaches. This leads to a general approach for drawing bands, drawing splitting and merging curves, and drawing evenly spaced streamlines. Using these base ingredients, we demonstrate a wide variety of texturing effects.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"90 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85541671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
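
A minimal sketch of the band-indexing idea, under our own simplifying assumption (not the paper's construction): along one parametric axis, accumulate the density field and take the integer part as the band identifier and the fractional part as the in-band coordinate. The paper works in a pixel shader over a 2D domain and additionally handles progressive scale gradation, which this sketch omits.

```python
import numpy as np

def band_pattern(u: np.ndarray, density: np.ndarray):
    """Assign a band id and an in-band coordinate to samples of a 1D parametric axis.

    u:       sorted samples of the parametric coordinate in [0, 1]
    density: desired number of bands per unit length at each sample
    Returns (band_id, t): an integer band identifier per sample and t in [0, 1),
    the position inside that band. Illustrative only.
    """
    du = np.diff(u, prepend=u[0])
    acc = np.cumsum(density * du)      # accumulated band count along u
    band_id = np.floor(acc).astype(int)
    t = acc - band_id
    return band_id, t

u = np.linspace(0.0, 1.0, 512)
density = 8.0 + 24.0 * u               # bands get denser toward u = 1
band_id, t = band_pattern(u, density)
stripes = (band_id % 2 == 0)           # e.g. alternate appearance by band parity
```
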
I3D '20: Symposium on Interactive 3D Graphics and Games, San Francisco, CA, USA, September 15-17, 2020
{"title":"I3D '20: Symposium on Interactive 3D Graphics and Games, San Francisco, CA, USA, September 15-17, 2020","authors":"","doi":"10.1145/3384382","DOIUrl":"https://doi.org/10.1145/3384382","url":null,"abstract":"","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"28 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72897390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Interactive Continuous Collision Detection for Topology Changing Models Using Dynamic Clustering
Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games Pub Date : 2015-02-01 DOI: 10.1145/2699276.2699286
Liang He, Ricardo Ortiz, Andinet Enquobahrie, Dinesh Manocha
{"title":"Interactive Continuous Collision Detection for Topology Changing Models Using Dynamic Clustering.","authors":"Liang He,&nbsp;Ricardo Ortiz,&nbsp;Andinet Enquobahrie,&nbsp;Dinesh Manocha","doi":"10.1145/2699276.2699286","DOIUrl":"https://doi.org/10.1145/2699276.2699286","url":null,"abstract":"<p><p>We present a fast algorithm for continuous collision detection between deformable models. Our approach performs no precomputation and can handle general triangulated models undergoing topological changes. We present a fast decomposition algorithm that represents the mesh boundary using hierarchical clusters and only needs to perform inter-cluster collision checks. The key idea is to compute such clusters quickly and merge them to generate a dynamic bounding volume hierarchy. The overall approach reduces the overhead of computing the hierarchy and also reduces the number of false positives. We highlight the the algorithm's performance on many complex benchmarks generated from medical simulations and crash analysis. In practice, we observe 1.4 to 5 times speedup over prior CCD algorithms for deformable models in our benchmarks.</p>","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"2015 ","pages":"47-54"},"PeriodicalIF":0.0,"publicationDate":"2015-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/2699276.2699286","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34303767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
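
A minimal broad-phase sketch in the spirit of the clustering idea described above: bound each cluster of vertices with an axis-aligned box that covers its positions at both ends of the timestep, and only pass on pairs of clusters whose boxes overlap. The clustering itself, the hierarchy construction, and the exact continuous collision tests are the paper's contribution and are not reproduced here.

```python
import numpy as np
from itertools import combinations

def cluster_aabb(positions_t0: np.ndarray, positions_t1: np.ndarray):
    """Axis-aligned box enclosing a cluster's vertices at the start and end of the
    timestep, so it conservatively bounds the swept motion for a broad phase."""
    pts = np.vstack([positions_t0, positions_t1])
    return pts.min(axis=0), pts.max(axis=0)

def aabbs_overlap(a, b) -> bool:
    (amin, amax), (bmin, bmax) = a, b
    return bool(np.all(amin <= bmax) and np.all(bmin <= amax))

def broad_phase(clusters_t0, clusters_t1):
    """Return index pairs of clusters whose swept boxes overlap; only these pairs
    would be handed to the (omitted) exact inter-cluster CCD tests."""
    boxes = [cluster_aabb(p0, p1) for p0, p1 in zip(clusters_t0, clusters_t1)]
    return [(i, j) for i, j in combinations(range(len(boxes)), 2)
            if aabbs_overlap(boxes[i], boxes[j])]

# Two deforming clusters of vertices at the start and end of a timestep.
c0_t0, c0_t1 = np.random.rand(20, 3), np.random.rand(20, 3)
c1_t0, c1_t1 = np.random.rand(20, 3) + 2.0, np.random.rand(20, 3) + 2.0
pairs = broad_phase([c0_t0, c1_t0], [c0_t1, c1_t1])   # -> [] (clusters are far apart)
```
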
Real-time water drops and flows on glass panes
Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448240
Kai-Chun Chen, Pei-Shan Chen, Sai-Keung Wong
{"title":"Real-time water drops and flows on glass panes","authors":"Kai-Chun Chen, Pei-Shan Chen, Sai-Keung Wong","doi":"10.1145/2448196.2448240","DOIUrl":"https://doi.org/10.1145/2448196.2448240","url":null,"abstract":"Water drops and water flows exhibit interesting motion behaviors and amazing patterns on the surfaces of objects, such as leaves of plants and glass panes. Water drops and water flows are commonly seen in a rainy day. A water drop contains a small amount of water. The motion of a water drop is affected by various factors, including gravity, surface tension, cohesion force and adhesion [Zhang et al. 2012]. The situation is more complicated when we consider the roughness of the surface, surface purities and etc. Kaneda et al. [1993] proposed a discrete model of a glass plate for simulating the streams from the water droplets. The glass plate is divided into a grid. A water droplet is represented as a particle. The law of conservation of momentum is applied for merging droplets. A simple ray tracing technique is adopted for rendering the water droplets that are represented as spheres.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"5 1","pages":"192"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75171261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
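
A minimal sketch of the droplet-merging rule mentioned above: when two droplet particles meet, the merged particle keeps the total mass and the mass-weighted (momentum-conserving) velocity. Representing a droplet as a small dataclass is our simplification for the example.

```python
from dataclasses import dataclass

@dataclass
class Droplet:
    x: float      # position on the glass pane (grid coordinates)
    y: float
    vx: float     # velocity
    vy: float
    mass: float

def merge(a: Droplet, b: Droplet) -> Droplet:
    """Merge two droplets, conserving total mass and linear momentum."""
    m = a.mass + b.mass
    return Droplet(
        x=(a.mass * a.x + b.mass * b.x) / m,       # mass-weighted position
        y=(a.mass * a.y + b.mass * b.y) / m,
        vx=(a.mass * a.vx + b.mass * b.vx) / m,    # momentum conservation
        vy=(a.mass * a.vy + b.mass * b.vy) / m,
        mass=m,
    )

big = merge(Droplet(0.0, 1.0, 0.0, -0.2, 0.002), Droplet(0.1, 1.0, 0.0, -0.5, 0.001))
```
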