Eurographics ... Workshop on 3D Object Retrieval (EG 3DOR): Latest Publications

Fabric Appearance Benchmark
S. Merzbach, R. Klein
DOI: 10.2312/egp.20201035 · pp. 3–4 · 2020-01-01
Abstract: Appearance modeling is a difficult problem that still receives considerable attention from the graphics and vision communities. Though recent years have brought a growing number of high-quality material databases that have sparked new research, there is a general lack of evaluation benchmarks for performance assessment and fair comparisons between competing works. We therefore release a new dataset and pose a public challenge that will enable standardized evaluations. For this we measured 56 fabric samples with a commercial appearance scanner. We publish the resulting calibrated HDR images, along with baseline SVBRDF fits. The challenge is to recreate, under known light and view sampling, the appearance of a subset of unseen images. User submissions will be automatically evaluated and ranked by a set of standard image metrics.
CCS Concepts: Computing methodologies → Reflectance modeling; Appearance and texture representations
Citations: 1
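Submissions to the benchmark are ranked automatically by standard image metrics. As an illustration of what such scoring involves, here is a minimal PSNR implementation for float images (a generic metric chosen for illustration; the challenge's exact metric set is not specified in the abstract):

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two float images."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a uniform error of 0.1 on a 4x4 "image" gives MSE = 0.01.
ref = np.zeros((4, 4))
est = ref + 0.1
print(round(psnr(ref, est), 2))  # 20.0 dB for peak = 1.0
```

Ranking a set of submissions then reduces to sorting them by such scores against the held-out reference images.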
From Perception to Interaction with Virtual Characters
E. Zell, Katja Zibrek, Xueni Pan, M. Gillies, R. Mcdonnell
DOI: 10.2312/egt.20201001 · pp. 5–31 · 2020-01-01
Abstract: This course will introduce students, researchers and digital artists to the recent results in perceptual research on virtual characters. It covers how technical and artistic aspects that constitute the appearance of a virtual character influence human perception, and how to create a plausibility illusion in interactive scenarios with virtual characters. We will report results of studies that addressed the influence of low-level cues like facial proportions, shading or level of detail and higher-level cues such as behavior or artistic stylization. We will place emphasis on aspects that are encountered during character development, animation, interaction design and achieving consistency between the visuals and storytelling. We will close with the relationship between verbal and non-verbal interaction and introduce some concepts which are important for creating convincing character behavior in virtual reality. The insights that we present in this course will serve as an additional toolset to anticipate the effect of certain design decisions and to create more convincing characters, especially where budgets or time are limited.
Virtual humans are finding a growing number of applications, such as social media apps, Spaces by Facebook, Bitmoji and Genies, as well as computer games and human-computer interfaces. Their use today has also extended from typical on-screen display applications to immersive and collaborative environments (VR/AR/MR). At the same time, we are witnessing significant improvements in real-time performance, increased visual fidelity of characters and novel devices. The question of how these developments will be received from the user's point of view, or which aspects of virtual characters influence the user more, has therefore never been so important. This course will provide an overview of existing perceptual studies related to the topic of virtual characters. To make the course easier to follow, we start with a brief overview of human perception and how perceptual studies are conducted in terms of methods and experiment design. With knowledge of the methods, we continue with artistic and technical aspects which influence the design of character appearance (lighting and shading, facial feature placement, stylization, etc.). Important questions on character design will be addressed, such as: if I want my character to be highly appealing, should I render with realistic or stylized shading? What facial features make my character appear more trustworthy? Do dark shadows enhance the emotion my character is portraying? We then dive deeper into the movement of the characters, exploring which information is present in the motion cues and how motion can, in combination with character appearance, guide our perception and even be a foundation of biased perception (stereotypes). Some examples of questions that we will address are: if I want my character to appear extroverted, what movement or app…
Citations: 2
Procedural 3D Asteroid Surface Detail Synthesis
Xizhi Li, René Weller, G. Zachmann
DOI: 10.2312/egs.20201020 · pp. 69–72 · 2020-01-01
Abstract: We present a novel noise model to procedurally generate volumetric terrain on implicit surfaces. The main idea is to combine a novel Locally Controlled 3D Spot noise (LCSN) for authoring the macro structures with 3D Gabor noise to add micro details. More specifically, a spatially-defined kernel formulation in combination with an impulse distribution enables the LCSN to generate craters and boulders of arbitrary size, while the Gabor noise generates stochastic Gaussian details. The corresponding metaball positions in the underlying implicit surface preserve locality, avoiding the globality of traditional procedural noise textures; this yields an essential feature that is often missing in procedural texture based terrain generators. Furthermore, different noise-based primitives are integrated through operators, i.e., blending, replacing, or warping, into the complex volumetric terrain. The result is a completely implicit representation and, as such, has the advantage of compactness as well as flexible user control. We applied our method to generating high quality asteroid meshes with fine surface details.
CCS Concepts: Computing methodologies → Volumetric models
Citations: 0
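The micro-detail component builds on sparse Gabor noise: a sum of Gaussian-windowed cosine kernels placed at random impulse positions. A minimal sketch of that textbook construction follows (the paper's LCSN macro-structure noise is not reproduced here, and all parameter values are illustrative):

```python
import numpy as np

def gabor_kernel(d, a, f0, omega):
    """Gaussian-windowed cosine (Gabor kernel) evaluated at offset d."""
    r2 = np.dot(d, d)
    return np.exp(-np.pi * a * a * r2) * np.cos(2.0 * np.pi * f0 * np.dot(d, omega))

def gabor_noise_3d(p, impulses, a=2.0, f0=4.0):
    """Sum sparse Gabor kernels; impulses = list of (position, weight, unit direction)."""
    return sum(w * gabor_kernel(p - xi, a, f0, omega) for xi, w, omega in impulses)

rng = np.random.default_rng(7)
impulses = []
for _ in range(64):
    xi = rng.uniform(0.0, 1.0, size=3)   # random impulse position
    w = rng.choice([-1.0, 1.0])          # random kernel weight
    omega = rng.normal(size=3)
    omega /= np.linalg.norm(omega)       # random orientation -> isotropic noise
    impulses.append((xi, w, omega))

print(gabor_noise_3d(np.array([0.5, 0.5, 0.5]), impulses))
```

In a production implementation the impulses come from a Poisson process evaluated per grid cell so only nearby kernels are summed; the brute-force sum above is enough to show the structure of the noise.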
A Practical Male Hair Aging Model
D. Volkmann, M. Walter
DOI: 10.2312/egs.20201017 · pp. 57–60 · 2020-01-01
Abstract: The modeling and rendering of hair in Computer Graphics have seen much progress in the last few years. However, modeling and rendering hair aging, visually seen as the loss of pigments, have not attracted the same attention. We introduce in this paper a biologically inspired hair aging system with two main parts: greying of individual hairs, and time evolution of greying over the scalp. The greying of individual hairs is based on current knowledge about melanin loss, whereas the evolution over the scalp is modeled by segmenting the scalp into regions and defining distinct time frames for greying to occur. Our visual results are plausible despite the relatively simple model. We validate them by presenting our results side by side with real pictures of men at different ages.
Citations: 2
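The scalp-evolution part of the model segments the scalp into regions, each with its own greying time frame. A tiny sketch of that idea (region names, onset ages and durations below are hypothetical, and the paper's melanin-loss model for individual hairs is not reproduced):

```python
def region_greyness(age, onset, duration):
    """Fraction of hairs gone grey in one scalp region, ramping linearly over time."""
    return min(1.0, max(0.0, (age - onset) / duration))

# Hypothetical (onset age, greying duration) per scalp region.
REGIONS = {"temples": (35, 15), "crown": (45, 15), "nape": (55, 20)}

def scalp_greyness(age):
    """Per-region grey fraction at a given age."""
    return {name: region_greyness(age, onset, dur)
            for name, (onset, dur) in REGIONS.items()}

print(scalp_greyness(50))  # temples fully grey, crown partially, nape not yet
```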
First Order Signed Distance Fields
Róbert Bán, Gábor Valasek
DOI: 10.2312/egs.20201011 · pp. 33–36 · 2020-01-01
Abstract: This paper investigates a first order generalization of signed distance fields. We show that we can improve accuracy and storage efficiency by incorporating the spatial derivatives of the signed distance function into the distance field samples. We show that a representation in power basis remains invariant under barycentric combination and, as such, is interpolated exactly by the GPU. Our construction is applicable in any geometric setting where point-surface distances can be queried. To emphasize the practical advantages of this approach, we apply our results to signed distance field generation from triangular meshes. We propose storage optimization approaches and offer a theoretical and empirical accuracy analysis of our proposed distance field type in relation to traditional, zero order distance fields. We show that the proposed representation may offer an order of magnitude improvement in storage while retaining the same precision as a higher resolution distance field.
CCS Concepts: Computing methodologies → Ray tracing; Volumetric models
Citations: 4
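The core idea, storing a gradient alongside each distance sample, can be sketched in 2D: every cell corner contributes its local tangent plane d_c + g_c · (x - c), and the planes are blended with the usual bilinear weights. This is only a sketch of the concept, not the paper's exact power-basis formulation; note that it reproduces any planar (linear) SDF exactly, which a zero-order field cannot:

```python
import numpy as np

def eval_first_order_sdf(x, corners):
    """First-order SDF over the unit cell. corners: {(i, j): (distance, gradient)}."""
    u, v = x
    value = 0.0
    for (i, j), (d, grad) in corners.items():
        w = (u if i else 1.0 - u) * (v if j else 1.0 - v)      # bilinear weight
        local = d + np.dot(grad, x - np.array([i, j], float))  # corner's tangent plane
        value += w * local
    return value

# Exactness check on a planar SDF  f(x) = n.x - 0.25  with |n| = 1.
n = np.array([0.6, 0.8])
corners = {(i, j): (float(np.dot(n, [i, j])) - 0.25, n)
           for i in (0, 1) for j in (0, 1)}
x = np.array([0.3, 0.7])
print(abs(eval_first_order_sdf(x, corners) - (np.dot(n, x) - 0.25)) < 1e-12)  # True
```

Because each corner's plane already equals the true planar field everywhere and the weights sum to one, the blend is exact; curvature is what introduces error, which is why the first-order field can match a much finer zero-order grid.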
Deep-Eyes: Fully Automatic Anime Character Colorization with Painting of Details on Empty Pupils
Kenta Akita, Yuki Morimoto, R. Tsuruno
DOI: 10.2312/egs.20201023 · pp. 81–84 · 2020-01-01
Abstract: Many studies have recently applied deep learning to the automatic colorization of line drawings. However, it is difficult to paint empty pupils using existing methods, because the networks are trained with pupils that have edges, which are generated from color images using image processing. Most actual line drawings have empty pupils that artists must paint in. In this paper, we propose a novel network model that transfers the pupil details in a reference color image to input line drawings with empty pupils. We also propose a method for accurately and automatically coloring eyes. In this method, eye patches are extracted from a reference color image and automatically added to an input line drawing as color hints using our eye position estimation network.
CCS Concepts: Computing methodologies → Image processing; Applied computing → Fine arts
Citations: 2
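The hinting step, cropping eye patches from the reference image and pasting them onto the line drawing at the estimated eye positions, is mechanically simple. A sketch follows, with the caveat that the eye centers here are given directly, whereas the paper estimates them with a dedicated network:

```python
import numpy as np

def extract_patches(image, centers, size):
    """Crop square patches of side `size` around each (row, col) center."""
    half = size // 2
    patches = []
    for r, c in centers:
        r0, c0 = max(0, r - half), max(0, c - half)
        patches.append(image[r0:r0 + size, c0:c0 + size].copy())
    return patches

def paste_hints(canvas, patches, centers, size):
    """Paste color-hint patches onto a line-drawing canvas at the eye positions."""
    half = size // 2
    for patch, (r, c) in zip(patches, centers):
        r0, c0 = max(0, r - half), max(0, c - half)
        canvas[r0:r0 + patch.shape[0], c0:c0 + patch.shape[1]] = patch
    return canvas

reference = np.random.default_rng(1).random((64, 64, 3))  # stand-in color reference
eyes = [(20, 22), (20, 42)]                               # hypothetical eye centers
hints = extract_patches(reference, eyes, size=8)
canvas = paste_hints(np.ones((64, 64, 3)), hints, eyes, size=8)
```

The augmented canvas (line drawing plus pasted color hints) is what the colorization network would then consume.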
SHREC 2020 Track: Non-rigid Shape Correspondence of Physically-Based Deformations
R. Dyke, F. Zhou, Yu-Kun Lai, Paul L. Rosin, D. Guo, Kun Li, R. Marin, Jingyu Yang
DOI: 10.2312/3dor.20201161 · pp. 19–26 · 2020-01-01
Citations: 5
Designing a Course on Non-Photorealistic Rendering
Ivaylo Ilinkin
DOI: 10.2312/eged.20201028 · pp. 9–16 · 2020-01-01
Abstract: This paper presents a course design on Non-Photorealistic Rendering (NPAR). As a sub-field of computer graphics, NPAR aims to model artistic media, styles, and techniques that capture salient characteristics in images to convey particular information or mood. The results can be just as inspiring as the photorealistic scenes produced with the latest ray-tracing techniques, even though the goals are fundamentally different. The paper offers ideas for developing a full course on NPAR by presenting a series of assignments that cover a wide range of NPAR techniques, and shares experience on teaching such a course at the junior/senior undergraduate level.
CCS Concepts: Computing methodologies → Non-photorealistic rendering
Citations: 0
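A classic assignment in such a course is cel (toon) shading, which quantizes the Lambertian diffuse term into a few discrete bands. A minimal sketch (the paper does not list its assignments' code; this is just a representative NPR technique):

```python
import numpy as np

def toon_diffuse(normal, light_dir, levels=3):
    """Quantize Lambertian shading into `levels` discrete bands for a cartoon look."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    diffuse = max(0.0, float(np.dot(n, l)))    # standard clamped Lambert term
    return np.floor(diffuse * levels) / levels  # snap to the nearest lower band

# Light hitting the surface head-on lands in the brightest band.
print(toon_diffuse(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])))  # 1.0
```

In a shader the same quantization is typically done per fragment, often with a small smoothstep at band boundaries to avoid aliasing.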
Frequency-Aware Reconstruction of Fluid Simulations with Generative Networks
Simon Biland, V. C. Azevedo, Byungsoo Kim, B. Solenthaler
DOI: 10.2312/egs.20201019 · pp. 65–68 · 2019-12-18
Abstract: Convolutional neural networks were recently employed to fully reconstruct fluid simulation data from a set of reduced parameters. However, since (de-)convolutions traditionally trained with supervised L1-loss functions do not discriminate between low and high frequencies in the data, the error is not minimized efficiently for higher bands. This directly correlates with the quality of the perceived results, since missing high frequency details are easily noticeable. In this paper, we analyze the reconstruction quality of generative networks and present a frequency-aware loss function that is able to focus on specific bands of the dataset during training time. We show that our approach improves reconstruction quality of fluid simulation data in mid-frequency bands, yielding perceptually better results while requiring comparable training time.
Citations: 16
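The idea of a loss that can focus on specific frequency bands can be sketched as an L1 error accumulated per radial band of the error spectrum, with one weight per band. This is only an illustration of the concept; the paper's actual loss formulation, band definitions and weights may differ:

```python
import numpy as np

def frequency_band_loss(pred, target, band_edges, band_weights):
    """L1 error accumulated per radial band of the 2D error spectrum.

    band_edges are the radii (in normalized frequency, max ~0.71) closing each
    band; band_weights focus training on the chosen bands.
    """
    err = np.fft.fftshift(np.fft.fft2(pred - target))
    h, w = err.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)  # radial frequency per bin

    loss, lo = 0.0, 0.0
    for hi, weight in zip(band_edges, band_weights):
        mask = (radius >= lo) & (radius < hi)
        if mask.any():
            loss += weight * np.abs(err[mask]).mean()
        lo = hi
    return loss

rng = np.random.default_rng(0)
pred, target = rng.normal(size=(32, 32)), np.zeros((32, 32))
# Up-weight the mid band, where the paper reports its quality gains.
print(frequency_band_loss(pred, target, [0.1, 0.3, 0.71], [1.0, 3.0, 1.0]))
```

In a training pipeline the same computation would run on framework tensors (with a differentiable FFT) rather than NumPy arrays, so gradients can flow back to the generator.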
VITON-GAN: Virtual Try-on Image Generator Trained with Adversarial Loss
Shion Honda
DOI: 10.2312/egp.20191043 · pp. 9–10 · 2019-11-12
Abstract: Generating a virtual try-on image from in-shop clothing images and a model person's snapshot is a challenging task, because the human body and clothes have high flexibility in their shapes. In this paper, we develop a Virtual Try-on Generative Adversarial Network (VITON-GAN) that generates virtual try-on images from images of in-shop clothing and a model person. The method enhances the quality of the generated image when occlusion is present in the model person's image (e.g., arms crossed in front of the clothes) by adding an adversarial mechanism to the training pipeline.
Citations: 15
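The abstract says only that an adversarial mechanism is added to the training pipeline; a generic sketch of the standard GAN loss terms on discriminator outputs follows (an assumption for illustration, as the paper's exact formulation may differ):

```python
import numpy as np

def bce(prob, label):
    """Binary cross-entropy between discriminator probabilities and a target label."""
    eps = 1e-7
    prob = np.clip(prob, eps, 1.0 - eps)  # guard against log(0)
    return -(label * np.log(prob) + (1.0 - label) * np.log(1.0 - prob)).mean()

def discriminator_loss(d_real, d_fake):
    # Real try-on photos should score 1, generated images 0.
    return bce(d_real, 1.0) + bce(d_fake, 0.0)

def generator_adversarial_loss(d_fake):
    # Non-saturating form: the generator pushes D's output on fakes toward 1.
    return bce(d_fake, 1.0)

d_real = np.array([0.9, 0.8])  # D's scores on real images
d_fake = np.array([0.2, 0.1])  # D's scores on generated images
print(discriminator_loss(d_real, d_fake), generator_adversarial_loss(d_fake))
```

In practice the adversarial term is added, with a small weight, to the reconstruction losses of the try-on generator, which is what sharpens occluded regions such as crossed arms.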