SIGGRAPH Asia 2020 Posters: Latest Publications

Growth-based 3D modelling using stem-voxels encoded in digital-DNA structures
Thomas Raymond, Vladislav Li, V. Argyriou
SIGGRAPH Asia 2020 Posters. Pub Date: 2020-12-04. DOI: 10.1145/3415264.3425443
Abstract: Leaps in technology are increasingly making the prospect of using biological structures as part of digital models and artwork a tangible reality. In this work, a new method for 3D modelling and animation, heavily inspired by natural biological processes, is proposed. The proposed approach differs from classic assembly or printing methodologies and offers a novel growth-based solution for the design and modelling of 3D structures. To meet the needs of growth-based modelling, new terms and graphic primitives such as stem-voxels, muscle-voxels, bone-voxels, and digital-DNA are introduced. The core production rules of a novel context-free grammar were implemented, allowing 3D model designers to build the digital-DNA of a 3D model that the introduced parser interprets into a full 3D structure. The resulting 3D models support animation using the muscle-voxels, can observe the environment using photoreceptor-voxels, and can interact with a level of intelligence based on neural networks built with nerve-voxels. The proposed solution was evaluated on a variety of volumetric models, demonstrating strong potential and impact across many applications and offering a new tool for 3D modelling systems.
Citations: 0
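The abstract's core mechanism is a grammar whose symbols are interpreted as voxel-placing growth actions. The paper's actual production rules and voxel semantics are not given here, so the sketch below invents a tiny symbol set (`GROWTH_RULES`, the `Voxel` layout, the growth cursor) purely to illustrate the idea of parsing a digital-DNA string into a voxel structure.

```python
# Hypothetical sketch of a digital-DNA interpreter. The grammar symbols,
# actions, and voxel kinds below are invented for illustration; they are
# not the paper's actual production rules.
from dataclasses import dataclass

@dataclass(frozen=True)
class Voxel:
    x: int
    y: int
    z: int
    kind: str  # e.g. "stem", "bone", "muscle" (assumed voxel kinds)

# Assumed production rules: each DNA symbol maps to a list of growth actions.
GROWTH_RULES = {
    "S": [("place", "stem")],                 # stem-voxel: a growth site
    "B": [("place", "bone"), ("up", 1)],      # bone-voxel, then grow upward
    "M": [("place", "muscle"), ("up", 1)],    # muscle-voxel (animatable)
}

def grow(dna: str) -> set[Voxel]:
    """Interpret a digital-DNA string into a voxel structure."""
    voxels, cursor = set(), [0, 0, 0]
    for symbol in dna:
        for action, arg in GROWTH_RULES.get(symbol, []):
            if action == "place":
                voxels.add(Voxel(*cursor, kind=arg))
            elif action == "up":
                cursor[2] += arg  # grow along +z
    return voxels

print(len(grow("SBBMMB")))  # -> number of voxels in the grown structure
```
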
I Need to Step Back from It! Modeling Backward Movement from Multimodal Sensors in Virtual Reality
Seungwon Paik, Kyungsik Han
SIGGRAPH Asia 2020 Posters. Pub Date: 2020-12-04. DOI: 10.1145/3415264.3425469
Abstract: A user's movement is one of the most important properties that shape user experience in a virtual reality (VR) environment. However, little research has focused on examining backward movements, and inappropriate support of such movements can lead to dizziness and disengagement in a VR program. In this paper, we investigate the possibility of detecting forward and backward movements from three different positions on the body (i.e., head, waist, and feet) by conducting a user study. Our machine-learning model detects forward and backward movements with up to 93% accuracy and shows slightly varying performance across participants. We detail the analysis of our model through the lenses of body position, integration, and sampling rate.
Citations: 0
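The abstract names the sensor positions but not the features or classifier. The sketch below is one plausible reading: windowed velocity features from head/waist/feet tracker positions fed to an off-the-shelf classifier. The (T, 9) layout, the 30-frame window, the random forest, and the synthetic data are all assumptions, not the paper's design.

```python
# Hedged sketch of a forward/backward movement classifier from multimodal
# trackers. Features, window size, and model are placeholder choices.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(positions: np.ndarray, win: int = 30) -> np.ndarray:
    """positions: (T, 9) head/waist/feet xyz per frame (assumed layout).
    Returns per-window mean velocities as features."""
    vel = np.diff(positions, axis=0)                    # frame-to-frame deltas
    n = vel.shape[0] // win
    return vel[: n * win].reshape(n, win, -1).mean(axis=1)

# Toy training data: label 1 = forward drift, 0 = backward drift (synthetic).
rng = np.random.default_rng(0)
fwd = np.cumsum(rng.normal(0.01, 0.005, (900, 9)), axis=0)
bwd = np.cumsum(rng.normal(-0.01, 0.005, (900, 9)), axis=0)
X = np.vstack([window_features(fwd), window_features(bwd)])
y = np.array([1] * (len(X) // 2) + [0] * (len(X) // 2))

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```
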
Detailed 3D Face Reconstruction from Single Images Via Self-supervised Attribute Learning
Mingxin Yang, Jianwei Guo, Juntao Ye, Xiaopeng Zhang
SIGGRAPH Asia 2020 Posters. Pub Date: 2020-12-04. DOI: 10.1145/3415264.3425455
Abstract: We present a novel approach to reconstructing a high-fidelity geometric human face model from a single RGB image. The main idea is to add details to a coarse 3D Morphable Model (3DMM)-based model in a self-supervised way. Our observation is that most facial details, such as wrinkles, are driven by expression and by intrinsic facial characteristics, which we refer to here as the facial attribute. To this end, we propose an expression-related detail recovery scheme and a facial attribute representation.
Citations: 0
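A coarse-plus-detail pipeline of this kind typically composes a linear 3DMM with a per-vertex displacement layer. The numpy sketch below shows that structure only; the basis sizes, random bases, and placeholder normals are illustrative stand-ins, and the paper's learned attribute/detail predictor is not reproduced here.

```python
# Minimal sketch of coarse 3DMM reconstruction plus a detail layer:
# face = mean + id_basis @ id_coeffs + exp_basis @ exp_coeffs, then a scalar
# displacement along vertex normals adds fine detail. All sizes are assumed.
import numpy as np

V, K_ID, K_EXP = 5000, 80, 64          # assumed vertex and basis counts
rng = np.random.default_rng(0)
mean_shape = rng.normal(size=(V, 3))
id_basis = rng.normal(size=(V * 3, K_ID))
exp_basis = rng.normal(size=(V * 3, K_EXP))

def coarse_face(id_coeffs, exp_coeffs):
    """Linear 3DMM: mean shape plus identity and expression offsets."""
    offset = id_basis @ id_coeffs + exp_basis @ exp_coeffs
    return mean_shape + offset.reshape(V, 3)

def add_detail(verts, normals, displacement):
    """Per-vertex scalar displacement along normals (the 'detail' layer)."""
    return verts + displacement[:, None] * normals

face = coarse_face(rng.normal(size=K_ID) * 0.01, rng.normal(size=K_EXP) * 0.01)
normals = np.tile([0.0, 0.0, 1.0], (V, 1))   # placeholder normals
detailed = add_detail(face, normals, rng.normal(size=V) * 0.001)
print(detailed.shape)  # (5000, 3)
```
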
Creating a Virtual Space Globe Using the Hipparcos Catalog
Yasuo Kawai
SIGGRAPH Asia 2020 Posters. Pub Date: 2020-12-04. DOI: 10.1145/3415264.3425460
Abstract: Star charts and planetariums can be used to identify objects in the sky. However, they show the sky as a surface viewed from Earth, so we cannot understand the relative distances between celestial objects in three dimensions. Therefore, in this study, we developed a system to convey such distances by placing the stars in a virtual space. The system allows users to move freely through a large-scale space using a game engine and the Hipparcos catalog. Users can intuitively perceive the relative distances of stars by understanding the three-dimensional configuration of the stars and constellations as seen from Earth.
Citations: 0
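Placing Hipparcos stars in a 3D scene comes down to converting each catalog entry's right ascension, declination, and parallax into Cartesian coordinates, using the standard relation that distance in parsecs is 1000 divided by the parallax in milliarcseconds. The sketch below shows that conversion; any engine-side unit scaling is left out.

```python
# Minimal sketch: equatorial coordinates + parallax -> 3D position in parsecs.
import math

def star_position(ra_deg: float, dec_deg: float, parallax_mas: float):
    """Return (x, y, z) in parsecs from RA/Dec (degrees) and parallax (mas)."""
    dist_pc = 1000.0 / parallax_mas          # d [pc] = 1000 / parallax [mas]
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    x = dist_pc * math.cos(dec) * math.cos(ra)
    y = dist_pc * math.cos(dec) * math.sin(ra)
    z = dist_pc * math.sin(dec)
    return x, y, z

# Sirius (HIP 32349): RA ~101.287 deg, Dec ~-16.716 deg, parallax ~379.21 mas.
print(star_position(101.287, -16.716, 379.21))  # ~2.6 pc from Earth
```
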
Creation of Interactive Dollhouse with Projection Mapping and Measurement of Distance and Pressure Sensors
Makoto Jimbu, Minori Yoshida, Hiro Bizen, Yasuo Kawai
SIGGRAPH Asia 2020 Posters. Pub Date: 2020-12-04. DOI: 10.1145/3415264.3425461
Abstract: In children's development, playing with dolls contributes to the formation of the self, creativity, and social development. In typical dollhouse play, children move the dolls and imagine the world of the dollhouse. In this study, we installed distance and pressure sensors in a dollhouse and used projection mapping to project interactive images driven by the measured values. Generating images that correspond to the movement of the dolls and the child's hand in real time, and reflecting them in the dollhouse, makes it possible to expand doll play.
Citations: 1
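The system's loop is essentially: read sensor values, map them to a projected scene, repeat. The sketch below shows only that control structure; the thresholds, scene names, and sensor/projector callbacks are all invented, since the paper does not specify its hardware interface.

```python
# Hypothetical sketch of the dollhouse sensing loop. Thresholds and scene
# names are placeholders, not the system's actual values.
def choose_scene(distance_cm: float, pressure: float) -> str:
    """Map raw sensor readings to a projection-mapping scene (assumed rules)."""
    if pressure > 0.5:            # a doll placed on the pressure pad
        return "doll_seated"
    if distance_cm < 10.0:        # a hand approaching the distance sensor
        return "hand_nearby"
    return "idle"

def run_loop(read_distance, read_pressure, project):
    """Poll sensors and update the projection; all callbacks are stubs."""
    while True:
        project(choose_scene(read_distance(), read_pressure()))

print(choose_scene(8.0, 0.1))  # -> "hand_nearby"
```
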
Casual Real-World VR using Light Fields
Yusuke Tomoto, Srinivas Rao, Tobias Bertel, Krunal Chande, Christian Richardt, Stefan Holzer, Rodrigo Ortiz Cayon
SIGGRAPH Asia 2020 Posters. Pub Date: 2020-12-04. DOI: 10.1145/3415264.3425452
Abstract: Virtual reality (VR) would benefit from more end-to-end systems centered around a casual capturing procedure, high-quality visual results, and representations that are viewable on multiple platforms. We present an end-to-end system designed for casual creation of real-world VR content using a smartphone. We use an AR app to capture a linear light field of a real-world object by recording a video sweep around the object. We predict multiplane images for a subset of input viewpoints, from which we extract high-quality textured geometry that is used for real-time image-based rendering suitable for VR. The round-trip time of our system, from guided capture to interactive display, is typically 1–2 minutes per scene.
Citations: 1
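The multiplane-image (MPI) representation the pipeline predicts is rendered by alpha-compositing a stack of fronto-parallel RGBA planes. The sketch below shows the standard back-to-front "over" blend on placeholder data; the system's full renderer also reprojects each plane into the novel view, which is omitted here.

```python
# Sketch of MPI compositing: planes ordered back to front are blended with
# the "over" operator. Plane count and resolution are placeholders.
import numpy as np

def composite_mpi(rgb: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """rgb: (D, H, W, 3), alpha: (D, H, W, 1), planes ordered back to front."""
    out = np.zeros(rgb.shape[1:])
    for c, a in zip(rgb, alpha):
        out = c * a + out * (1.0 - a)   # nearer plane composited over result
    return out

rng = np.random.default_rng(0)
planes_rgb = rng.uniform(size=(32, 64, 64, 3))
planes_alpha = rng.uniform(size=(32, 64, 64, 1))
print(composite_mpi(planes_rgb, planes_alpha).shape)  # (64, 64, 3)
```
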
Interactive 3D Model Generation from Character Illustration
Nozomi Isami, Y. Sakamoto
SIGGRAPH Asia 2020 Posters. Pub Date: 2020-12-04. DOI: 10.1145/3415264.3425458
Abstract: Recently, methods for generating 3D models from images have been proposed. These methods can generate a realistic 3D human model from an image by focusing on body shape. However, it can be difficult for them to generate a user-desired 3D character model from a character illustration, because each illustrated character has unique features, especially in the lengths of its body parts. To reflect these unique features in 3D models, we propose a novel interactive method for generating 3D models from character illustrations. Our method modifies 3D models to match the user's intentions interactively, based on constraints from input poses and symmetrical bone lengths. The experimental results show the effectiveness of our method.
Citations: 0
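One concrete reading of the symmetrical-bone-length constraint is to snap each mirrored left/right bone pair to a shared length while preserving bone directions. The sketch below does exactly that on a toy skeleton; the joint names, pairing, and averaging rule are assumptions for illustration, not the paper's formulation.

```python
# Hedged sketch of a symmetric bone-length constraint on a toy skeleton.
import numpy as np

def symmetrize(joints: dict, pairs: list) -> dict:
    """pairs: ((parentL, childL), (parentR, childR)) mirrored bone pairs.
    Rescales each bone of a pair to their average length, keeping direction."""
    joints = {k: np.asarray(v, float) for k, v in joints.items()}
    for (pl, cl), (pr, cr) in pairs:
        ll = np.linalg.norm(joints[cl] - joints[pl])
        lr = np.linalg.norm(joints[cr] - joints[pr])
        target = 0.5 * (ll + lr)                        # shared symmetric length
        for p, c, length in ((pl, cl, ll), (pr, cr, lr)):
            direction = (joints[c] - joints[p]) / length
            joints[c] = joints[p] + direction * target  # rescale, keep direction
    return joints

skel = {"hip": [0, 0, 0], "l_knee": [0.1, -0.5, 0], "r_knee": [-0.1, -0.4, 0]}
print(symmetrize(skel, [(("hip", "l_knee"), ("hip", "r_knee"))]))
```
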
Variable Rate Ray Tracing for Virtual Reality
Jinyuan Yang, Xiaoli Li, A. Campbell
SIGGRAPH Asia 2020 Posters. Pub Date: 2020-12-04. DOI: 10.1145/3415264.3425451
Abstract: This short paper presents a method called Variable Rate Ray Tracing that reduces performance cost with minimal quality loss when using hardware-accelerated ray tracing for virtual reality. The method is applied at the ray generation stage of the ray tracing pipeline to vary the ray tracing rate based on scene-specific information. It uses three different control policies to effectively reduce the number of rays generated per second for various needs. In our benchmark, the method improves frames per second (FPS) by more than 30% on current mainstream graphics hardware and virtual reality devices.
Citations: 2
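The abstract names three control policies without detailing them, so the sketch below shows just one plausible policy for a VR setting: the per-tile ray-sampling rate decays with distance from the gaze point (foveation). The falloff function, decay scale, and rate bounds are all assumptions.

```python
# Illustrative sketch of one possible ray-rate control policy (foveated
# falloff). This is an assumed policy, not the paper's actual one.
import math

def rays_per_tile(tile_cx, tile_cy, gaze_x, gaze_y, full_rate=4, min_rate=1):
    """Return ray samples per pixel for a tile, decaying with gaze distance."""
    dist = math.hypot(tile_cx - gaze_x, tile_cy - gaze_y)  # pixels from gaze
    falloff = math.exp(-dist / 400.0)                      # assumed decay scale
    return max(min_rate, round(full_rate * falloff))

# Tiles near the gaze trace at the full rate; peripheral tiles trace fewer rays.
print(rays_per_tile(960, 540, 960, 540))   # center: 4 rays/pixel
print(rays_per_tile(100, 100, 960, 540))   # periphery: 1 ray/pixel
```
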
SIGGRAPH Asia 2020 Posters (proceedings front matter)
DOI: 10.1145/3415264
Citations: 0