SIGGRAPH Asia 2021 Posters: Latest Publications

A Robust Display Delay Compensation Technique Considering User's Head Motion Direction for Cloud XR
Tatsuya Kobayashi, Tomoaki Konno, H. Kato
SIGGRAPH Asia 2021 Posters, Pub Date: 2021-12-14, DOI: 10.1145/3476124.3488648
Abstract: Conventionally, it has been difficult to realize both photorealistic and geometrically consistent 3DCG rendering on head-mounted displays (HMDs), due to the trade-off between rendering quality and low motion-to-photon latency (M2PL). To solve this problem, we propose a novel rendering framework in which the server renders RGB-D images of a 3D model from optimally arranged rendering viewpoints, and the client HMD geometrically compensates for M2PL by employing depth-image-based rendering (DIBR). Experiments with real smart glasses show that the proposed method can display binocular images closer to the ground truth than conventional approaches.
Citations: 0
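The client-side compensation described in the abstract above (warping a server-rendered RGB-D image to the HMD's latest head pose) can be sketched as a per-pixel 3D forward warp. This is a minimal, illustrative DIBR sketch, not the authors' implementation; the function name, the pinhole intrinsics `K`, and the 4x4 relative pose `T_src_to_dst` are assumptions.

```python
import numpy as np

def dibr_warp(rgb, depth, K, T_src_to_dst):
    """Forward-warp an RGB-D image rendered at the server's pose to the
    client's latest head pose (illustrative DIBR sketch)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    # Homogeneous pixel coordinates, 3 x N.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Back-project each pixel to a 3D point using its depth.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    # Transform into the destination (latest head) pose and re-project.
    pts_dst = (T_src_to_dst @ pts_h)[:3]
    proj = K @ pts_dst
    z = proj[2]
    valid = z > 1e-6
    u2 = np.round(proj[0] / np.where(valid, z, 1.0)).astype(int)
    v2 = np.round(proj[1] / np.where(valid, z, 1.0)).astype(int)
    out = np.zeros_like(rgb)
    zbuf = np.full((h, w), np.inf)  # resolve occlusions with a z-buffer
    src_u, src_v = pix[0].astype(int), pix[1].astype(int)
    for i in np.nonzero(valid)[0]:
        x, y = u2[i], v2[i]
        if 0 <= x < w and 0 <= y < h and z[i] < zbuf[y, x]:
            zbuf[y, x] = z[i]
            out[y, x] = rgb[src_v[i], src_u[i]]
    return out
```

With an identity relative pose the warp reproduces the input image; in practice the client would use the pose difference between render time and display time, and fill disocclusion holes.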
Can Shadows Create a Sense of Depth to Mid-air Image?
Yutaro Yano, Ayami Hoshi, Naoya Koizumi
SIGGRAPH Asia 2021 Posters, Pub Date: 2021-12-14, DOI: 10.1145/3476124.3488625
Abstract: Shadows are an important factor in the perception of object position and shape. In this study, we investigated the effect of shadows on the perception of the shape of a mid-air image by projecting shadows of different shapes onto a mid-air image displayed in real space. Specifically, participants viewed one oval cylinder with a shadow and one without, and were forced to choose which had the greater thickness in the depth direction. We found that shadow shapes can change the perceived thickness of the mid-air image.
Citations: 0
A Liquid Sound Retrieval using History of Velocities in Physically-based Simulation
Hyuga Saito, Syuhei Sato, Y. Dobashi
SIGGRAPH Asia 2021 Posters, Pub Date: 2021-12-14, DOI: 10.1145/3476124.3488643
Abstract: This paper presents a novel method for synthesizing sound effects for fluid animation. Previous approaches synthesize fluid sound by physically-based simulation, but incur a huge computational cost. To address this, we propose a data-driven method for synthesizing sound effects for fluids, focusing on liquid sound. A liquid sound database consisting of a set of recorded sound clips is prepared in advance, and the most suitable clip for an input liquid motion is automatically retrieved from the database. The retrieval is achieved by comparing the waveform of each sound clip with a history of the liquid's velocity computed by the simulation. The velocity history is computed for the regions where liquid sound is expected to occur. A distance between the velocity history and the waveform of each clip is then calculated, and our system chooses the clip with the minimum distance. Once the database is prepared, our method achieves fast synthesis of liquid sound for simulated liquid motion. In this paper, we use a small database containing a few sound clips and evaluate the effectiveness of our retrieval approach as a preliminary experiment.
Citations: 1
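The retrieval step in the abstract above (compare the simulated velocity history against each recorded clip and pick the minimum-distance clip) can be sketched as follows. This is an illustrative sketch, not the paper's exact distance measure; the envelope downsampling, the normalization, and all names are assumptions.

```python
import numpy as np

def retrieve_liquid_sound(velocity_history, sound_db, frame_rate, sample_rate):
    """Return the index of the clip in sound_db whose amplitude envelope
    best matches the simulated velocity history (illustrative sketch)."""
    best_idx, best_dist = None, np.inf
    n = len(velocity_history)
    hop = int(sample_rate / frame_rate)  # audio samples per simulation frame
    # Normalize the velocity history so the comparison is scale-invariant.
    v = velocity_history / (np.abs(velocity_history).max() or 1.0)
    for idx, clip in enumerate(sound_db):
        # Downsample the clip's amplitude envelope to the simulation frame rate.
        env = np.array([np.abs(clip[i * hop:(i + 1) * hop]).max(initial=0.0)
                        for i in range(n)])
        e = env / (env.max() or 1.0)
        dist = np.linalg.norm(v - e)  # Euclidean distance between histories
        if dist < best_dist:
            best_idx, best_dist = idx, dist
    return best_idx
```

For example, a velocity history that pulses on and off should retrieve a pulsing splash recording over a clip of constant amplitude.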
Red versus blue: Slime mold civil war
T. McGraw, B. Ferdousi
SIGGRAPH Asia 2021 Posters, Pub Date: 2021-12-14, DOI: 10.1145/3476124.3488619
Citations: 2
Development of a Wearable Embedded System providing Tactile and Kinesthetic Haptics Feedback for 3D Interactive Applications
M. Roumeliotis, K. Mania
SIGGRAPH Asia 2021 Posters, Pub Date: 2021-12-14, DOI: 10.1145/3476124.3488653
Abstract: Existing haptic interfaces providing both tactile and kinesthetic feedback for virtual object manipulation are still bulky, expensive, and often grounded, limiting users' motion. In this work, we present a wearable, lightweight, and affordable embedded system that provides both tactile and kinesthetic feedback in 3D applications. We created a PCB for the circuitry and used inexpensive components. Kinesthetic feedback is delivered to the user's hand through a 3D-printed exoskeleton and five servo motors placed on the back of the glove. Tactile feedback is delivered through fifteen coin vibration motors, placed on the inner side of the hand and vibrating at three intensity levels. The system is ideal for prototyping and can be customized, making it scalable and upgradable.
Citations: 1
BridgedReality: A Toolkit Connecting Physical and Virtual Spaces through Live Holographic Point Cloud Interaction
Mark Armstrong, Lawrence Quest, Yun Suen Pai, K. Kunze, K. Minamizawa
SIGGRAPH Asia 2021 Posters, Pub Date: 2021-12-14, DOI: 10.1145/3476124.3488656
Abstract: The recent emergence of point cloud streaming technologies has spawned new ways to digitally perceive and manipulate live data of users and spaces. Graphical rendering limitations prevent state-of-the-art interaction techniques from achieving segmented bare-body user input for manipulating live point cloud data. We propose BridgedReality, a toolkit that enables users to produce localized virtual effects in live scenes without an HMD, wearable devices, or virtual controllers. Our method uses body tracking and an illusory rendering technique to achieve large-scale, depth-based, real-time interaction with multiple light field projection display interfaces. The toolkit circumvents time-consuming 3D object classification and packages multiple proximity effects in a format understandable by middle schoolers. Our work can offer a foundation for multidirectional holographic interfaces, GPU-simulated interactions, teleconferencing and gaming activities, as well as live cinematic-quality exhibitions.
Citations: 0
An Abstract Drawing Method for Same Shaped but Densely Arranged Many Objects
Reika Yagi, S. Kodama, Tokiichiro Takahashi
SIGGRAPH Asia 2021 Posters, Pub Date: 2021-12-14, DOI: 10.1145/3476124.3488651
Abstract: We propose a hierarchical abstract drawing method for many densely arranged 3D objects of almost the same shape. First, an abstract drawing of all the 3D objects is made based on a few primal colors, which are a global property of all the objects. Then, abstract painting is added focusing on a part of each object as a local property. Several results of abstractly rendered scenes are shown, confirming the effectiveness of our method.
Citations: 0
Decision of Line Structure beyond Junctions Using U-Net-Based CNN for Line Drawing Rendering
Ryogo Ito, Mitsuhiro Uchida, S. Saito
SIGGRAPH Asia 2021 Posters, Pub Date: 2021-12-14, DOI: 10.1145/3476124.3488641
Abstract: This paper introduces a U-Net-based neural network that determines line structure beyond junctions more accurately than previous work [Guo et al. 2019]. In addition to the input of the previous work, we feed 3D information to our neural network. We also propose a method to generate the training dataset automatically. Rendering results produced by stylized line rendering [Uchida and Saito 2020] show that our neural network improves the streams of strokes.
Citations: 0
A Procedural MatCap System for Cel-Shaded Japanese Animation Production
Yuki Koyama, Takeshi Tsuruta, Heisuke Saito, Daisuke Takizawa, Hiroshi Moriguchi
SIGGRAPH Asia 2021 Posters, Pub Date: 2021-12-14, DOI: 10.1145/3476124.3488620
Abstract: MatCap is an expressive approach to shading 3D models and is promising for the production of typical Japanese-style cel-shaded animations. However, we experienced an asset-management problem in our previous short-film production: we needed to manually create many MatCap assets to achieve variations shot by shot, or even frame by frame. In this work, we identify requirements for shading systems in Japanese animation production and describe our procedural MatCap system, which satisfies them. Procedural MatCap generates customizable MatCap assets fully procedurally at run time, which drastically improves asset manageability.
Citations: 0
Deep Color-Normal Residual Networks for Geometry Refinement Extracting Color Consistency and Fine Geometry
MinGeun Park, Seungkyu Lee
SIGGRAPH Asia 2021 Posters, Pub Date: 2021-12-14, DOI: 10.1145/3476124.3488646
Abstract: In recent years, texture mapping in 3D modeling has improved remarkably for realistic rendering. However, small errors in reconstructed 3D geometry cause serious errors in texture mapping. To address this problem, most prior methods are devoted to refining 3D geometry without visual clues. In this work, we refine 3D geometry based on color consistency and surface normals using a deep neural network. Our method optimizes the location of each vertex to maximize the quality of the related textures.
Citations: 0