SIGGRAPH Asia 2019 Posters — Latest Publications

Balance-Based Photo Posting
SIGGRAPH Asia 2019 Posters | Pub Date: 2019-11-17 | DOI: 10.1145/3355056.3364564
Yu Song, Fan Tang, Weiming Dong, Feiyue Huang, Changsheng Xu
Citations: 1
Reinforcement of Kinesthetic Illusion by Simultaneous Multi-Point Vibratory Stimulation
SIGGRAPH Asia 2019 Posters | Pub Date: 2019-11-17 | DOI: 10.1145/3355056.3364576
Keigo Ushiyama, Satoshi Tanaka, Akifumi Takahashi, H. Kajimoto
Abstract: Kinesthetic sensation is important for creating presence in virtual reality applications. One way of presenting kinesthetic sensation with compact equipment is the kinesthetic illusion: an illusion of position and movement of one's own body, generated by vibration. However, the kinesthetic illusion observed to date involves neither large nor rapid movement. To resolve this issue, we propose simultaneous stimulation of numerous tendons and muscles related to arm movement. Our investigation of the chest, lower arm, and upper arm finds that the intensity of the illusion changes when multiple points are stimulated.
Citations: 5
Human Motion Denoising Using Attention-Based Bidirectional Recurrent Neural Network
SIGGRAPH Asia 2019 Posters | Pub Date: 2019-11-17 | DOI: 10.1145/3355056.3364577
S. Kim, Hanyoung Jang, Jongmin Kim
Abstract: In this paper, we propose a novel method for denoising human motion using a bidirectional recurrent neural network (BRNN) with an attention mechanism. Corrupted motion captured from a single 3D depth-sensor camera is automatically corrected on a well-established smooth motion manifold. Incorporating an attention mechanism into the BRNN achieves better optimization results and higher accuracy than other deep learning frameworks, because a higher weight is selectively given to the more important input pose at a specific frame when encoding the input motion. The results show that our approach efficiently handles various types of motion and noise. We also experiment with different features to find the best one, and believe our method is desirable for motion capture applications as a post-processing step after capturing human motion.
Citations: 6
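The attention mechanism described in this abstract weights each input frame before pooling over the sequence. A minimal NumPy sketch of that weighting idea follows; the scoring vector `w` and the dimensions are illustrative stand-ins, not the authors' network:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(hidden, w):
    """Score each frame's hidden state, normalize the scores into
    attention weights, and pool the states into one context vector.

    hidden: (T, D) array of per-frame encoder states
    w:      (D,) scoring vector (stand-in for learned parameters)
    """
    scores = np.tanh(hidden @ w)   # (T,) one scalar score per frame
    alpha = softmax(scores)        # attention weights; they sum to 1
    context = alpha @ hidden       # (D,) weighted sum of frame states
    return context, alpha

rng = np.random.default_rng(0)
h = rng.normal(size=(8, 4))        # 8 frames, 4-dim hidden states
w = rng.normal(size=4)
ctx, alpha = attention_pool(h, w)
assert np.isclose(alpha.sum(), 1.0)
```

In the full model these weights would be produced jointly with the BRNN encoder, so important poses receive larger `alpha` values during encoding.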
Lucciola: Presenting Aerial Images by Generating a Fog Screen at Any Point in the Same 3D Space as a User
SIGGRAPH Asia 2019 Posters | Pub Date: 2019-11-17 | DOI: 10.1145/3355056.3364566
Takahiro Kusabuka, Shin'ichiro Eitoku
Abstract: In this paper, we propose a method for presenting an aerial image at any point in the same three-dimensional space as a user. With existing methods, presenting an image at an arbitrary point in 3D space is difficult because the presentation position is fixed to the device. In this study, particles with scattering properties are therefore made to remain at an arbitrary point in space and used as a screen. Specifically, we focus on vortex rings, which can stably transport particles, and generate an aerial screen by colliding vortex rings ejected from air cannons at multiple points in the air. In a prototype experiment, we generated a screen at a specified point in space and confirmed that aerial image presentation by projection is possible. We also identified potential issues.
Citations: 2
HinHRob: A Performance Robot for Glove Puppetry
SIGGRAPH Asia 2019 Posters | Pub Date: 2019-11-17 | DOI: 10.1145/3355056.3364595
Huahui Liu, Yingying She, Lin Lin, Jin Chen, Xiaomeng Xu, Jiayu Lin
Abstract: China's intangible cultural heritage of glove puppetry is a thousand-year-old form of performance. However, this ancient cultural and artistic treasure faces difficulties of protection and inheritance. Our approach is to integrate robotics with glove puppetry by designing and developing the glove puppetry robot HinHRob. It simulates the performances of puppeteers and interacts with the audience in real time. The robot can not only perform with professional puppeteers but also attract interest from audiences of different cultural backgrounds and ages. This creative approach of combining technology with intangible cultural heritage may open a new mode for the ancient human legacy of glove puppetry.
Citations: 4
Code Weaver: A Tangible Programming Learning Tool with Mixed Reality Interface
SIGGRAPH Asia 2019 Posters | Pub Date: 2019-11-17 | DOI: 10.1145/3355056.3364561
Ren Sakamoto, Toshikazu Ohshima
Abstract: In this study, we developed Code Weaver, a tool for learning basic programming concepts designed for elementary-school-age or younger children. The tool's tangible user interface is programmed by directly combining physical parts with the users' hands. In this way, we attempt to resolve several typical obstacles small children encounter when learning programming languages, such as text input via a keyboard, strict syntax requirements, and the difficulties of group learning with multiple participants.
Citations: 3
Modelling scene data for render time estimation
SIGGRAPH Asia 2019 Posters | Pub Date: 2019-11-17 | DOI: 10.1145/3355056.3364578
Harsha K Chidambara
Abstract: Rendering a scene is the most repeated and resource-intensive task at the core of a VFX facility. Each render can consume significant compute resources, which are expensive and finite. The number of renders (iterations) required to final a shot varies with the creative and technical complexity of the scene. A reliable advance estimate of render time could prove useful for budgeting and scheduling. In this poster we present a novel approach to estimating the render time of a scene with a machine learning model built on previous renders from a show. Each training input vector is encoded from direct constituents of a scene, such as assets, looks, and lights, and from render parameters such as the number of samples and the resolution. Renders are categorized into two buckets, less than an hour and greater than an hour, and two models are built for estimation. Estimates for test scenes are verified against actual render times to measure accuracy.
Citations: 0
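The abstract describes two concrete steps: flattening scene constituents into a feature vector, and labeling past renders into one-hour buckets for training. A hypothetical sketch of both steps (the field names and the feature choice are illustrative, not the poster's actual encoding):

```python
def encode_scene(scene):
    """Flatten direct scene constituents and render parameters
    into a numeric feature vector for a learning model."""
    return [
        scene["num_assets"],                          # asset count
        scene["num_lights"],                          # light count
        scene["samples"],                             # sampling quality
        scene["resolution_x"] * scene["resolution_y"] # total pixels
    ]

def bucket(render_minutes):
    """Two training buckets: under an hour vs. an hour or more."""
    return "lt_1h" if render_minutes < 60 else "ge_1h"

scene = {"num_assets": 12, "num_lights": 5, "samples": 256,
         "resolution_x": 1920, "resolution_y": 1080}
features = encode_scene(scene)
assert bucket(45) == "lt_1h" and bucket(90) == "ge_1h"
```

A separate classifier or regressor would then be trained per bucket, as the poster builds two models for estimation.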
PondusHand: Measure User's Weight Feeling by Photo Sensor Array around Forearm
SIGGRAPH Asia 2019 Posters | Pub Date: 2019-11-17 | DOI: 10.1145/3355056.3364552
Hosono Satoshi, Shoji Nishimura, Ken Iwasaki, E. Tamaki
Abstract: Weight feeling is important for musical instrument training and physical workout training, but it is difficult to convey accurate information about weight feeling to a trainer through visual or verbal channels. This study measures the weight feeling on muscles when playing a piano keyboard or doing push-ups using a wearable device. Muscle deformation is measured by a photo-sensor array wrapped around the forearm, and this data is input to a trained Support Vector Regression (SVR) model that estimates weight feeling. In our experiment, when estimating weights up to 2000 g, the correlation coefficient between the measured and estimated values was 0.911, while the RMSE and MAE were 236 g and 150 g respectively. In future work, we want to use this technique under many arm postures.
Citations: 1
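The poster evaluates its SVR estimator with three standard metrics: correlation coefficient, RMSE, and MAE. A small NumPy sketch of how those metrics are computed from measured versus estimated weights (the sample values below are invented for illustration, not the study's data):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Correlation coefficient, RMSE, and MAE between measured
    and estimated weights (in grams)."""
    r = np.corrcoef(y_true, y_pred)[0, 1]            # Pearson correlation
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))  # root mean squared error
    mae = np.mean(np.abs(y_true - y_pred))           # mean absolute error
    return r, rmse, mae

# Hypothetical measured vs. estimated weights up to 2000 g.
measured = np.array([0, 500, 1000, 1500, 2000], dtype=float)
estimated = np.array([120, 480, 1110, 1430, 1980], dtype=float)
r, rmse, mae = regression_metrics(measured, estimated)
assert 0.9 < r <= 1.0
```

RMSE penalizes large errors more heavily than MAE, which is why the poster's RMSE (236 g) exceeds its MAE (150 g).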
Investigating the Role of Task Complexity in Virtual Immersive Training (VIT) Systems
SIGGRAPH Asia 2019 Posters | Pub Date: 2019-11-17 | DOI: 10.1145/3355056.3364587
Konstantinos Koumaditis, Francesco Chinello
Abstract: The focus of this research is to introduce the concept of Training Task Complexity (TT) in the design of Virtual Immersive Training (VIT) systems. In this report, we describe the design parameters, the experimental design, and initial results.
Citations: 2
A Low Cost Multi-Camera Array for Panoramic Light Field Video Capture
SIGGRAPH Asia 2019 Posters | Pub Date: 2019-11-17 | DOI: 10.1145/3355056.3364593
M. Broxton, Jay Busch, Jason Dourgarian, Matthew DuVall, Daniel Erickson, Daniel Evangelakos, John Flynn, R. Overbeck, Matt Whalen, P. Debevec
Abstract: We present a portable multi-camera system for recording panoramic light field video content. The proposed system captures wide-baseline (0.8 m), high-resolution (>15 pixels per degree), large-field-of-view (>220°) light fields at 30 frames per second. The array contains 47 time-synchronized cameras distributed on the surface of a hemispherical plastic dome 0.92 m in diameter. We use commercially available action sports cameras (Yi 4K) mounted inside the dome using 3D-printed brackets. The dome, mounts, triggering hardware, and cameras are inexpensive, and the array itself is easy to fabricate. Using modern view interpolation algorithms, we can render objects as close as 33 cm to the surface of the array.
Citations: 5
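The ">15 pixels per degree" figure quoted above is an angular-resolution measure: sensor pixels divided by the lens's field of view. A one-line sketch with hypothetical camera numbers (not the Yi 4K's actual specifications):

```python
def pixels_per_degree(horizontal_pixels, horizontal_fov_deg):
    """Angular resolution of a single camera: sensor pixels spread
    across its horizontal field of view, in pixels per degree."""
    return horizontal_pixels / horizontal_fov_deg

# Hypothetical example: a 3840-pixel-wide sensor behind a 120-degree
# wide-angle lens gives 32 pixels per degree, above the >15 ppd target.
ppd = pixels_per_degree(3840, 120)
assert ppd == 32.0
```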