Latest Publications: SIGGRAPH Asia 2019 Posters

Fast, memory efficient and resolution independent rendering of cubic Bézier curves using tessellation shaders
SIGGRAPH Asia 2019 Posters Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364548
Harish Kumar, Anmol Sud
Abstract: Cubic Bézier curves are an integral part of vector graphics. Standard formats such as Adobe PostScript, SVG, font definitions, and PDF describe path objects as compositions of cubic Bézier curves. Drawing cubic Bézier curves often requires drawing strokes that are less than one device pixel wide. Such strokes, commonly referred to as thin strokes, are very common in creative workflows, but rendering them is computationally expensive and slows down the creative process. Conventionally, thin strokes were rendered with CPU techniques. However, the advent of GPU programming over the last decade or so has led to the development of SIMD techniques suitable for rendering thin strokes on GPUs. These GPU
Citations: 0
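The abstract above is truncated in the source, but the curve math it relies on is standard. As an illustration only (not the authors' shader code), the Python sketch below mirrors the per-vertex work a tessellation evaluation shader does when flattening a cubic Bézier: evaluate the Bernstein form at a parameter value and emit a polyline. The function names and the fixed segment count are assumptions for readability; a real tessellator would pick the segment count from the curve's screen-space extent.

```python
# Illustrative CPU reference for the math a tessellation evaluation shader
# performs when flattening a cubic Bezier; not the authors' GPU code.

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier at parameter t using the Bernstein form."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

def flatten(p0, p1, p2, p3, segments=16):
    """Split the curve into line segments; a tessellation shader would choose
    `segments` per curve, e.g. from its projected screen-space size."""
    return [cubic_bezier(p0, p1, p2, p3, i / segments) for i in range(segments + 1)]

# Flatten one curve into 16 segments.
pts = flatten((0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0))
print(pts[0], pts[-1])   # the endpoints are interpolated: (0, 0) and (4, 0)
```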
BookVIS: Enhancing Browsing Experiences in Bookstores and Libraries
SIGGRAPH Asia 2019 Posters Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364594
Zona Kostic, Nathan Weeks, Johann Philipp Dreessen, Jelena Dowey, Jeffrey Baglioni
Citations: 1
A Method of Making Wound Molds for Prosthetic Makeup using 3D Printer
SIGGRAPH Asia 2019 Posters Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364573
Yoon-Seok Choi, Soonchul Jung, Jin-Seo Kim
Abstract: Conventionally, to make wound props, an artist first carves a wound sculpture from oil clay, makes the wound mold by pouring silicone or plaster over the finished sculpture, and then pours silicone into the mold to produce the wound prop. This approach takes a lot of time and effort: one must learn to handle materials such as oil clay and silicone and acquire wound-sculpting techniques. Recently, many users have tried to create wound molds with 3D modeling software and 3D printers, but tasks such as 3D wound modeling or preparing a 3D model for printing are difficult for non-experts. This paper suggests a simple and rapid way for users to create a wound mold model from a wound image and print it on a 3D printer. Our method provides easy-to-use capabilities for wound mold production, so that makeup artists who are unfamiliar with 3D modeling can easily create molds using the software.
Citations: 1
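The poster does not describe its image-to-mold pipeline, so the sketch below is purely an assumption: one simple way to turn a grayscale wound image into a printable negative (mold) heightfield by mapping intensity to relief depth and inverting it. Every parameter and function name here is hypothetical.

```python
# Purely illustrative: the poster's actual method is not described, so treat
# every step, parameter, and name here as an assumption.
import numpy as np

def image_to_mold_heightfield(gray, relief_mm=4.0, base_mm=2.0):
    """Map pixel intensity to relief depth, then invert into a mold cavity."""
    g = (gray - gray.min()) / (gray.max() - gray.min() + 1e-8)  # normalize to [0, 1]
    wound_relief = g * relief_mm                # brighter pixel = taller wound relief
    mold = base_mm + relief_mm - wound_relief   # cavity is deepest where relief is tallest
    return mold                                 # per-pixel heights in mm, ready for meshing

wound = np.random.default_rng(1).random((64, 64))   # stand-in for a grayscale wound photo
print(image_to_mold_heightfield(wound).shape)       # (64, 64)
```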
Computing 3D Clipped Voronoi Diagrams on GPU
SIGGRAPH Asia 2019 Posters Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364581
Xiaohan Liu, Dong‐Ming Yan
Abstract: Computing clipped Voronoi diagrams in a 3D volume is a challenging problem. In this poster, we propose an efficient GPU implementation to tackle it. After discretizing the 3D volume into a tetrahedral mesh, the main idea of our approach is to use the four planes of each tetrahedron (tet for short in the following) to clip the Voronoi cells, instead of using the bisecting planes of the Voronoi cells to clip the tets as in previous approaches. This strategy drastically reduces computational complexity. Our approach outperforms the state-of-the-art CPU method by up to one order of magnitude.
Citations: 2
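The key idea in the abstract is to clip Voronoi cells against the four face planes of each tetrahedron rather than clipping tets by cell bisectors. As a CPU-side illustration of that geometric primitive only (not the authors' GPU code), the sketch below computes a tet's four face planes as half-spaces and tests point containment against them; the helper names are hypothetical.

```python
# Sketch of the clipping primitive behind the approach: a tetrahedron's four
# face planes, usable as half-spaces to clip a Voronoi cell. Not the GPU code.
import numpy as np

def tet_face_planes(v0, v1, v2, v3):
    """Return the four (normal, offset) face planes of a tetrahedron,
    oriented so the opposite vertex satisfies n.x <= d (is inside)."""
    verts = [np.asarray(v, float) for v in (v0, v1, v2, v3)]
    faces = [(1, 2, 3, 0), (0, 3, 2, 1), (0, 1, 3, 2), (0, 2, 1, 3)]
    planes = []
    for a, b, c, opp in faces:
        n = np.cross(verts[b] - verts[a], verts[c] - verts[a])
        d = np.dot(n, verts[a])
        if np.dot(n, verts[opp]) > d:   # flip so the opposite vertex is on the inside
            n, d = -n, -d
        planes.append((n, d))
    return planes

def inside_tet(p, planes):
    """True if point p lies inside all four half-spaces n.x <= d."""
    p = np.asarray(p, float)
    return all(np.dot(n, p) <= d + 1e-9 for n, d in planes)

planes = tet_face_planes((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1))
print(inside_tet((0.2, 0.2, 0.2), planes))   # True: the point is inside the unit tet
```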
Animation Video Resequencing with a Convolutional AutoEncoder
SIGGRAPH Asia 2019 Posters Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364550
Shangzhan Zhang, Charles C. Morace, T. Le, Chih-Kuo Yeh, Sheng-Yi Yao, Shih-Syun Lin, Tong-Yee Lee
Abstract: Animators commonly use a set of principles, including natural movement, as a model and incorporate other principles for dramatic effect and emotional impact. Although many techniques have been developed to ease the computer animation pipeline, production is still an arduous process that involves creating many image sequences depicting the motion of complex characters and their environments. If a single image is out of place, the whole animation may be ruined by an unnatural movement, which is not only visually displeasing but also distracts from the narrative.
Citations: 2
Sense of non-presence: Visualization of invisible presence
SIGGRAPH Asia 2019 Posters Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364591
Takuya Mikami, Min Xu, Kaori Yoshida, Kousuke Matsunaga, Jun Fujiki
Abstract: Visualization devices have been developed [Piper et al. 2002], but what is unique to this instrument is its use of apparent movement, a phenomenon of human perception in which we perceive that certain objects are in motion when in fact they are not moving. Apparent movement, which makes viewers feel as if stimulus objects in a fixed position are moving by making them appear or disappear instantaneously, serves as a basic principle in animation. We use the apparent movement created by controlling particles blown into the air to get viewers to recognize specific movement sequences.
Citations: 0
Real-time Table Tennis Forecasting System based on Long Short-term Pose Prediction Network
SIGGRAPH Asia 2019 Posters Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364555
Erwin Wu, Florian Perteneder, H. Koike
Abstract: Humans' ability to forecast motions and trajectories is one of the most important abilities in many sports. With the development of deep learning and computer vision, it is becoming possible to do the same with real-time computing. In this paper, we present a real-time table tennis forecasting system using a long short-term pose prediction network. Our system can predict the trajectory of a serve before the ping-pong ball is even hit, based on the player's previous and present motions, captured using only a single RGB camera. The system can be used either to train a beginner's prediction skills or to help practitioners practice concealing their serves.
Citations: 5
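The abstract names a "long short-term pose prediction network" but does not give its architecture, so the sketch below is an assumption for illustration only: a small PyTorch LSTM that maps a window of 2D pose keypoints to a predicted 3D ball trajectory. The joint count, hidden size, and trajectory length are made-up placeholders, not the authors' settings.

```python
# Illustrative-only sketch of a pose-sequence-to-trajectory predictor; layer
# sizes, joint count, and trajectory length are assumptions.
import torch
import torch.nn as nn

class PoseToTrajectory(nn.Module):
    def __init__(self, num_joints=18, hidden=128, traj_len=30):
        super().__init__()
        self.lstm = nn.LSTM(input_size=num_joints * 2, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, traj_len * 3)   # 3D ball positions
        self.traj_len = traj_len

    def forward(self, poses):                  # poses: (batch, frames, joints * 2)
        _, (h, _) = self.lstm(poses)           # take the final hidden state
        return self.head(h[-1]).view(-1, self.traj_len, 3)

model = PoseToTrajectory()
poses = torch.randn(4, 60, 18 * 2)             # 60 frames of 2D keypoints per sample
print(model(poses).shape)                      # torch.Size([4, 30, 3])
```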
Gamification in a Physical Rehabilitation Setting: Developing a Proprioceptive Training Exercise for a Wrist Robot
SIGGRAPH Asia 2019 Posters Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364572
C. Curry, Naveen Elangovan, Reuben Gardos Reid, Jiapeng Xu, J. Konczak
Abstract: Proprioception, or body awareness, is an essential sense that aids in the neural control of movement. Proprioceptive impairments are commonly found in people with neurological conditions such as stroke and Parkinson's disease, and such impairments are known to impact patients' quality of life. Robot-aided proprioceptive training has been proposed and tested to improve sensorimotor performance. However, such robot-aided exercises are implemented like many physical rehabilitation exercises, requiring task-specific and repetitive movements from patients. The monotonous nature of such repetitive exercises can reduce patient motivation, thereby impacting treatment adherence and therapy gains. Gamification of exercises can make physical rehabilitation more engaging and rewarding. In this work, we discuss our ongoing efforts to develop a game that accompanies a robot-aided wrist proprioceptive training exercise.
Citations: 1
Method for estimating display lag in the Oculus Rift S and CV1
SIGGRAPH Asia 2019 Posters Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364590
Jason Feng, Juno Kim, Wilson Luu, S. Palmisano
Abstract: We validated an optical method for measuring the display lag of modern head-mounted displays (HMDs). The method used a high-speed digital camera to track landmarks rendered on a display panel of the Oculus Rift CV1 and S models. Using an Nvidia GeForce RTX 2080 graphics adapter, we found that the minimum estimated baseline latency of both the Oculus CV1 and S was extremely short (∼2 ms). Variability in lag was low, even when the lag was systematically inflated. Cybersickness was induced with the small baseline lag and increased as this lag was inflated. These findings indicate that the Oculus Rift CV1 and S are capable of extremely low baseline display lag for angular head rotation, which appears to account for their low levels of reported cybersickness.
Citations: 16
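The abstract describes tracking rendered landmarks with a high-speed camera, but it does not spell out how lag is computed from the footage. The sketch below shows one standard way to do that, cross-correlating the physical motion signal with the displayed landmark's motion and reading the lag off the correlation peak; it is an assumed analysis for illustration, not the authors' code.

```python
# Sketch of a cross-correlation latency estimate, assuming two aligned,
# uniformly sampled signals: physical head angle and the on-screen landmark
# position extracted from high-speed video. Not the authors' analysis code.
import numpy as np

def estimate_lag_ms(physical, displayed, fps):
    """Return the lag (ms) at which `displayed` best matches `physical`."""
    a = (physical - np.mean(physical)) / np.std(physical)
    b = (displayed - np.mean(displayed)) / np.std(displayed)
    corr = np.correlate(b, a, mode="full")
    shift = np.argmax(corr) - (len(a) - 1)     # positive shift = display lags motion
    return 1000.0 * shift / fps

t = np.arange(0, 2, 1 / 1000.0)                # simulate a 1000 fps recording
head = np.sin(2 * np.pi * 1.5 * t)             # physical head oscillation
screen = np.roll(head, 2)                      # 2-sample (~2 ms) display lag
print(round(estimate_lag_ms(head, screen, 1000)))   # ~2
```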
Parallel Adaptive Frameless Rendering with NVIDIA OptiX
SIGGRAPH Asia 2019 Posters Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364569
Chung-Che Hsiao, Benjamin Watson
Abstract: In virtual reality (VR) and augmented reality (AR) systems, latency is one of the most important causes of simulator sickness. Latency is difficult to limit in traditional renderers, which sample time rigidly with a series of frames, each representing a single moment in time depicted with a fixed amount of latency. Previous researchers proposed adaptive frameless rendering (AFR), which removes frames to sample space and time flexibly and reduce latency. However, their prototype was neither parallel nor interactive. We implement AFR in NVIDIA OptiX, a concurrent, real-time ray tracing API that takes advantage of NVIDIA GPUs, including their latest RTX ray tracing components. With proper tuning, our prototype prioritizes temporal detail when scenes are dynamic (producing rapidly updated, blurry imagery) and spatial detail when scenes are static (producing more slowly updated, sharp imagery). The result is parallel, interactive, low-latency imagery that should reduce simulator sickness.
Citations: 0
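The abstract describes trading temporal detail against spatial detail depending on how dynamic the scene is. The sketch below is a toy model of that policy, not the OptiX prototype: split the image into tiles, measure temporal change per tile, and give fast-changing tiles a larger share of the per-frame ray budget so they update quickly (at the cost of blur) while static tiles accumulate samples and sharpen. The tile layout and budget numbers are assumptions.

```python
# Toy sketch of an adaptive sampling policy: tiles with more temporal change
# receive more new rays (fresher, blurrier imagery); static tiles rely on
# accumulated samples (sharper imagery). Not the authors' OptiX code.
import numpy as np

def allocate_rays(prev_tiles, curr_tiles, total_rays, floor_frac=0.1):
    """Split `total_rays` across tiles proportionally to temporal change,
    keeping a small floor so static tiles still refresh occasionally."""
    change = np.abs(curr_tiles - prev_tiles).mean(axis=(1, 2))   # per-tile mean abs diff
    weights = floor_frac + change / (change.sum() + 1e-8)
    weights /= weights.sum()
    return np.round(weights * total_rays).astype(int)

rng = np.random.default_rng(0)
prev = rng.random((16, 32, 32))                 # 16 tiles of 32x32 luminance values
curr = prev.copy()
curr[:4] += rng.random((4, 32, 32))             # the first 4 tiles are "dynamic"
print(allocate_rays(prev, curr, total_rays=100_000))   # dynamic tiles get most rays
```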