ACM SIGGRAPH 2016 Posters: Latest Publications

GazeSim: simulating foveated rendering using depth in eye gaze for VR
ACM SIGGRAPH 2016 Posters, Pub Date: 2016-07-24, DOI: 10.1145/2945078.2945153
Yun Suen Pai, Benjamin Tag, B. Outram, Noriyasu Vontin, Kazunori Sugiura, K. Kunze
Abstract: We present a novel technique that implements customized hardware using eye-gaze focus depth as an input modality for virtual reality applications. By utilizing eye-tracking technology, our system can detect the depth at which the viewer focuses, and therefore promises more natural responses of the eye to stimuli, which will help overcome VR sickness and nausea. The obtained focus-depth information allows the use of foveated rendering to keep the computing workload low and to create a more natural image that is sharp in the focused field but blurred outside it.
Citations: 27
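The poster does not give implementation details, but the core idea (sharp where the eyes converge, blurred elsewhere) can be illustrated with a per-pixel blur driven by the gap between scene depth and the gaze focus depth. The NumPy sketch below is a simplified stand-in, not the authors' renderer; the function and parameter names (`gaze_depth_blur`, `sharpness`, `max_radius`) are assumptions.

```python
import numpy as np

def gaze_depth_blur(image, depth, focus_depth, max_radius=6, sharpness=4.0):
    """Simplified depth-of-focus foveation: pixels whose depth is close to the
    gaze focus depth stay sharp, pixels far from it get progressively blurred.
    `image` and `depth` are 2D float arrays (grayscale intensity, linear depth)."""
    H, W = depth.shape
    out = np.zeros_like(image, dtype=np.float64)
    # blur radius grows with the distance between scene depth and focus depth
    radius = np.clip(np.abs(depth - focus_depth) * sharpness, 0, max_radius).astype(int)
    for y in range(H):
        for x in range(W):
            r = radius[y, x]
            y0, y1 = max(0, y - r), min(H, y + r + 1)
            x0, x1 = max(0, x - r), min(W, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()   # box blur of adaptive size
    return out
```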
Realistic 3D projection mapping using polynomial texture maps
ACM SIGGRAPH 2016 Posters, Pub Date: 2016-07-24, DOI: 10.1145/2945078.2945142
Junho Choi, Jong Hun Lee, Yong Yi Lee, Yong Hwi Kim, Bilal Ahmed, M. Son, M. Joo, Kwan H. Lee
Abstract: Projection mapping has been widely used to efficiently visualize real-world objects in areas such as exhibitions, advertisements, and theatrical performances. To represent the projected content realistically, the appearance of the object should be taken into consideration. Although there have been various attempts in computer graphics to represent appearance realistically through digital modeling of materials, combining them with projection mapping is difficult because the measurement takes a huge amount of time and requires a large space. To counteract these constraints of time and space, [Malzbender et al. 2001] present polynomial texture maps (PTM), which represent surface reflectance properties such as diffuse shading and shadow artifacts by relighting the 3D object according to a varying light direction around it. PTM avoids these temporal and spatial constraints, requiring only several tens of images taken under different light directions, which makes it easy to produce an appealing appearance.
Citations: 2
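For reference, the PTM of Malzbender et al. [2001] stores six coefficients per texel and reconstructs luminance with a biquadratic in the projected light direction, L(lu, lv) = a0·lu² + a1·lv² + a2·lu·lv + a3·lu + a4·lv + a5, fitted by least squares from the captured image stack. A minimal NumPy sketch of the fit and the relighting step follows; array shapes and function names are illustrative, not taken from the poster.

```python
import numpy as np

def ptm_basis(lu, lv):
    """Biquadratic basis of Malzbender et al. 2001 for projected light direction (lu, lv)."""
    return np.array([lu * lu, lv * lv, lu * lv, lu, lv, 1.0])

def fit_ptm(images, light_dirs):
    """Fit six per-texel PTM coefficients by least squares.
    images:     (N, H, W) stack of luminance images under N known lights
    light_dirs: (N, 2) projected light directions (lu, lv), one per image"""
    N, H, W = images.shape
    A = np.stack([ptm_basis(lu, lv) for lu, lv in light_dirs])   # (N, 6)
    B = images.reshape(N, H * W)                                 # (N, H*W)
    coeffs, *_ = np.linalg.lstsq(A, B, rcond=None)               # (6, H*W)
    return coeffs.reshape(6, H, W)

def relight(coeffs, lu, lv):
    """Reconstruct the surface under a new light direction from fitted coefficients."""
    return np.tensordot(ptm_basis(lu, lv), coeffs, axes=1)       # (H, W)
```

Because relighting is a single dot product per texel, the representation is cheap enough to evaluate for interactive content.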
Interactive multi-scale oil paint filtering on mobile devices
ACM SIGGRAPH 2016 Posters, Pub Date: 2016-07-24, DOI: 10.1145/2945078.2945120
Amir Semmo, Matthias Trapp, Tobias Dürschmid, J. Döllner, S. Pasewaldt
Abstract: This work presents an interactive mobile implementation of a filter that transforms images into an oil-paint look. To this end, a multi-scale approach that processes image pyramids is introduced, using flow-based joint bilateral upsampling to achieve deliberate levels of abstraction at multiple scales and interactive frame rates. The approach facilitates interactive tools that adjust the appearance of filtering effects at run-time, which is demonstrated by an on-screen painting interface for per-pixel parameterization that fosters the casual creativity of non-artists.
Citations: 5
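The filter relies on flow-based joint bilateral upsampling of image pyramids; the flow-guided variant is beyond a short sketch, but plain joint bilateral upsampling conveys the key step of propagating a coarse, abstracted result back to full resolution under guidance of the original image. The NumPy sketch below is a simplified, single-channel stand-in; `sigma_s`, `sigma_r`, and `radius` are assumed parameters.

```python
import numpy as np

def joint_bilateral_upsample(low, guide, sigma_s=1.0, sigma_r=0.1, radius=2):
    """Upsample the low-resolution result `low` to the resolution of `guide`,
    weighting coarse samples by spatial distance and by similarity of the
    corresponding high-resolution guide values. Both inputs are 2D float arrays."""
    H, W = guide.shape
    h, w = low.shape
    sy, sx = h / H, w / W                      # fine -> coarse scale factors
    out = np.zeros((H, W), dtype=np.float64)
    for y in range(H):
        for x in range(W):
            cy, cx = y * sy, x * sx            # this fine pixel in coarse coordinates
            acc, norm = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    qy, qx = int(round(cy)) + dy, int(round(cx)) + dx
                    if 0 <= qy < h and 0 <= qx < w:
                        # spatial weight, measured in coarse-grid units
                        ws = np.exp(-((qy - cy) ** 2 + (qx - cx) ** 2) / (2 * sigma_s ** 2))
                        # range weight from the high-resolution guide image
                        gy = min(int(round(qy / sy)), H - 1)
                        gx = min(int(round(qx / sx)), W - 1)
                        wr = np.exp(-((guide[y, x] - guide[gy, gx]) ** 2) / (2 * sigma_r ** 2))
                        acc += ws * wr * low[qy, qx]
                        norm += ws * wr
            out[y, x] = acc / max(norm, 1e-8)
    return out
```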
Error-bounded surface remeshing with minimal angle elimination
ACM SIGGRAPH 2016 Posters, Pub Date: 2016-07-24, DOI: 10.1145/2945078.2945138
Kaimo Hu, Dong‐Ming Yan, Bedrich Benes
Abstract: Surface remeshing is a key component in many geometry processing applications. However, existing high-quality remeshing methods usually introduce approximation errors that are difficult to control, while error-driven approaches pay little attention to mesh quality. Moreover, neither kind of approach can guarantee a minimal angle bound in the resulting meshes. We propose a novel error-bounded surface remeshing approach based on minimal angle elimination. Our method employs a dynamic priority queue that first parameterizes triangles containing angles smaller than a user-specified threshold; those small angles are then eliminated by applying several local operators. To control the geometric fidelity where local operators are applied, an efficient local error measure is proposed and integrated into our remeshing framework. Initial results show that the proposed approach bounds the geometric fidelity strictly, while angles below a threshold of up to 40 degrees can be eliminated from the results.
Citations: 2
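The abstract describes a dynamic priority queue keyed on the smallest interior angle, with local operators applied until no angle falls below the user threshold (up to 40 degrees). A schematic Python driver is sketched below; the local operators and the error-bound check are left as a callback, since the poster does not detail them, and all names are illustrative.

```python
import heapq
import numpy as np

def min_angle_deg(tri):
    """Smallest interior angle (degrees) of a triangle given as a 3x3 array of vertices."""
    def angle(p, q, r):                      # angle at vertex p between edges p->q and p->r
        u, v = q - p, r - p
        c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
    a, b, c = tri
    return min(angle(a, b, c), angle(b, c, a), angle(c, a, b))

def eliminate_small_angles(triangles, apply_local_operator, threshold_deg=40.0):
    """Greedy driver: repeatedly pop the triangle with the smallest angle below the
    threshold and hand it to a local operator (edge collapse, flip, relocation, ...).
    The operator is expected to respect the geometric error bound, modify `triangles`
    in place, and return the indices of affected triangles (an operator that cannot
    improve a triangle should not re-report it, to avoid looping)."""
    heap = [(min_angle_deg(t), i) for i, t in enumerate(triangles)]
    heap = [entry for entry in heap if entry[0] < threshold_deg]
    heapq.heapify(heap)
    while heap:
        _, idx = heapq.heappop(heap)
        if min_angle_deg(triangles[idx]) >= threshold_deg:
            continue                         # stale entry: triangle was already fixed
        for j in apply_local_operator(triangles, idx):
            a = min_angle_deg(triangles[j])  # re-queue triangles touched by the operator
            if a < threshold_deg:
                heapq.heappush(heap, (a, j))
    return triangles
```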
Computational swept volume light painting via robotic non-linear motion
ACM SIGGRAPH 2016 Posters, Pub Date: 2016-07-24, DOI: 10.1145/2945078.2945105
Yaozhun Huang, Sze-Chun Tsang, Miu-Ling Lam
Abstract: Light painting is a photography technique in which light sources are moved in specific patterns while being captured with a long exposure. The movement of the lights results in bright strokes or selectively illuminated and colored areas in the captured scene, decorating the real scene with special visual effects without the need for post-production. Light painting is not only a popular activity for hobbyists to express creativity, but also a practice for professional media artists and photographers producing aesthetic visual art and commercial photography. In conventional light paintings, the light sources are usually flashlights or other simple handheld lights made by attaching one or more LEDs to a stick or a ring, so the patterns created are limited to abstract shapes or freehand strokes.
Citations: 3
Guessing objects in context
ACM SIGGRAPH 2016 Posters, Pub Date: 2016-07-24, DOI: 10.1145/2945078.2945161
Karan Sharma, Arun C. S. Kumar, S. Bhandarkar
Abstract: Large-scale object classification has seen commendable progress owing, in large part, to recent advances in deep learning. However, generating annotated training datasets is still a significant challenge, especially when training classifiers for a large number of object categories: dataset creation is expensive, and training data may not be available for all categories and situations. Such situations are generally resolved using zero-shot learning, but training zero-shot classifiers entails serious programming effort and does not scale to a very large number of object categories. We propose a novel, simple framework that can guess objects in an image, offering scalability and ease of use with minimal loss in accuracy. The proposed framework answers the following question: how does one guess the objects in an image from very few object detections?
Citations: 0
Towards real-time insect motion capture
ACM SIGGRAPH 2016 Posters, Pub Date: 2016-07-24, DOI: 10.1145/2945078.2945115
Deschanel Li
Abstract: It is currently possible to reliably motion-track humans and some animals, but not insects, using standard motion-tracking techniques. By programming a virtual prototype rig/skeleton for the insects, such small-scale creatures can be tracked in real time. Possible applications include behavioural research on animals and the entertainment industry, e.g., when realistic insect motion simulation is needed and insects cannot be outfitted with sensors, as humans are, for animation in movies or games.
Citations: 2
Body-part motion synthesis system for contemporary dance creation
ACM SIGGRAPH 2016 Posters, Pub Date: 2016-07-24, DOI: 10.1145/2945078.2945107
A. Soga, Yuho Yazaki, Bin Umino, M. Hirayama
Abstract: We developed a body-part motion synthesis system (BMSS) that allows users to create short choreographies by synthesizing body-part motions and to simulate them in 3D animation. The system automatically provides various short choreographies: users first select a base motion and body-part categories, then the system automatically selects body-part motions and synthesizes them onto the base motion, randomly determining the synthesis timings of the selected motions. Users can use the composed sequences as references for dance creation, learning, and training. We experimentally evaluated the system's effectiveness in supporting dance creation with four professional contemporary-dance choreographers, and the results largely verified the usability of BMSS for choreographic creation.
Citations: 5
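The described workflow (pick a base motion, pick body-part categories, let the system choose part motions and randomize their timings) can be outlined in a few lines. The data layout and names below are hypothetical, intended only to illustrate the selection and timing step, not the BMSS implementation.

```python
import random

def synthesize_choreography(base_motion, part_library, selected_parts, seed=None):
    """For each chosen body-part category, pick a clip from the library and overlay
    it on the base motion at a randomly determined start time."""
    rng = random.Random(seed)
    sequence = {"base": base_motion["name"], "overlays": []}
    for part in selected_parts:                      # e.g. "arms", "legs", "torso"
        clip = rng.choice(part_library[part])        # system picks the part motion
        latest_start = max(0.0, base_motion["duration"] - clip["duration"])
        start = rng.uniform(0.0, latest_start)       # random synthesis timing
        sequence["overlays"].append({"part": part, "clip": clip["name"], "start": start})
    return sequence

# Hypothetical usage:
base = {"name": "walk_cycle", "duration": 8.0}
library = {"arms": [{"name": "arm_wave", "duration": 2.5}],
           "legs": [{"name": "leg_kick", "duration": 1.5}]}
print(synthesize_choreography(base, library, ["arms", "legs"], seed=1))
```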
Video reshuffling: automatic video dubbing without prior knowledge
ACM SIGGRAPH 2016 Posters, Pub Date: 2016-07-24, DOI: 10.1145/2945078.2945097
Shoichi Furukawa, Takuya Kato, Pavel A. Savkin, S. Morishima
Abstract: Numerous videos have been translated using "dubbing," spurred by the recent growth of the video market. However, it is very difficult to achieve visual-audio synchronization; in general, the new audio does not synchronize with the actor's mouth motion, and this discrepancy can disturb comprehension of the video content. Therefore, many methods have been researched to solve this problem.
Citations: 7
From drawing to animation-ready vector graphics
ACM SIGGRAPH 2016 Posters, Pub Date: 2016-07-24, DOI: 10.1145/2945078.2945130
Even Entem, L. Barthe, Marie-Paule Cani, M. V. D. Panne
Abstract: We present an automatic method to build a layered vector-graphics structure, ready for animation, from a clean-line vector drawing of an organic, smooth shape. Inspired by 3D segmentation methods, we introduce a new metric computed on the medial axis of a region to identify and quantify the visual salience of a sub-region relative to the rest. This enables us to recursively separate each region into two closed sub-regions at the location of the most salient junction. The resulting structure, layered in depth, can be used to pose and animate the drawing with a regular 2D skeleton.
Citations: 1
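The recursive part of the method, splitting a region in two at its most salient junction until no sufficiently salient junction remains, has a simple skeletal form. The sketch below stubs out the medial-axis salience analysis and the actual split, which are the poster's contribution; `salience_of_best_split`, `split_at`, and `min_salience` are placeholders.

```python
def decompose(region, salience_of_best_split, split_at, min_salience=0.1):
    """Recursively separate a region into two sub-regions at its most salient
    junction, producing a binary layer tree. `salience_of_best_split(region)`
    returns (junction, salience) or (None, 0.0); `split_at(region, junction)`
    returns the two closed sub-regions."""
    junction, salience = salience_of_best_split(region)
    if junction is None or salience < min_salience:
        return {"region": region, "children": []}        # leaf: no salient sub-part
    sub_a, sub_b = split_at(region, junction)
    return {"region": region,
            "children": [decompose(sub_a, salience_of_best_split, split_at, min_salience),
                         decompose(sub_b, salience_of_best_split, split_at, min_salience)]}
```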