{"title":"GazeSim: simulating foveated rendering using depth in eye gaze for VR","authors":"Yun Suen Pai, Benjamin Tag, B. Outram, Noriyasu Vontin, Kazunori Sugiura, K. Kunze","doi":"10.1145/2945078.2945153","DOIUrl":"https://doi.org/10.1145/2945078.2945153","url":null,"abstract":"We present a novel technique of implementing customized hardware that uses eye gaze focus depth as an input modality for virtual reality applications. By utilizing eye tracking technology, our system can detect the point in depth the viewer focusses on, and therefore promises more natural responses of the eye to stimuli, which will help overcoming VR sickness and nausea. The obtained information for the depth focus of the eye allows the utilization of foveated rendering to keep the computing workload low and create a more natural image that is clear in the focused field, but blurred outside that field.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134023996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Realistic 3D projection mapping using polynomial texture maps","authors":"Junho Choi, Jong Hun Lee, Yong Yi Lee, Yong Hwi Kim, Bilal Ahmed, M. Son, M. Joo, Kwan H. Lee","doi":"10.1145/2945078.2945142","DOIUrl":"https://doi.org/10.1145/2945078.2945142","url":null,"abstract":"Projection mapping has been widely used to efficiently visualize real world objects in various areas such as exhibitions, advertisements, and theatrical performances. To represent the projected content in a realistic manner, the appearance of an object should be taken into consideration. Although there have been various attempts to realistically represent the appearance through digital modeling of appearance materials in computer graphics, it is difficult to combine it with the projection mapping because it takes huge amount of time and requires large space for the measurement. To counteract these challenges of time and space, [Malzbender et al. 2001] present polynomial texture maps (PTM) that can represent the reflectance properties of the surface such as diffuse and shadow artifacts by relighting of the 3D objects according to varying light direction around the object. PTM does not have temporal or spatial constraints requiring only several tens of images of different light directions so that it makes it possible to easily produce an appealing appearance.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134639421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive multi-scale oil paint filtering on mobile devices","authors":"Amir Semmo, Matthias Trapp, Tobias Dürschmid, J. Döllner, S. Pasewaldt","doi":"10.1145/2945078.2945120","DOIUrl":"https://doi.org/10.1145/2945078.2945120","url":null,"abstract":"This work presents an interactive mobile implementation of a filter that transforms images into an oil paint look. At this, a multi-scale approach that processes image pyramids is introduced that uses flow-based joint bilateral upsampling to achieve deliberate levels of abstraction at multiple scales and interactive frame rates. The approach facilitates the implementation of interactive tools that adjust the appearance of filtering effects at run-time, which is demonstrated by an on-screen painting interface for per-pixel parameterization that fosters the casual creativity of non-artists.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131258177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Error-bounded surface remeshing with minimal angle elimination","authors":"Kaimo Hu, Dong‐Ming Yan, Bedrich Benes","doi":"10.1145/2945078.2945138","DOIUrl":"https://doi.org/10.1145/2945078.2945138","url":null,"abstract":"Surface remeshing is a key component in many geometry processing applications. However, existing high quality remeshing methods usually introduce approximation errors that are difficult to control, while error-driven approaches pay little attention to the meshing quality. Moreover, neither of those approaches can guarantee the minimal angle bound in resulting meshes. We propose a novel error-bounded surface remeshing approach that is based on minimal angle elimination. Our method employs a dynamic priority queue that first parameterize triangles who contain angles smaller than a user-specified threshold. Then, those small angles are eliminated by applying several local operators ingeniously. To control the geometric fidelity where local operators are applied, an efficient local error measure scheme is proposed and integrated in our remeshing framework. The initial results show that the proposed approach is able to bound the geometric fidelity strictly, while the minimal angles of the results can be eliminated to be up to 40 degrees.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129850464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computational swept volume light painting via robotic non-linear motion","authors":"Yaozhun Huang, Sze-Chun Tsang, Miu-Ling Lam","doi":"10.1145/2945078.2945105","DOIUrl":"https://doi.org/10.1145/2945078.2945105","url":null,"abstract":"Light painting is a photography technique in which light sources are moved in specific patterns while being captured by long exposure. The movements of lights will result in bright strokes or selectively illuminated and colored areas in the scene being captured, thus decorating the real scene with special visual effects without the need for post-production. Light painting is not only a popular activity for hobbyists to express creativities, but also a practice for professional media artists and photographers to produce aesthetic visual arts and commercial photography. In conventional light paintings, the light sources are usually flashlights or other simple handheld lights made by attaching one or multiple LEDs to a stick or a ring. The patterns created are limited to abstract shapes or freehand strokes.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130279861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Guessing objects in context","authors":"Karan Sharma, Arun C. S. Kumar, S. Bhandarkar","doi":"10.1145/2945078.2945161","DOIUrl":"https://doi.org/10.1145/2945078.2945161","url":null,"abstract":"Large scale object classification has seen commendable progress owing, in large part, to recent advances in deep learning. However, generating annotated training datasets is still a significant challenge, especially when training classifiers for large number of object categories. In these situations, generating training datasets is expensive coupled with the fact that training data may not be available for all categories and situations. Such situations are generally resolved using zero-shot learning. However, training zero-shot classifiers entails serious programming effort and is not scalable to very large number of object categories. We propose a novel simple framework that can guess objects in an image. The proposed framework has the advantages of scalability and ease of use with minimal loss in accuracy. The proposed framework answers the following question: How does one guess objects in an image from very few object detections?","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115996284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards real-time insect motion capture","authors":"Deschanel Li","doi":"10.1145/2945078.2945115","DOIUrl":"https://doi.org/10.1145/2945078.2945115","url":null,"abstract":"It is currently possible to reliably motion-track humans and some animals, but not possible to track insects using standard motion tracking techniques. By programming a virtual prototype rig/skeleton for the insects small scale creatures will be able to be tracked in real time. Possible applications include behavioural research of animals and entertainment industry, e.g., when realistic insect motion simulation is needed and insects cannot be outfitted with sensors like humans for animation in movies or games.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"1 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133037303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Body-part motion synthesis system for contemporary dance creation","authors":"A. Soga, Yuho Yazaki, Bin Umino, M. Hirayama","doi":"10.1145/2945078.2945107","DOIUrl":"https://doi.org/10.1145/2945078.2945107","url":null,"abstract":"We developed a body-part motion synthesis system (BMSS) that allows users to create short choreographies by synthesizing body-part motions and to simulate them in 3D animation. This system automatically provides various short choreographies. First, users select a base motion and body-part categories. Then the system automatically selects and synthesizes body-part motions to the base motion. The system randomly determined the synthesis timings of the selected motions. Users can use the composed sequences as references for dance creation, learning, and training. We experimentally evaluated our system's effectiveness for supporting dance creation with four professional choreographers of contemporary dance. From our experiment results, we basically verified the usability of BMSS for choreographic creation.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124650402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Video reshuffling: automatic video dubbing without prior knowledge","authors":"Shoichi Furukawa, Takuya Kato, Pavel A. Savkin, S. Morishima","doi":"10.1145/2945078.2945097","DOIUrl":"https://doi.org/10.1145/2945078.2945097","url":null,"abstract":"Numerous video have been translated using \"dubbing,\" spurred by the recent growth of video market. However, it is very difficult to achieve the visual-audio synchronization. That is to say in general a new audio does not synchronize with actor's mouth motion. This discrepancy can disturb comprehension of video contents. There-fore many methods have been researched so far to solve this problem.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122509618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From drawing to animation-ready vector graphics","authors":"Even Entem, L. Barthe, Marie-Paule Cani, M. V. D. Panne","doi":"10.1145/2945078.2945130","DOIUrl":"https://doi.org/10.1145/2945078.2945130","url":null,"abstract":"We present an automatic method to build a layered vector graphics structure ready for animation from a clean-line vector drawing of an organic, smooth shape. Inspiring from 3D segmentation methods, we introduce a new metric computed on the medial axis of a region to identify and quantify the visual salience of a sub-region relative to the rest. This enables us to recursively separate each region into two closed sub-regions at the location of the most salient junction. The resulting structure, layered in depth, can be used to pose and animate the drawing using a regular 2D skeleton.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124209613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}