{"title":"Fast indirect illumination using two virtual spherical gaussian lights","authors":"Yusuke Tokuyoshi","doi":"10.1145/2820926.2820929","DOIUrl":"https://doi.org/10.1145/2820926.2820929","url":null,"abstract":"A virtual spherical Gaussian light (VSGL) [Tokuyoshi 2015] is an approximation of a set of virtual point lights (VPLs) for real-time rendering. Thousands of VSGLs can be dynamically generated using mipmapped specialized reflective shadow maps (RSMs) to render glossy indirect illumination at 20-30 ms. Although this approach is efficient compared to VPLs, rendering at 20-30 ms is still expensive for some time-sensitive applications such as video games. This poster demonstrates glossy indirect illumination in 1 ms using only two VSGLs. In this poster, each VSGL has a single spherical Gaussian lobe to represent radiant intensity, and thus diffuse and specular reflections at the second bounce are represented with two VSGLs. This rough approximation is suitable for scenes which are locally lit by a spot light (e.g., flashlight in a cave). To generate these two VSGLs dynamically, this poster presents a specialized implementation using a parallel summation algorithm. Other than RSMs, our implementation uses only small temporary buffers with a resolution that is 1/64 of an RSM. Therefore, the proposed method is not only fast, but also memory saving.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127211104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simulating water dispersion in chinese absorbent paper with capillary tubes model","authors":"Yu-Chen Yhang, Qing Zhu, Yuan Liu","doi":"10.1145/2820926.2820947","DOIUrl":"https://doi.org/10.1145/2820926.2820947","url":null,"abstract":"This research proposes a porous model for water dispersion in Chinese absorbent paper, a family of papers for Chinese ink painting. The absorbent paper is regarded as a filter membrane with capillary tubes and Darcy's Law is applied for fluid dynamic. Experiments prove that such simulation provides promising results and can be amended and applied for simulating other capillary action in paper such as Chinese ink dispersion in realtime.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114862112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A case study of the Illimitable Space System v2 and projection mapping","authors":"Miao Song, Serguei A. Mokhov, Jilson Thomas, S. Mudur","doi":"10.1145/2820926.2820940","DOIUrl":"https://doi.org/10.1145/2820926.2820940","url":null,"abstract":"Illimitable Space System v2 (ISSv2) is a new iteration of the real-time interactive configurable toolbox of visual effects and musical visualizations based on multimodal input. ISSv2 specifically includes support for projection mapping in performing arts. We share our findings in our recent (2015) real-time productions for interactive dance shows that took place in Montreal, Canada as part of the Chinese New Year Gala, District 3 demos, SIGGRAPH International Resources and the corresponding research work on projection mapping, and ongoing experiments on multiple-camera inputs and irregular surface projection. In each production, the multidisciplinary team created and used different subset of the features of the ISSv2 toolbox for interactive artistic performance and did some evaluation of the user experience. The team used more open technologies more popular among computational artists as compared to ISSv1.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131292413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Direct view manipulation for drone photography","authors":"Yi-Ling Chen, Wei-Tse Lee, Liwei Chan, Rong-Hao Liang, Bing-Yu Chen","doi":"10.1145/2820926.2820945","DOIUrl":"https://doi.org/10.1145/2820926.2820945","url":null,"abstract":"For a long time, photographers hold and move their cameras, and consider how to frame a good shot all at the same time. With the emergence of drones, people start to let the flying carriers to hold their cameras in order to take more compelling pictures. However, the viewports between the photographer and device become decoupled and every single movement needs to be explicitly instructed via a remote controller. Even with the first-person view video streaming, users still have to be very skillful to fluently pilot the drone without causing distraction to photo composition. Inspired by the concept of viewfinder editing [Baek et al. 2013], we propose a more intuitive interface to control the flying camera (i.e., the drone) by direct view manipulation embodied with multi-touch gestures, which allows the users to directly alter and rearrange the visual elements in the picture prior to image capturing. In our proof-of-concept implementation, the viewfinder of a flying camera is mapped to the screen of a mobile device. The physical camera movements are encoded by common photo manipulation operations, such as translation and scaling, with multi-touch gestures.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130145548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A weaving creation system for bamboo craft-design collaborations","authors":"Ye Tao, Guanyun Wang, Xiaolian Zhang, Cheng Yao, Fangtian Ying","doi":"10.1145/2820926.2820959","DOIUrl":"https://doi.org/10.1145/2820926.2820959","url":null,"abstract":"As one of the most ancient crafts in the handicraft times, bamboo weaving dates back to thousands of years ago. The diversified bamboo weaving methods can be used to fabricate daily necessities, such as fan, basket, dustpan, etc. With the development of modern manufacturing technology, however, bamboo handicrafts are no longer popular, significantly due to their lack of modern aesthetic value. Organizations like the World Crafts Council (WCC) try to integrate traditional crafts into modern life, but the cooperation between designer and craftsman involves repeated communications via a laborious manual process. Therefore an interactive system is proposed herein to support the process from design inspiration to pattern abstraction to product creation. The MetamoCrochet system uses thermochromic ink to customize the patterns on woolen textile [Okazaki, et al. 2014]. Igarashi and Mitani presented a design system for card weaving to support the design of patters [Igarashi and Mitani 2014]. These cases which mainly focus on textile design, are easily modified and created in the process of knitting. However, the composition of patterns on bamboo-weaved products is difficult to modify, and can hardly be recreated during the weaving process, due to the hardness and toughness of bamboo material. Starting from the inspiration source for patterns, our system aims to, by presetting the patterns, simplify the bamboo-weaving craft and integrate modern aesthetic value into the design of daily products in modern times.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134213205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cloth switch: configurable touch switch wearable device made with cloth","authors":"Seiya Iwasaki, Saki Sakaguchi, Makoto Abe, Mitsunori Matsushita","doi":"10.1145/2820926.2820932","DOIUrl":"https://doi.org/10.1145/2820926.2820932","url":null,"abstract":"The goal of our research is to realize a configurable wearable device made with cloth. We proposed a wearable device made with cloth as well as multiple cloth switches, each having a different function. Through our proposed system, a user can add or remove functions by simply attaching or removing relevant switches from the proposed cloth based wearable device, as well as switch functions on or off by simply touching the switches.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131165123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring interaction modalities for a selfie drone","authors":"Chien-Fang Chen, Kang-Ping Liu, Neng-Hao Yu","doi":"10.1145/2820926.2820965","DOIUrl":"https://doi.org/10.1145/2820926.2820965","url":null,"abstract":"Taking selfies is a brand new type of photographic behavior and has became a phenomenon on social medias. In Asia, many people love taking and sharing selfies in their daily life and have created several tools to take good selfies such as a selfie stick or a portable tripod In the meantime, camera-equipped drones are getting more and more popular today. We can envision a future where personal flying selfie bots are always with us. Among the previous works and commercial products, the interaction techniques for controlling a drone are mostly designed in drone-centric or map-centric mode that require a longer training and are not easy for taking an anticipated shot. We investigate the user needs in taking seflies and propose a new approach to control a selfie drone in user-centric mode. Users can place a drone in spherical coordinate system by our direct pointing technique and then compose the framing on the smartphone screen. Our goal is to help people taking better selfies more intuitively.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125941447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gaze animation optimization based on a viewer's preference","authors":"H. Mori, Tomoya Nakadai, Fubito Toyama, K. Shoji","doi":"10.1145/2820926.2820969","DOIUrl":"https://doi.org/10.1145/2820926.2820969","url":null,"abstract":"The character animation is required to appear as natural as human motion. Toward that goal, there is an approach where the expression appears as natural as the human expression by adding gaze behavior to the contextual situation, and the environment to the general behavior animation [Grillon, H. et al. 2009].","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122212344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ReAR surface: AR-based exchanging system using handheld devices","authors":"Kohsuke Matsuno, Yuki Shimonaka, Saki Sakaguchi, Ryo Shinoki, Mitsunori Matsushita","doi":"10.1145/2820926.2820954","DOIUrl":"https://doi.org/10.1145/2820926.2820954","url":null,"abstract":"This paper proposes a novel method for file exchange between personal handheld devices. Currently, we can overlay virtual objects onto the real world using Augmented Reality (AR). This means that we can realize a novel method for visualizing information that we can see on a PC screen or tablet-type device. We focus on the interaction between multiple users, each with a personal tablet-type device when working face-to-face. In this case, we sometimes show our own tablet-type device to other people in order to share data (e.g., pictures). If a user's tablet-type device has both public and private information, such user only needs to show public information selectively. Kai et al. proposed a system that allows data sharing between facing personal devices[Kai et al. 2013]. In this system, an additional display is attached to the rear of a personal device in order to separate public and personal data. We take a method for visualizing public data as virtual objects arranged in the real world using AR. This paper proposes a system called \"ReAR Surface\" to realize such selective visualization.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123999553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiverse: a next generation data storage for Alembic","authors":"Aghiles Kheffache, Marco Pantaleoni, Bo Zhou, Paolo Berto Durante","doi":"10.1145/2820926.2820939","DOIUrl":"https://doi.org/10.1145/2820926.2820939","url":null,"abstract":"We introduce Multiverse, an open source1 next generation data back-end to the widely used Alembic file format. Our back-end relies on Git, a powerful distributed source control system. We inherit all the features introduced by Git, including: compact history and branching, natural data de-duplication, cryptographic data integrity, ssh internet sharing protocol and collaborative work capabilities. Our scene data representation allows for punctual access to individual scene elements, opening the door to multi-threaded I/O as well as easy scene updates. To our knowledge, it is the first time that such a set of features is available to the production community.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117275578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}