{"title":"Bionic scope: wearable system for visual extension triggered by bioelectrical signal","authors":"Shota Ekuni, Koichi Murata, Yasunari Asakura, Akira Uehara","doi":"10.1145/2945078.2945119","DOIUrl":"https://doi.org/10.1145/2945078.2945119","url":null,"abstract":"Visual extension has been an essential issue because the visual information accounts for a large part of sensory information which human processes. There are some instruments which are used to watch distant, objects or people, such as a monocle, a binocular, and a telescope. When we use these instruments, we firstly take a general view without them and adjust magnification and focus of them. These operations are complicated and occupy the user's hands. Therefore, a visual extension device that is capable of being used easily without hands is extremely useful. A system developed in the previous work recognizes the movement of the user's eyelid and operating devices by using it [Hideaki et al. 2013]. However, a camera is placed in front of the eye, and that obstructs the field of view. In addition, image recognition needs much calculation cost and it is difficult to be processed in a small computer. When human intends to move his/her muscles, bioelectrical signal (BES) leaks out on the surface of skin. The BES can be measured by small and thin electrodes attached to the surface of the skin. By using the BES, user's operational intentions can be detected promptly without obstructing the user's field of view. 
Moreover, using BES sensors can reduce electrical power, and contribute to downsizing systems.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125572595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Panorama image interpolation for real-time walkthrough","authors":"N. Kawai, Cédric Audras, Sou Tabata, Takahiro Matsubara","doi":"10.1145/2945078.2945111","DOIUrl":"https://doi.org/10.1145/2945078.2945111","url":null,"abstract":"We propose a method to generate new views of a scene by capturing a few panorama images in real space and interpolating captured images. We describe a procedure for interpolating panoramas captured at four corners of a rectangle area without geometry, and present experimental results including walkthrough in real time. Our image-based method enables walking through space much more easily than using 3D modeling and rendering.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125573046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Layered telepresence: simultaneous multi presence experience using eye gaze based perceptual awareness blending","authors":"M. Y. Saraiji, Shota Sugimoto, C. Fernando, K. Minamizawa, S. Tachi","doi":"10.1145/2945078.2945098","DOIUrl":"https://doi.org/10.1145/2945078.2945098","url":null,"abstract":"We propose \"Layered Telepresence\", a novel method of experiencing simultaneous multi-presence. Users eye gaze and perceptual awareness are blended with real-time audio-visual information received from multiple telepresence robots. The system arranges audio-visual information received through multiple robots into a priority-driven layered stack. A weighted feature map was created based on the objects recognized for each layer, using image-processing techniques, and pushes the most weighted layer around the users gaze in to the foreground. All other layers are pushed back to the background providing an artificial depth-of-field effect. The proposed method not only works with robots, but also each layer could represent any audio-visual content, such as video see-through HMD, television screen or even your PC screen enabling true multitasking.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121027828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ThirdEye: a coaxial feature tracking system for stereoscopic video see-through augmented reality","authors":"Yu-Xiang Wang, Yu-Ju Tsai, Yu-Hsuan Huang, Wan-ling Yang, Tzu-Chieh Yu, Yu-Kai Chiu, M. Ouhyoung","doi":"10.1145/2945078.2945100","DOIUrl":"https://doi.org/10.1145/2945078.2945100","url":null,"abstract":"For stereoscopic augmented reality (AR) system, continuous feature tracking of the observing target is required to generate a virtual object in the real world coordinate. Besides, dual cameras have to be placed with proper distance to obtain correct stereo images for video see-through applications. Both higher resolution and frame rate per second (FPS) can improve the user experience. However, feature tracking could be the bottleneck with high resolution images and the latency would increase if image processing was done before tracking.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133554085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Large-scale rapid-prototyping with zometool","authors":"Chun-Kai Huang, Tsung-Hung Wu, Yi-Ling Chen, Bing-Yu Chen","doi":"10.1145/2945078.2945155","DOIUrl":"https://doi.org/10.1145/2945078.2945155","url":null,"abstract":"In recent years, personalized fabrication has attracted much attention due to the greatly improved accessibility of consumer-level 3D printers. However, 3D printers still suffer from the relatively long production time and limited output size, which are undesirable for large-scale rapid-prototyping. Zometool, which is a popular building block system widely used for education and entertainment, is potentially suitable for providing an alternative solution to the aforementioned scenarios. However, even for 3D models of moderate complexity, novice users may still have difficulty in building visually plausible results by themselves. Therefore, the goal of this work is to develop an automatic system to assist users to realize Zometool rapid prototyping with a specified 3D shape. Compared with the previous work [Zimmer and Kobbelt 2014], our method may achieve the ease of assembly and economic usage of building units since we focus on generating the Zometool structures through a higher level of shape abstraction.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130323046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interaction with virtual shadow through real shadow using two projectors","authors":"Hiroko Iwasaki, Momoko Kondo, Rei Ito, Saya Sugiura, Yuka Oba, S. Mizuno","doi":"10.1145/2945078.2945121","DOIUrl":"https://doi.org/10.1145/2945078.2945121","url":null,"abstract":"In this paper, we propose a method to interact with virtual shadows through real shadows various physical objects by using two projectors. In our method, the system scans physical objects in front of a projector, generates virtual shadows with CG according to the scan data, and superimposes the virtual shadows to real shadows of the physical objects with the projector. Another projector is used to superimpose virtual light sources inside real shadows. Our method enables us to experience novel interaction with various shadows such as shadows of flower arrangements.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"01 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130435888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"OpenEXR/Id isolate any object with a perfect antialiasing","authors":"Cyril Corvazier, B. Legros, Rachid Chikh","doi":"10.1145/2945078.2945136","DOIUrl":"https://doi.org/10.1145/2945078.2945136","url":null,"abstract":"We present a new storage scheme for computer graphic images based on OpenEXR 2. Using such EXR/Id files, the compositing artist can isolate an object selection (by picking them or using a regular expression to match their names) and color corrects them with no edge artefact, which was not possible to achieve without rendering the object selection on its own layer. Using this file format avoids going back and forth between the rendering and the compositing departments because no mask image or layering are needed anymore. The technique is demonstrated in an open source software suite, including a library to read and write the EXR/Id files and an OpenFX plug-in which generates the images in any compositing software.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116770802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Living the past: the use of VR to provide a historical experience","authors":"Pedro Rossa, Nicolas Hoffman, João Ricardo Bittencourt, Fernando P. Marson, V. Cassol","doi":"10.1145/2945078.2945169","DOIUrl":"https://doi.org/10.1145/2945078.2945169","url":null,"abstract":"In this work we explore the use of games and VR in order to collaborate with History teaching in Brazil. We develop a game and a VR experience based on local technology. In or approach the player is considered as an Indian who lived in the Jesuitical Reductions in the South of Brazil and was requested to practice bow and arrow shooting.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122405936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Charcoal rendering and shading with reflections","authors":"Yuxiao Du, E. Akleman","doi":"10.1145/2945078.2945110","DOIUrl":"https://doi.org/10.1145/2945078.2945110","url":null,"abstract":"In this work, we have developed an approach to include global illumination effects into charcoal drawing (see Figure 1). Our charcoal shader provides a robust computation to obtain charcoal effect for a wide variety of diffuse and specular materials. Our contributions can be summarized as follows: (1) A Barrycentric shader that is based on degree zero B-spline basis functions; (2) A set of hand-drawn charcoal control texture images that naturally provide desired charcoal look-and-feel; and (3) A painter's hierarchy for handling a high number of shading parameters consistent with charcoal drawing.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127848567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pseudo-softness evaluation in grasping a virtual object with a bare hand","authors":"Mie Sato, Sota Suzuki, Daiki Ebihara, Sho Kato, Sato Ishigaki","doi":"10.1145/2945078.2945118","DOIUrl":"https://doi.org/10.1145/2945078.2945118","url":null,"abstract":"Bare hand interaction with a virtual object reduces uncomfortableness with devices mounted on a user's hand. There are some studies on the bare hand interaction[Benko et al. 2012], however a virtual object is supposed to be a hard object or a user touches a physical object during the bare hand interaction. We focus on grasping a virtual object without using any physical object. Grasping is one of the basic movements in manipulating an object and is more difficult than simple movements like touching an object. Because of the bare hand interaction with no physical object, there is no haptic device on a user's hand and so there is no physical feedback to the user. Our challenge is to provide a user with pseudo-softness while grasping a virtual object with a bare hand. We have been developing an AR system that makes it possible for a user to grasp a virtual object with a bare hand[Suzuki et al. 2014]. Using this AR system, we propose visual stimuli that correspond with the user's hand movements, to manipulate the pseudo-softness of a virtual object. 
Evaluation results show that with the visual stimuli a user feels pseudo-softness while grasping a virtual object with a bare hand.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115778218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}