{"title":"Detour Light Field Rendering for Diminished Reality Using Unstructured Multiple Views","authors":"Shohei Mori, Momoko Maezawa, Naoto Ienaga, H. Saito","doi":"10.1109/ISMAR-Adjunct.2016.0098","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct.2016.0098","url":null,"abstract":"Instructor's perspective videos are useful for presenting intuitive visual instructions for trainees in medical and industrial settings. In such videos, the instructor's arms often obstruct the trainee's view of the work area. In this article, we present a diminished reality method for visualizing the work area hidden by an instructor's arms by capturing the work area with multiple cameras. To achieve such diminished reality, we propose detour light field rendering (DLFR), in which light rays avoid passing through penalty points set in the unstructured light fields reconstructed from multiple viewpoint images. In DLFR, the camera blending field used in an existing freeviewpoint image generation method known as unstructured lumigraph is re-designed based on our use cases. In this re-design, lesser weights are given to light rays as they pass close to given penalty points. Experimental results demonstrate that using DLFR, the appearance of an undesirable object can be removed from an image in real time.","PeriodicalId":171967,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)","volume":"2011 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114745877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual Guidance for Encountered Type Haptic Display: A feasibility study","authors":"Chang-Gyu Lee, Gregory Lynn Dunn, Ian Oakley, J. Ryu","doi":"10.1109/ISMAR-Adjunct.2016.0044","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct.2016.0044","url":null,"abstract":"Virtual/mixed reality leveraging an encountered type haptic display will suffer difficulty if virtual and real objects are spatially discrepant. We propose a new method for resolving this issue, visual guidance. The visual guidance algorithm is defined and described in detail, and contrasted with a previously explored approach. The feasibility of the proposed algorithm is experimentally verified.","PeriodicalId":171967,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122856040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented Cues Facilitate Learning Transfer from Virtual to Real Environments","authors":"N. Cooper, F. Milella, Iain Cant, Carlo Pinto, Mark White, G. Meyer","doi":"10.1109/ISMAR-Adjunct.2016.0075","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct.2016.0075","url":null,"abstract":"The aim of this study was to investigate whether augmented cues that have previously been shown to enhance performance and user satisfaction in VR training translate into performance improvements in real environments. Subjects were randomly allocated into 3 groups. Group 1 were trained to perform a real tyre change, group 2 were trained in a conventional VR setting, while group 3 were trained in VR with augmented cues. After training participants were tested on a real tyre change task. Overall time to completion was recorded as objective measure; subjective ratings of presence, perceived workload and discomfort were recorded using questionnaires. The performances of the three groups were compared. Overall, participants who received VR training performed significantly faster on the real task than participants who completed the real tyre change only. The difference between the virtual reality training groups was found to be not significant. However, participants who were trained with augmented cues performed the real tyre change with fewer errors than participants in the minimal cues training group. 
Systematic differences in subjective ratings that reflected objective performance were also observed.","PeriodicalId":171967,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125222246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On Stage Interactive Spatial AR for Drama Performance","authors":"Yanxiang Zhang, Z. Zhu","doi":"10.1109/ISMAR-Adjunct.2016.0095","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct.2016.0095","url":null,"abstract":"The authors fuse the virtual objects and visual effects (VFX) and the performance of actors and actresses on real drama stage by developing an interactive spatial AR system, in which actors and actresses were interacting with virtual objects and VFX by their motion and gesture in real-time performance, and the images of virtual objects and VFX that projected on a transparent projection screen were aligned and matched to calibrate with their body part's position. The audience will see virtual objects and VFX are seamlessly matched on the real performers on stage space as if they are real things that are just under the control of the real performers. The audience will immerse into the drama scenes more deeply, hence offering the audience higher aesthetics feelings also bringing new possibilities to drama art creation.","PeriodicalId":171967,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129270275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human Attention and fatigue for AR Head-Up Displays","authors":"H. Okumura, K. Shinohara","doi":"10.1109/ISMAR-Adjunct.2016.0102","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct.2016.0102","url":null,"abstract":"We proposed and developed a novel monocular windshield augmented reality projector: WARP for AR and head-up display (HUD) applications. They use monocular vision that eliminates the depth cues caused by binocular parallax information. Our developed WARP system achieved not only a high hyper-reality performance with free depth perception and high visibility but also low eye-fatigue and wide field of attention.","PeriodicalId":171967,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130841451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AR Tabletop Interface using a Head-Mounted Projector","authors":"Y. Kemmoku, T. Komuro","doi":"10.1109/ISMAR-Adjunct.2016.0097","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct.2016.0097","url":null,"abstract":"In this paper, we propose a tabletop interface in which a user wears a projector with a depth camera on his or her head and can perform touch operations on an image projected on a flat surface. By using the head-mounted projector, images are always projected in front of the user in the direction of the user's gaze. By changing the image to be projected based on the user's head movement, this interface realizes a large effective screen size. The system superimposes an image on the flat surface by performing plane detection, placing the image on the detected plane, performing perspective projection to obtain a 2D image, and projecting the 2D image using the projector. Registration between the real world and the image is performed by estimating the user's head pose using the detected plane information. Furthermore, touch input is recognized by detecting the user's finger on the plane using the depth camera. We implemented some application examples into the system to demonstrate the usefulness of the proposed interface.","PeriodicalId":171967,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131489795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reduction of Interaction Space in Single Point Active Alignment Method for Optical See-Through Head-Mounted Display Calibration","authors":"Long Qian, A. Winkler, B. Fuerst, P. Kazanzides, Nassir Navab","doi":"10.1109/ISMAR-Adjunct.2016.0066","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct.2016.0066","url":null,"abstract":"With users always involved in the calibration of optical see-through head-mounted displays, the accuracy of calibration is subject to human-related errors, for example, postural sway, an unstable input medium, and fatigue. In this paper we propose a new calibration approach: Fixed-head 2 degree-of-freedom (DOF) interaction for Single Point Active Alignment Method (SPAAM) reduces the interaction space from a typical 6 DOF head motion to a 2 DOF cursor position on the semi-transparent screen. It uses a mouse as input medium, which is more intuitive and stable, and reduces user fatigue by simplifying and speeding up the calibration procedure.A multi-user study confirmed the significant reduction of humanrelated error by comparing our novel fixed-head 2 DOF interaction to the traditional interaction methods for SPAAM.","PeriodicalId":171967,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121323625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Randomly Distributed Small Chip Makers","authors":"Sei Ikeda, Anh Nguyen Trung, Takumi Komae, F. Shibata, Asako Kimura","doi":"10.1109/ISMAR-Adjunct.2016.0088","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct.2016.0088","url":null,"abstract":"In this paper, we propose a novel marker design and its tracking algorithm for room-sized MR/AR environments. The markers and the algorithm are designed to solve the following practical problems: i) the difficulties in creating and arranging markers and ii) the trade-off between inconspicuousness and robustness of markers. The proposed markers are small chips that are cut off a large paper sheet, and are arranged at random positions in an environment or on objects. This paper shows the design concept and feasibility of the proposed markers.","PeriodicalId":171967,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127780370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TeachAR: An Interactive Augmented Reality Tool for Teaching Basic English to Non-native Children","authors":"C. Dalim, Arindam Dey, Thammathip Piumsomboon, M. Billinghurst, M. S. Sunar","doi":"10.1109/ISMAR-Adjunct.2016.0046","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct.2016.0046","url":null,"abstract":"TeachAR is an Augmented Reality (AR) tool for teaching English colors, shapes, and spatial relationships to young children aged 4 to 6 years old who are non-native speakers of English. TeachAR utilizes the ARToolkit plugin for the Unity game engine for square marker tracking and game development. The Microsoft Kinect's microphone and speech API is used for isolated word speech recognition, a webcam for image capturing and a desktop monitor for viewing the AR scene. Previous language learning AR applications usually use audio output, however TeachAR uses speech as input for language learning. This paper describes the TeachAR demonstration and user experience with the application.","PeriodicalId":171967,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133788934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Technical Concept and Technology Choices for Implementing a Tangible Version of the Sokoban Game","authors":"Granit Luzhnica, Christoffer Ojeling, Eduardo Veas, Viktoria Pammer-Schindler","doi":"10.1109/ISMAR-ADJUNCT.2016.46","DOIUrl":"https://doi.org/10.1109/ISMAR-ADJUNCT.2016.46","url":null,"abstract":"This paper presents and discusses the technical concept of a virtual reality version of the Sokoban game with a tangible interface. The underlying rationale is to provide spinal-cord injury patients who are learning to use a neuroprosthesis to restore their capability of grasping with a game environment for training. We describe as relevant elements to be considered in such a gaming concept: input, output, virtual objects, physical objects, activity tracking and personalised level recommender. Finally, we also describe our experiences with instantiating the overall concept with hand-held mobile phones, smart glasses and a head mounted cardboard setup.","PeriodicalId":171967,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130420333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}