Bilal Ahmed, Jong Hun Lee, Yong Yi Lee, Kwan H. Lee
{"title":"Mimicking an Object Using Multiple Projectors","authors":"Bilal Ahmed, Jong Hun Lee, Yong Yi Lee, Kwan H. Lee","doi":"10.1109/ISMAR-Adjunct.2016.0040","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct.2016.0040","url":null,"abstract":"Recently, many researchers have focused on 3D projection mapping systems, but reproducing high-quality appearances has received relatively little attention. Considerable work has been done on blending multiple projector outputs, 3D projection mapping, and large tiled projector mosaics, yet existing color-compensation frameworks still suffer to some extent from contrast compression, color inconsistencies, and inappropriate luminance over arbitrarily shaped projection surfaces, resulting in an unconvincing appearance for realism-oriented SAR applications. The problem of producing a realistic result with minimal contrast compression and acceptable black levels using projection mapping on 3D objects, in order to virtually recreate an original object of similar appearance, remains unsolved. We propose a method that uses high-quality measured data from original objects and regenerates the same appearance by projecting optimized images from multiple projectors, ensuring that the projection-rendered results look close to the real object.","PeriodicalId":171967,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)","volume":"263 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133591445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
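The abstract above describes compensating projector input images so that the light falling on an arbitrarily shaped surface reproduces a measured target appearance. A minimal numpy sketch of the underlying per-pixel idea, not the authors' actual pipeline: it assumes a linear projector response and pre-captured full-white images per projector, and the function name `compensation_images` is hypothetical.

```python
import numpy as np

def compensation_images(target, full_white, weights):
    """Per-pixel radiometric compensation for multiple overlapping projectors.

    target     : (H, W, 3) desired appearance, values in [0, 1]
    full_white : list of (H, W, 3) captured appearances with each projector
                 showing full white (encodes surface reflectance and falloff)
    weights    : list of (H, W) blending weights that sum to 1 in overlaps
    Assumes a linear projector response; a real system would also apply
    inverse gamma and inter-projector color mixing.
    """
    inputs = []
    for fw, w in zip(full_white, weights):
        # Each projector contributes input * fw of light; split the target
        # among projectors according to the blending weights and solve for
        # the input that makes the contributions sum to the target.
        contrib = target * w[..., None]
        inp = np.divide(contrib, fw, out=np.zeros_like(contrib),
                        where=fw > 1e-6)
        # Clipping to [0, 1] is where contrast compression appears: targets
        # brighter than the projectors can produce are unreachable.
        inputs.append(np.clip(inp, 0.0, 1.0))
    return inputs
```

Within the achievable gamut, the weighted contributions of all projectors sum back to the target exactly; the optimization the paper alludes to is needed precisely where clipping makes the naive solution infeasible.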
Srinidhi Hegde, Ramakrishna Perla, R. Hebbalaguppe, Ehtesham Hassan
{"title":"GestAR: Real Time Gesture Interaction for AR with Egocentric View","authors":"Srinidhi Hegde, Ramakrishna Perla, R. Hebbalaguppe, Ehtesham Hassan","doi":"10.1109/ISMAR-Adjunct.2016.0090","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct.2016.0090","url":null,"abstract":"The sophisticated AR gadgets on the market today are mostly exorbitantly priced. This limits their use by academic research institutes and their reach to the mass market in general. Among the most popular frugal head mounts, Google Cardboard (GC) and Wearality are video-see-through devices that can provide immersive AR and VR experiences with a smartphone. Stereo rendering of the camera feed and overlaid information on the smartphone lets us experience AR with GC. These frugal devices have limited user-input capability, supporting interactions such as head tilting, a magnetic trigger, and a conductive lever. We propose a reliable and intuitive gesture-based interaction technique for these devices. The hand gesture recognition employs Gaussian Mixture Models (GMMs) over human skin pixels and tracks the segmented foreground using optical flow to detect the hand-swipe direction for triggering a relevant event. Real-time performance is achieved by implementing the hand gesture recognition module on the smartphone, thus reducing latency. We augment real-time hand gestures as a new GC interface and evaluate it in terms of subjective metrics against the available user interactions in GC.","PeriodicalId":171967,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133616105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
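The GestAR abstract describes a two-stage pipeline: GMM-based skin-pixel segmentation, then tracking the segmented foreground to read off the swipe direction. A toy sketch of that pipeline follows; the GMM component parameters and threshold are invented for illustration (the paper's trained model is not published here), and centroid displacement stands in for the paper's optical-flow step.

```python
import numpy as np

# Illustrative skin-color GMM in normalized (r, g) chromaticity space;
# these component parameters are made up, not the paper's trained model.
MEANS = np.array([[0.45, 0.31], [0.40, 0.30]])
COVS = np.array([[0.001, 0.001], [0.001, 0.001]])  # diagonal covariances
WEIGHTS = np.array([0.6, 0.4])

def skin_mask(rgb, threshold=10.0):
    """Classify pixels as skin by thresholding the GMM likelihood."""
    s = rgb.sum(axis=-1, keepdims=True) + 1e-6
    rg = (rgb / s)[..., :2]                # normalized chromaticity
    like = np.zeros(rgb.shape[:2])
    for m, c, w in zip(MEANS, COVS, WEIGHTS):
        d = ((rg - m) ** 2 / c).sum(axis=-1)
        norm = 2 * np.pi * np.sqrt(c.prod())
        like += w * np.exp(-0.5 * d) / norm
    return like > threshold

def swipe_direction(masks):
    """Infer a left/right swipe from the skin-centroid track
    (a stand-in for the paper's optical-flow step)."""
    xs = [np.nonzero(m)[1].mean() for m in masks if m.any()]
    if len(xs) < 2:
        return None
    return "right" if xs[-1] - xs[0] > 0 else "left"
```

The design choice the abstract highlights, running this module on the smartphone itself, matters because both stages are cheap per-frame operations, which is what keeps the latency low.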
I. Kakadiaris, Mohammad M. Islam, Tian Xie, Christophoros Nikou, A. Lumsden
{"title":"iRay: Mobile AR Using Structure Sensor","authors":"I. Kakadiaris, Mohammad M. Islam, Tian Xie, Christophoros Nikou, A. Lumsden","doi":"10.1109/ISMAR-Adjunct.2016.0058","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct.2016.0058","url":null,"abstract":"Using depth information has become more popular in recent years, as it adds a new dimension to 2D camera views. We have developed a novel mobile application called iRay, which uses depth information to achieve highly accurate markerless registration on a mobile device for medical use. We use a Structure Sensor, a portable depth camera attached to an iPad, to capture depth information; its SDK also provides SLAM data for pose tracking. ICP is applied to achieve highly accurate registration between the 3D surface of a human torso and a pre-scanned torso model. Our experiments demonstrate the results under motion blur, partial occlusion, and small movements. This application has the potential to be used for medical education and intervention.","PeriodicalId":171967,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)","volume":"148 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133344590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
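The registration step iRay relies on is standard ICP: alternate between matching each source point to its nearest destination point and solving for the least-squares rigid transform (the Kabsch/SVD solution). A toy numpy version of that loop, with brute-force nearest neighbours and no outlier rejection, unlike a production implementation:

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping points A onto B (Kabsch)."""
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=20):
    """Toy ICP: brute-force nearest neighbours + Kabsch each iteration."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        d = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d.argmin(axis=1)]           # closest dst point each
        R, t = best_fit_transform(cur, matched)
        cur = cur @ R.T + t
        # Compose the increment onto the accumulated transform.
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

ICP only converges from a reasonable initial pose, which is why the application leans on the sensor SDK's SLAM tracking to keep the torso surface and the pre-scanned model roughly aligned between ICP runs.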
{"title":"Empower VR Art and AR Book with Spatial Interaction","authors":"Yangxiang Zhang, Z. Zhu, Zhu Yun","doi":"10.1109/ISMAR-Adjunct.2016.0094","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct.2016.0094","url":null,"abstract":"In some circumstances it is necessary to let the public interact with AR or VR content in physical space in an easy-to-use and low-cost way, for example marker-based AR books in the classroom and interactive VR or AR art in open-space exhibitions and museums. The authors developed tangible user interface elements based on marker recognition, including virtual buttons, virtual rotation, and virtual hotspots. These elements were integrated into various kinds of digital presentation systems by optimizing the logical structure and interaction design of the user interface system to realize convenient spatial interactions, providing friendly user interaction and effective communication of information. Especially when interacting with artworks, this interface can hide the technological aspects, reduce the technological noise over the art, and bring a magical, engaging experience to users.","PeriodicalId":171967,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125652947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wen-Jie Chen, Chun-Wei Chen, Jonas Wang, Ming-Der Shieh
{"title":"Effective Registration for Multiple Users AR System","authors":"Wen-Jie Chen, Chun-Wei Chen, Jonas Wang, Ming-Der Shieh","doi":"10.1109/ISMAR-Adjunct.2016.0092","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct.2016.0092","url":null,"abstract":"Registration is an important task in augmented reality (AR) systems. For markerless AR, feature descriptors are generally used as the basis of the registration process, which is expected to be robust across application scenarios. This work explores effective schemes to improve registration results, especially for applications with large viewpoint angles. Using the proposed scheme, registration error is reduced by evaluating only feature points near the virtual object and within the region of interest. Experimental results reveal that the proposed schemes reduce registration error by about 30% to 50% and feature data size by a factor of ten. The bandwidth required for transmitting features among users is therefore decreased accordingly.","PeriodicalId":171967,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126245025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
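The core idea of the abstract above, pruning the feature set to points near the virtual object and inside a region of interest before matching and transmission, can be sketched in a few lines. The function and its signature are illustrative, not the paper's API:

```python
import numpy as np

def filter_features(keypoints, descriptors, obj_center, radius, roi):
    """Keep only features near the virtual object and inside the ROI.

    keypoints   : (N, 2) image coordinates
    descriptors : (N, D) feature descriptors
    obj_center  : (2,) projected position of the virtual object
    radius      : keep features within this distance of the object
    roi         : (x0, y0, x1, y1) region of interest
    Returns the reduced arrays - the subset actually used for
    registration and transmitted between users.
    """
    x0, y0, x1, y1 = roi
    in_roi = ((keypoints[:, 0] >= x0) & (keypoints[:, 0] <= x1) &
              (keypoints[:, 1] >= y0) & (keypoints[:, 1] <= y1))
    near = np.linalg.norm(keypoints - obj_center, axis=1) <= radius
    keep = in_roi & near
    return keypoints[keep], descriptors[keep]
```

Pruning helps twice: far-away features contribute the largest reprojection errors under large viewpoint changes, and the smaller descriptor set directly shrinks the payload shared among users.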
B. Thompson, L. Levy, Amelia Lambeth, David Byrd, Joelle Alcaidinho, Iulian Radu, Maribeth Gandy Coleman
{"title":"Participatory Design of STEM Education AR Experiences for Heterogeneous Student Groups: Exploring Dimensions of Tangibility, Simulation, and Interaction","authors":"B. Thompson, L. Levy, Amelia Lambeth, David Byrd, Joelle Alcaidinho, Iulian Radu, Maribeth Gandy Coleman","doi":"10.1109/ISMAR-Adjunct.2016.0038","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct.2016.0038","url":null,"abstract":"In this paper, we present the results of a multi-year participatory design process exploring the space of educational AR experiences for STEM education targeted at students of various ages and abilities. Our participants included teachers, students (ages five to fourteen), educational technology experts, game designers, and HCI researchers. The work was informed by state educational curriculum guidelines. The activities included developing a set of design dimensions which guided our ideation process, iteratively designing, building, and evaluating six prototypes with our stakeholders, and collecting our observations regarding the use of AR STEM applications by target students.","PeriodicalId":171967,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)","volume":"321 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122221748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Object and State Models for AR Task Guidance","authors":"W. Hoff, H. Zhang","doi":"10.1109/ISMAR-Adjunct.2016.0093","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct.2016.0093","url":null,"abstract":"We present a method for automatically learning object and state models, which can be used for recognition in an augmented reality task guidance system. We assume that the task involves objects whose appearance is fairly consistent, but the background may vary. The novelty of our approach is that the system can be automatically constructed from examples of experts performing the task. As a result, the system can be easily adapted to new tasks. The approach makes use of the fact that the key features of the object are consistently present in multiple viewing instances; whereas features from the background or irrelevant objects are not consistently present. Using information theory, we automatically identify the features that can best discriminate between object states. In evaluations, our prototype successfully recognized object states in all trials.","PeriodicalId":171967,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115318344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
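The information-theoretic selection step described above, picking the features that best discriminate between object states, amounts to ranking features by the mutual information between a feature's presence and the state label. A minimal stdlib sketch of that criterion (the representation of "presence" as a binary indicator per observation is an assumption for illustration):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(presence, states):
    """Mutual information between a binary feature-presence indicator and
    the object-state label: how well this feature discriminates states."""
    n = len(states)
    gain = entropy(states)
    for value in (True, False):
        subset = [s for p, s in zip(presence, states) if p == value]
        if subset:
            # Subtract the entropy remaining once presence is known.
            gain -= len(subset) / n * entropy(subset)
    return gain
```

A feature whose presence perfectly predicts the state scores the full state entropy; one that appears equally often in every state (like background clutter) scores zero, which is exactly the consistency argument the abstract makes.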
S. Baldassi, Grace T. Cheng, Jonathan Chan, Moqian Tian, Tim Christie, Matthew T. Short
{"title":"Exploring Immersive AR Instructions for Procedural Tasks: The Role of Depth, Motion, and Volumetric Representations","authors":"S. Baldassi, Grace T. Cheng, Jonathan Chan, Moqian Tian, Tim Christie, Matthew T. Short","doi":"10.1109/ISMAR-Adjunct.2016.0101","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct.2016.0101","url":null,"abstract":"Wearable Augmented Reality (W-AR) is based on getting a computer as intimate as possible with the wearer's body and senses. We need to understand the cognitive and perceptual mechanisms leveraged by this technology and use them when designing AR applications. In this study we explored the potential benefit of W-AR for guiding a procedural LEGO™ assembly task compared to traditional paper instructions. We measured the time taken to complete each step and the subjective perception of the helpfulness and effectiveness of the instructions, along with the perceived time spent on the task. The results show that adding motion cues to an AR stereo visualization of the instructions (Dynamic 3D) improved performance compared to both the paper instructions and an AR version with stereo only and no motion (Static 3D). Interestingly, performance in the Static 3D condition was the slowest of the three. Subjective reports did not show any difference across instruction types, suggesting that the advantage of Dynamic 3D instructions is not accessible to participants' covert awareness. The results support the idea that principles of neuroscience may have direct implications for product development in Wearable Augmented Reality.","PeriodicalId":171967,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122827693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. Vassigh, Albert Elias, F. Ortega, D. Davis, Giovanna Gallardo, Hadi Alhaffar, Lukas Borges, J. Bernal, N. Rishe
{"title":"Integrating Building Information Modeling with Augmented Reality for Interdisciplinary Learning","authors":"S. Vassigh, Albert Elias, F. Ortega, D. Davis, Giovanna Gallardo, Hadi Alhaffar, Lukas Borges, J. Bernal, N. Rishe","doi":"10.1109/ISMAR-Adjunct.2016.0089","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct.2016.0089","url":null,"abstract":"Augmented Reality provides a way to enhance the classroom experience. In particular, student learning about building systems in the fields of Architecture, Civil, and Mechanical Engineering may improve if visualization outside the classroom is provided. We propose that AR-SKOPE, an application that integrates Building Information Modeling and Augmented Reality, may improve learning. This application allows students to visit specific buildings and investigate their various systems, with supplementary information, using a phone or tablet. We are currently testing our early prototype in preparation for a semester-long study.","PeriodicalId":171967,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132528532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Perspective on Non-Isometric Shape-from-Template","authors":"A. Bartoli, E. Ozgur","doi":"10.1109/ISMAR-Adjunct.2016.0026","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct.2016.0026","url":null,"abstract":"Shape-from-Template (SfT) uses an object's shape template and a deformation law to achieve single-image reconstruction. SfT is a fundamental tool for retexturing or augmenting a deformable object in a monocular video. It has matured for isometric deformations, but the non-isometric case is still largely open, because the modeling is generally more complicated and the constraints are certainly weaker. Existing algorithms use, for instance, linear elasticity, require one to provide boundary conditions represented by known deformed shape parts, and need nonconvex optimization. We use a very simple and generic model to show that non-isometric SfT has a unique solution up to scale under strong perspective imaging and mild deformation curvature. Our model uses a novel type of homography interpretation that we call Perspective-Projection-Affine-Embedding. It may use boundary conditions if available and can be estimated with Linear Least Squares optimization. We provide experimental results on synthetic and real data.","PeriodicalId":171967,"journal":{"name":"2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133591242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
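The abstract above emphasizes that the authors' homography-based model can be estimated with linear least squares. Their Perspective-Projection-Affine-Embedding is not reproduced here, but the classical building block it extends, estimating a homography as a linear least-squares problem via the Direct Linear Transform, looks like this:

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate a 3x3 homography from >= 4 point pairs with the standard
    DLT: each pair gives two linear equations in the 9 entries of H, and
    the least-squares solution is the smallest right singular vector."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the projective scale
```

As in the paper's setting, linearity is the point: no nonconvex optimization is needed, and the solution is defined only up to scale, here fixed by normalizing H[2, 2] to one.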