Title: AR Circuit Constructor: Combining Electricity Building Blocks and Augmented Reality for Analogy-Driven Learning and Experimentation
Authors: Tobias Kreienbühl, Richard Wetzel, Naomi Burgess, A. M. Schmid, Dorothee Brovelli
DOI: 10.1109/ISMAR-Adjunct51615.2020.00019 (https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00019)
Venue: 2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), November 2020
Abstract: We present AR Circuit Constructor (ARCC), an augmented reality application to explore and inspect electric circuits for use in educational settings. Learners use tangible electricity building blocks to construct a working electric circuit. Then, they can use a tablet device for exploring the circuit in an augmented reality visualization. Learners can switch between three distinct conceptual analogies: bicycle chain, water pipes, and waterfalls. Through experimentation with different circuit configurations, learners explore different properties of electricity to ultimately improve their understanding of it. We describe the development of our application, including a qualitative user study with a group of STEM teachers. The latter allowed us to gain insights into the qualities required for such an application before it can ultimately be deployed in a classroom setting.
Title: LCR-SMPL: Toward Real-time Human Detection and 3D Reconstruction from a Single RGB Image
Authors: E. Peña-Tapia, Ryo Hachiuma, Antoine Pasquali, H. Saito
DOI: 10.1109/ISMAR-Adjunct51615.2020.00062 (https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00062)
Venue: 2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), November 2020
Abstract: This paper presents a novel method for simultaneous human detection and 3D shape reconstruction from a single RGB image. It offers a low-cost alternative to existing motion capture solutions, reconstructing realistic human 3D shapes and poses by leveraging the speed of an object-detection-based architecture and the broad applicability of a parametric human mesh model. Evaluation results on a synthetic dataset show that our approach is on par with conventional 3D reconstruction methods in terms of accuracy, and outperforms them in inference speed, particularly in the case of multi-person images.
Title: Exploring Virtual Environments by Visually Impaired Using a Mixed Reality Cane Without Visual Feedback
Authors: Lei Zhang, Klevin Wu, Bin Yang, Hao Tang, Zhigang Zhu
DOI: 10.1109/ISMAR-Adjunct51615.2020.00028 (https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00028)
Venue: 2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), November 2020
Abstract: Although virtual reality (VR) has matured considerably in recent years, the general public, and especially the population of blind and visually impaired (BVI) people, still cannot enjoy the benefits it provides. Current VR accessibility applications have been developed either for expensive head-mounted displays or with extra accessories and mechanisms, which are either inaccessible or inconvenient for BVI individuals. In this paper, we present a mobile VR app that enables BVI users to access a virtual environment on an iPhone in order to build their skills in perceiving and recognizing the virtual environment and the virtual objects in it. The app uses an iPhone on a selfie stick to simulate a long cane in VR, and applies Augmented Reality (AR) techniques to track the iPhone's real-time pose in an empty space of the real world, which is then synchronized to the long cane in the VR environment. Because it integrates VR and AR, we call it the Mixed Reality cane (MR Cane); it provides BVI users auditory and vibrotactile feedback whenever the virtual cane comes in contact with objects in VR. Thus, the MR Cane allows BVI individuals to interact with virtual objects and identify their approximate sizes and locations in the virtual environment. We performed preliminary user studies with blindfolded participants to investigate the effectiveness of the proposed mobile approach, and the results indicate that the MR Cane could effectively help BVI individuals understand interactions with virtual objects and explore 3D virtual environments. The MR Cane concept can be extended to new navigation, training, and entertainment applications for BVI individuals without significant additional effort.
Title: Stencil Marker: Designing Partially Transparent Markers for Stacking Augmented Reality Objects
Authors: Xuan Zhang, Jonathan Lundgren, Yoya Mesaki, Yuichi Hiroi, Yuta Itoh
DOI: 10.1109/ISMAR-Adjunct51615.2020.00073 (https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00073)
Venue: 2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), November 2020
Abstract: We propose a transparent colored AR marker that allows 3D objects to be stacked in space. Conventional AR markers make it difficult to display multiple objects in the same position in space, or to manipulate the order or rotation of objects. The proposed transparent colored markers are designed so that the order and rotation direction of each marker in the stack can be detected from the observed image, based on mathematical constraints. We describe these constraints and how they shape the marker design, the implementation that detects the stacking order and rotation of each marker, and a proof-of-concept application, Totem Poles. We also discuss the limitations of the current prototype and possible research directions.
Title: Industrial Augmented Reality: 3D-Content Editor for Augmented Reality Maintenance Worker Support System
Authors: Mario Lorenz, Sebastian Knopp, Jisu Kim, Philipp Klimant
DOI: 10.1109/ISMAR-Adjunct51615.2020.00060 (https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00060)
Venue: 2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), November 2020
Abstract: Supporting maintenance with instructions enhanced by 3D objects is one of the key applications of Augmented Reality (AR) in industry. For AR to break through in maintenance, it is important that technicians themselves can create AR instructions and perform the challenging task of placing 3D objects, since they know best how a task is performed and what information needs to be displayed. To address this challenge, we present a 3D-content editor in which, as a first step, the 3D objects can be placed roughly using a 2D image of the machine, thereby limiting the time required at the machine itself. In a second step, the positions of the 3D objects can be fine-tuned at the machine site using live footage. The key challenges were to develop an easily accessible UI that requires no prior knowledge of AR content creation, in a tool that works both with live footage and with images and is usable with a touch screen as well as keyboard and mouse. The 3D-content editor was qualitatively assessed by technicians, revealing its general applicability but also that considerable time is needed to gain the experience necessary for positioning 3D objects.
{"title":"An Exploratory Study for Designing Social Experience of Watching VR Movies Based on Audience’s Voice Comments","authors":"Shuo Yan, Wenli Jiang, Menghan Xiong, Xukun Shen","doi":"10.1109/ISMAR-Adjunct51615.2020.00049","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00049","url":null,"abstract":"Social experience is important when audience are watching movies. Virtual reality (VR) movies engage audience through immersive environment and interactive narrative. However, VR headsets restrict audience to an individual experience, which disrupt the potential for shared social realities. In our study, we propose an approach to design an asynchronous social experience that allows the participant to receive other audiences’ voice comments (such as their opinions, impressions or emotional reactions) in VR movies. We measured the participants’ feedback on their engagement levels, recall abilities and social presence. The results showed that in VR-Voice Comment (VR-VC) movie, the audience’s voice comments could affect participant’s engagement and the recall of information in the scenes. The participants obtained social awareness and enjoyment at the same time. A few of them were worried mainly because of the potential auditory clutter that resulted from unpredictable voice comments. We discuss the design implications for this and directions for future research. Overall, we observe a positive tendency in watching VR-VC movie, which could be adapted for future VR movie experience.","PeriodicalId":433361,"journal":{"name":"2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126275213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D human model creation on a serverless environment","authors":"Peter Fasogbon, Yu You, Emre B. Aksu","doi":"10.1109/ISMAR-Adjunct51615.2020.00044","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00044","url":null,"abstract":"The creation of realistic 3D human model is traditionally timeconsuming and cumbersome, and is typically done by professionals. In recent years computer vision technologies can assist in generating human models from controlled environments, we demonstrate a different but easy capturing scenario with less constraints on the subject or the environmental setup. The reconstruction process for 3D human model consists of various intermediate process such as semantic human segmentation, human skeletal keypoint detection, and texture generation. In order to achieve easy, scalable, and flexible deployment to different cloud environments, we have chosen the serverless architecture to offload some common service functionalities to the cloud infrastructure but focused on the core task,which is the reconstruction itself. The event-driven serverless architecture eases the building of such multimedia web services with minimal coding efforts, but simply defines the APIs and declares the APIs with correspondent lambda functions. The proposed approach in this paper allow anyone with a mobile phone to generate 3D models easily and quickly in the scale of few 2-3 minutes, rather than hours.","PeriodicalId":433361,"journal":{"name":"2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126289340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Virtual Morris Water Maze to Study Neurodegenarative Disorders","authors":"Daniel Roth, Christian Felix Purps, W. Neumann","doi":"10.1109/ISMAR-Adjunct51615.2020.00048","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00048","url":null,"abstract":"Navigation is a crucial cognitive skill that allows humans and animals to move from one place to another without getting lost. In neurological patients this skill can be impaired, when neural structures that form the brain networks important for spatial learning and navigation are impaired. Thus, spatial navigation represents an important measure of cognitive health that is impossible to test in a clinical examination, due to lack of space in examination rooms. Consequently, spatial navigation is largely neglected in the clinical assessment of neurological, neurosurgical and psychiatric patients. Virtual reality represents a unique opportunity to develop a systematic assessment of spatial navigation for diagnosis and therapeutic monitoring of millions of patients presenting with cognitive decline in the clinical routine. Therefore, we have adapted a classical spatial navigation paradigm that was developed for animal research, the \"Morris Water Maze\" as an openly available Virtual Reality (VR) application, that allows objective quantification of navigational skills in humans. This tool may be used in the future to aid the assessment of the human navigation system in health and neurological disease.","PeriodicalId":433361,"journal":{"name":"2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115038556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Intention to use an interactive AR app for engineering education
Authors: Alejandro Álvarez-Marín, J. Velázquez‐Iturbide, M. Castillo-Vergara
DOI: 10.1109/ISMAR-Adjunct51615.2020.00033 (https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00033)
Venue: 2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), November 2020
Abstract: Augmented reality (AR) has been incorporated into educational processes in various subjects to improve academic performance. One of these areas is electronics, since students often have difficulty understanding electricity. We developed an interactive AR app on electrical circuits. The app allows the manipulation of circuit elements and computes the voltage and current values using the loop method and Kirchhoff's voltage law. This research aims to determine students' intention to use the AR app, and whether that intention depends on how the survey is administered (online or face-to-face) or on the students' gender. The results show that students rate the app well in terms of intention to use. Regarding how the survey is administered, attitude toward using does not differ significantly; in contrast, students who completed the online survey reported a higher behavioral intention to use than those who participated in the guided laboratory. Regarding gender, women showed a more positive attitude toward using the technology and a higher behavioral intention to use it than men.
{"title":"Modeling Emotions for Training in Immersive Simulations (METIS): A Cross-Platform Virtual Classroom Study","authors":"A. Delamarre, C. Lisetti, Cédric Buche","doi":"10.1109/ISMAR-Adjunct51615.2020.00036","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00036","url":null,"abstract":"Virtual training environments (VTEs) using immersive technology have been able to successfully provide training for technical skills. Combined with recent advances in virtual social agent technologies and in affective computing, VTEs can now also support the training of social skills. Research looking at the effects of different immersive technologies on users’ experience (UX) can provide important insights about their impact on user’s engagement with the technology, sense presence and co-presence. However, current studies do not address whether emotions displayed by virtual agents provide the same level of UX across different virtual reality (VR) platforms. In this study, we considered a virtual classroom simulator built for desktop computer, and adapted for an immersive VR platform (CAVE). Users interact with virtual animated disruptive students able to display facial expressions, to help them practice their classroom behavior management skills. We assessed effects of the VR platforms and of the display of facial expressions on presence, co-presence, engagement, and believability. Results indicate that users were engaged, found the virtual students believable and felt presence and co-presence for both VR platforms. We also observed an interaction effects of facial expressions and VR platforms for co-presence (p = .018 < .05).","PeriodicalId":433361,"journal":{"name":"2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"221 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128624438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}