Device-Agnostic Augmented Reality Rendering Pipeline for AR in Medicine
F. Cutolo, N. Cattari, M. Carbone, R. D’Amato, V. Ferrari
2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), October 2021. DOI: 10.1109/ISMAR-Adjunct54149.2021.00077
Abstract: Visual augmented reality (AR) headsets have the potential to enhance surgical navigation by providing physicians with an egocentric visualization interface capable of seamlessly blending the virtual navigation aid with the real surgical scenario. However, technological and human-factor limitations still hinder the routine use of commercial AR headsets in clinical practice. This work unveils the AR rendering pipeline of a device-agnostic software framework conceived to meet the strict requirements of a functional and reliable AR-based surgical navigator and to support the deployment of AR applications for image-guided surgery on different AR headsets. The pipeline provides a highly accurate AR overlay under both video see-through (VST) and optical see-through (OST) modalities, with almost no perceivable difference in the perception of relative distances and depths when used in the peripersonal space. It allows the intrinsic and extrinsic projection parameters of the virtual rendering cameras to be set offline and at runtime: under VST modality, the pipeline can adapt the warping of the camera frames to pursue an orthostereoscopic and almost natural perception of the real scene in the peripersonal space; similarly, under OST modality, the calibrated intrinsic and extrinsic parameters of the eye-display model can be updated by the user to account for the actual eye position. Performance tests with an eye-replacement camera show an average motion-to-photon latency of around 110 ms for both rendering modalities. The AR platform for surgical navigation has already proven its efficacy and reliability under VST modality during real craniomaxillofacial surgical operations.
{"title":"Designing Virtual Pedagogical Agents and Mentors for Extended Reality","authors":"Tiffany D. Do","doi":"10.1109/ISMAR-Adjunct54149.2021.00112","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct54149.2021.00112","url":null,"abstract":"The use of virtual and augmented reality for educational purposes has seen a rapid increase in interest in recent years. Extended reality offers unique affordances to learners, and can enhance learning. Specifically, we are interested in the use of pedagogical agents in extended reality due to their potential to increase student motivation and learning. However, the design of pedagogical agents in extended reality is still a nascent area of study, which can be important in an immersive environment where social cues can be more salient. Pedagogical agent design aspects such as speech, appearance, and modality can prime social cues and affect learning outcomes and instructor perception. In this paper, we propose a project to investigate auditory and visual social cues of pedagogical agents in XR such as speech, ethnicity, and modality.","PeriodicalId":244088,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131100472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Eye-gaze, inter-brain synchrony, and collaborative VR in conjunction with online counselling: A pilot study
Ihshan Gumilar, Amit Barde, Ashkan F. Hayati, M. Billinghurst, Sanjit Singh
2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), October 2021. DOI: 10.1109/ISMAR-Adjunct54149.2021.00021
Abstract: Eye gaze plays an essential role in interpersonal communication, and its role in face-to-face interactions, including those taking place in virtual environments (VEs), has been extensively explored. However, the neural correlates of eye gaze in interpersonal communication have not been examined exhaustively. The research detailed in this paper explores the neural correlates of eye gaze between two individuals interacting in a VE. The choice of a VE is motivated by the increasing frequency with which we use desktop- or head-mounted display (HMD)-based VEs to interact with one another; the onset of the COVID-19 pandemic has accelerated the pace at which these technologies are being adopted for remote collaboration. The pilot study described here explores the effects of eye gaze on face-to-face interaction in a VE using hyperscanning, a technique that measures the neural activity of the interacting participants and determines empirically whether they display neural synchrony. Our results demonstrate that eye-gaze direction appears to play a significant role in determining whether interacting individuals exhibit inter-brain synchrony. Results from this study can significantly benefit and contribute to positive outcomes for individuals with mental health disorders, and we believe the techniques described here can be used to extend high-quality mental health care to individuals irrespective of their geographical location.
A Comparison of Common Video Game versus Real-World Heads-Up-Display Designs for the Purpose of Target Localization and Identification
Yanqiu Tian, Alexander G Minton, Howe Yuan Zhu, Gina M. Notaro, R. Galvan-Garza, Yu-kai Wang, Hsiang-Ting Chen, James Allen, M. Ziegler, Chin-Teng Lin
2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), October 2021. DOI: 10.1109/ISMAR-Adjunct54149.2021.00054
Abstract: This paper presents the findings of an investigation into user ergonomics and performance for industry-inspired and traditional video game-inspired heads-up-display (HUD) designs for target localization and identification in a 3D real-world environment. Our online user study (N = 85) compared one industry-inspired design (Ellipse) with three common video game HUD designs (Radar, Radar Indicator, and Compass). Participants interacted with and evaluated each HUD design through our novel web-based game, which involved a target localization and identification task; we recorded and analyzed task performance as a quantitative metric. Afterwards, participants provided qualitative responses on specific aspects of each HUD design and comparatively rated the designs. Our findings show that the common video game HUDs not only provide performance comparable to the real-world-inspired HUD, but were also preferred: participants tended to favor the designs they already had experience with, namely the video game designs.
Virtual Negotiation Training "Beat the Bot"
Jan Fiedler, Barbara Dannenmann, Simon Oed, Alexander Kracklauer
2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), October 2021. DOI: 10.1109/ISMAR-Adjunct54149.2021.00119
Abstract: The VR application "Beat the Bot" successfully combines VR and AI in a negotiation-based dialogue scenario. Its purpose is to achieve real training success with users and to open access to modern, future-oriented negotiation training. The aim of the application is to let users experience a negotiation situation in the form of a pitch and learn to apply an ideally optimal negotiation style in a highly competitive sales negotiation for capital goods. The user slips into the role of the seller and uses natural language to negotiate with two virtual, AI-controlled agents acting as professional buyers. The application motivates repetition and consolidation of the learning content by incorporating playful elements and creating a serious-game environment [1].
{"title":"Deepfake Portraits in Augmented Reality for Museum Exhibits","authors":"Nathan Wynn, K. Johnsen, Nick Gonzalez","doi":"10.1109/ISMAR-Adjunct54149.2021.00125","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct54149.2021.00125","url":null,"abstract":"In a collaboration with the Georgia Peanut Commission’s Education Center and museum in Georgia, USA, we developed an augmented reality app to guide visitors through the museum and offer immersive educational information about the artifacts, exhibits, and artwork displayed therein. Notably, our augmented reality system applies the First Order Motion Model for Image Animation to several portraits of individuals influential to the Georgia peanut industry to provide immersive animated narration and monologue regarding their contributions to the peanut industry. [4]","PeriodicalId":244088,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124009829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Nugget-Based Concept for Creating Augmented Reality","authors":"Linda Rau, R. Horst, Yu Liu, R. Dörner","doi":"10.1109/ISMAR-Adjunct54149.2021.00051","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct54149.2021.00051","url":null,"abstract":"Creating Augmented Reality (AR) applications can be challenging, especially for persons with little or no technical background. This work introduces a concept for pattern-based AR applications that we call AR nuggets. One AR nugget reflects a single pattern from an application domain and includes placeholder objects and default parameters. Authors of AR applications can start with an AR nugget as an executable stand-alone application and customize it. This aims to support and facilitate the authoring process. Additionally, this paper identifies suitable application patterns that serve as a basis for AR nuggets. We implement and adapt AR nuggets to an exemplary use case in the medical domain. In an expert user study, we show that AR nuggets add a statistically significant value to an educational course and can support continuing education in the medical domain.","PeriodicalId":244088,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125399248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Depth Perception using X-Ray Visualizations","authors":"Thomas J. Clarke","doi":"10.1109/ISMAR-Adjunct54149.2021.00114","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct54149.2021.00114","url":null,"abstract":"Augmented Reality’s ability to create visual cues that extend reality allows for many new abilities to enhance the way we work, problem solve and evaluate activities. Combining the digital and physical world’s information requires new understandings of how we perceive reality. The ability to look through physical objects without getting conflicting depth cues (X-Ray vision) is one challenge that is currently an open research question. The current research states several methods for improving depth perception such as providing extra occlusion by utilizing X-ray vision effects [4], [11], [12], [18], [23]. Currently, there is a lack of knowledge around this space into how and why some of these aspects work or the different strengths that using these techniques can offer. My research aims at developing a deeper understanding of X-Ray vision effects and how they can and should be used.","PeriodicalId":244088,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125951421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Depth Inpainting via Vision Transformer","authors":"Ilya Makarov, Gleb Borisenko","doi":"10.1109/ISMAR-Adjunct54149.2021.00065","DOIUrl":"https://doi.org/10.1109/ISMAR-Adjunct54149.2021.00065","url":null,"abstract":"Depth inpainting is a crucial task for working with augmented reality. In previous works missing depth values are completed by convolutional encoder-decoder networks, which is a kind of bottleneck. But nowadays vision transformers showed very good quality in various tasks of computer vision and some of them became state of the art. In this study, we presented a supervised method for depth inpainting by RGB images and sparse depth maps via vision transformers. The proposed model was trained and evaluated on the NYUv2 dataset. Experiments showed that a vision transformer with a restrictive convolutional tokenization model can improve the quality of the inpainted depth map.","PeriodicalId":244088,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)","volume":"485 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127278715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}