2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct): Latest Publications

Device-Agnostic Augmented Reality Rendering Pipeline for AR in Medicine
2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) Pub Date : 2021-10-01 DOI: 10.1109/ISMAR-Adjunct54149.2021.00077
F. Cutolo, N. Cattari, M. Carbone, R. D'Amato, V. Ferrari
Visual augmented reality (AR) headsets have the potential to enhance surgical navigation by providing physicians with an egocentric visualization interface capable of seamlessly blending the virtual navigation aid with the real surgical scenario. However, technological and human-factor limitations still hinder the routine use of commercial AR headsets in clinical practice. The aim of this work is to unveil the AR rendering pipeline of a device-agnostic software framework conceived to fulfill strict requirements towards the realization of a functional and reliable AR-based surgical navigator and capable of supporting the deployment of AR applications for image-guided surgery on different AR headsets. The AR rendering pipeline provides highly accurate AR overlay under both video and optical see-through modalities with almost no perceivable difference in terms of perception of relative distances and depths when used in the peripersonal space. The rendering pipeline allows the setting of the intrinsic and extrinsic projection parameters of the virtual rendering cameras offline and at runtime: under video see-through modality, the rendering pipeline can be modified to adapt the warping of the camera frames and pursue an orthostereoscopic and almost natural perception of the real scene in the peripersonal space. Similarly, under optical see-through modality, the calibrated intrinsic and extrinsic parameters of the eye-display model can be updated by the user to account for the actual user's eye position. The results of the performance tests with an eye-replacement camera show an average motion-to-photon latency of around 110 ms for both AR rendering modalities. The AR platform for surgical navigation has already proven its efficacy and reliability under VST modality during real surgical operations in craniomaxillofacial surgery.
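The abstract describes setting the intrinsic and extrinsic projection parameters of the virtual rendering cameras offline and at runtime. As a rough illustration of what "setting intrinsics" involves, and not the paper's actual framework or API, the following sketch converts pinhole camera intrinsics into an OpenGL-style projection matrix, the kind of matrix a rendering camera would consume:

```python
import numpy as np

def opengl_projection(K, width, height, near, far):
    """Build a column-major OpenGL-style projection matrix from pinhole
    intrinsics K (3x3: fx, fy, cx, cy). Illustrative sketch only: the
    paper's framework exposes such parameters, but this exact construction
    is an assumption, not its API."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([
        [2 * fx / width, 0.0, 1 - 2 * cx / width, 0.0],
        [0.0, 2 * fy / height, 2 * cy / height - 1, 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ])

# Example: a 640x480 virtual camera with 500 px focal length.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
P = opengl_projection(K, 640, 480, near=0.1, far=100.0)
```

Updating this matrix at runtime (e.g. after re-estimating the eye-display model) amounts to rebuilding it from the new intrinsics, which is why exposing these parameters makes a pipeline device-agnostic.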
Citations: 1

Designing Virtual Pedagogical Agents and Mentors for Extended Reality
2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) Pub Date : 2021-10-01 DOI: 10.1109/ISMAR-Adjunct54149.2021.00112
Tiffany D. Do
The use of virtual and augmented reality for educational purposes has seen a rapid increase in interest in recent years. Extended reality offers unique affordances to learners and can enhance learning. Specifically, we are interested in the use of pedagogical agents in extended reality due to their potential to increase student motivation and learning. However, the design of pedagogical agents in extended reality is still a nascent area of study, and such design can be especially important in an immersive environment where social cues are more salient. Pedagogical agent design aspects such as speech, appearance, and modality can prime social cues and affect learning outcomes and instructor perception. In this paper, we propose a project to investigate auditory and visual social cues of pedagogical agents in XR, such as speech, ethnicity, and modality.
Citations: 2

Eye-gaze, inter-brain synchrony, and collaborative VR in conjunction with online counselling: A pilot study
2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) Pub Date : 2021-10-01 DOI: 10.1109/ISMAR-Adjunct54149.2021.00021
Ihshan Gumilar, Amit Barde, Ashkan F. Hayati, M. Billinghurst, Sanjit Singh
Eye-gaze plays an essential role in interpersonal communication. Its role in face-to-face interactions and those in virtual environments (VE) has been extensively explored. However, the neural correlates of eye-gaze in interpersonal communication have not been explored exhaustively. The research detailed in this paper is an attempt to explore the neural correlates of eye gaze among two interacting individuals in a VE. The choice of using a VE has been motivated by the increasing frequency with which we use desktop or Head Mounted Display (HMD) based VEs to interact with each other. The onset of the COVID-19 pandemic has accelerated the pace at which these technologies are being adopted for the purpose of remote collaboration. The pilot study described in this paper is an attempt to explore the effects of eye gaze on face-to-face interaction in a VE using the hyperscanning technique. This technique is used to measure neural activity and determine empirically whether the participants being measured display neural synchrony. Our results demonstrate that eye-gaze direction appears to play a significant role in determining whether interacting individuals exhibit inter-brain synchrony. Results from this study can significantly benefit and contribute to positive outcomes for individuals with mental health disorders. We believe the techniques described here can be used to extend high-quality mental health care to individuals irrespective of their geographical location.
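Inter-brain synchrony from hyperscanning data is commonly quantified with phase-based metrics. One widely used choice, shown here as an illustration and not necessarily the metric used in this study, is the phase-locking value (PLV) between two band-limited signals:

```python
import numpy as np

def analytic_phase(x):
    """Instantaneous phase via an FFT-based Hilbert transform (NumPy only)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0  # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0      # keep the Nyquist bin as-is
    return np.angle(np.fft.ifft(spec * h))

def plv(x, y):
    """Phase-locking value: 1.0 for a constant phase difference between the
    two signals, near 0 when their phases are unrelated."""
    dphi = analytic_phase(x) - analytic_phase(y)
    return np.abs(np.mean(np.exp(1j * dphi)))
```

In a hyperscanning analysis, `x` and `y` would be the same EEG channel (or source) from the two interacting participants, filtered to a frequency band of interest before computing the PLV.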
Citations: 1

A Comparison of Common Video Game versus Real-World Heads-Up-Display Designs for the Purpose of Target Localization and Identification
2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) Pub Date : 2021-10-01 DOI: 10.1109/ISMAR-Adjunct54149.2021.00054
Yanqiu Tian, Alexander G Minton, Howe Yuan Zhu, Gina M. Notaro, R. Galvan-Garza, Yu-kai Wang, Hsiang-Ting Chen, James Allen, M. Ziegler, Chin-Teng Lin
This paper presents the findings of an investigation into the user ergonomics and performance of industry-inspired and traditional video-game-inspired Heads-Up-Display (HUD) designs for target localization and identification in a 3D real-world environment. Our online user study (N = 85) compared one industry-inspired design (Ellipse) to three common video game HUD designs (Radar, Radar Indicator, and Compass). Participants interacted with and evaluated each HUD design through our novel web-based game. The game involved a target localization and identification task in which we recorded and analyzed performance results as a quantitative metric. Afterwards, participants were asked to provide qualitative responses on specific aspects of each HUD design and to comparatively rate the designs. Our findings show that not only do common video game HUDs provide performance comparable to the real-world-inspired HUD, but participants also tended to prefer the designs they already had experience with, namely the video game designs.
Citations: 1

IEEE ISMAR 2021 - Panels [2 abstracts]
2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) Pub Date : 2021-10-01 DOI: 10.1109/ismar-adjunct54149.2021.00010
Citations: 0

Virtual Negotiation Training "Beat the Bot"
2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) Pub Date : 2021-10-01 DOI: 10.1109/ISMAR-Adjunct54149.2021.00119
Jan Fiedler, Barbara Dannenmann, Simon Oed, Alexander Kracklauer
The VR application "Beat the Bot" combines VR and AI in a negotiation-based dialogue scenario, opening a path toward modern, future-oriented negotiation training. Its aim is to let users experience a negotiation situation in the form of a pitch and learn to apply an ideally optimal negotiation style in a highly competitive sales negotiation for capital goods. The user slips into the role of the seller and uses natural language to negotiate with two virtual, AI-controlled agents acting as professional buyers. The application encourages repetition and consolidation of the learning content by incorporating playful elements and creating a serious-game environment [1].
Citations: 0

Deepfake Portraits in Augmented Reality for Museum Exhibits
2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) Pub Date : 2021-10-01 DOI: 10.1109/ISMAR-Adjunct54149.2021.00125
Nathan Wynn, K. Johnsen, Nick Gonzalez
In a collaboration with the Georgia Peanut Commission's Education Center and museum in Georgia, USA, we developed an augmented reality app to guide visitors through the museum and offer immersive educational information about the artifacts, exhibits, and artwork displayed therein. Notably, our augmented reality system applies the First Order Motion Model for Image Animation to several portraits of individuals influential to the Georgia peanut industry to provide immersive animated narration and monologue regarding their contributions to the peanut industry [4].
Citations: 3

A Nugget-Based Concept for Creating Augmented Reality
2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) Pub Date : 2021-10-01 DOI: 10.1109/ISMAR-Adjunct54149.2021.00051
Linda Rau, R. Horst, Yu Liu, R. Dörner
Creating Augmented Reality (AR) applications can be challenging, especially for persons with little or no technical background. This work introduces a concept for pattern-based AR applications that we call AR nuggets. One AR nugget reflects a single pattern from an application domain and includes placeholder objects and default parameters. Authors of AR applications can start with an AR nugget as an executable stand-alone application and customize it. This aims to support and facilitate the authoring process. Additionally, this paper identifies suitable application patterns that serve as a basis for AR nuggets. We implement and adapt AR nuggets to an exemplary use case in the medical domain. In an expert user study, we show that AR nuggets add a statistically significant value to an educational course and can support continuing education in the medical domain.
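The core idea, a runnable pattern with placeholder content that an author overrides rather than programs, can be sketched in a few lines. The class and field names below are invented for illustration and are not the authors' actual design:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ARNugget:
    """Toy sketch of an AR nugget: one application pattern bundled with
    placeholder content and default parameters that an author can swap
    out without writing code."""
    pattern: str
    placeholders: dict = field(default_factory=dict)

    def customize(self, **overrides):
        """Return a new nugget with some placeholders replaced."""
        return ARNugget(self.pattern, {**self.placeholders, **overrides})

# A stand-alone default nugget, then an author's customization of it.
default = ARNugget("annotated-3d-model",
                   {"model": "placeholder.obj", "label": "Sample object"})
course = default.customize(model="heart.obj", label="Human heart")
```

The point of the pattern is that `default` is already executable as-is, so authoring reduces to calling `customize` with domain content instead of building an AR scene from scratch.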
Citations: 2

Depth Perception using X-Ray Visualizations
2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) Pub Date : 2021-10-01 DOI: 10.1109/ISMAR-Adjunct54149.2021.00114
Thomas J. Clarke
Augmented reality's ability to create visual cues that extend reality opens up many new ways to enhance how we work, solve problems, and evaluate activities. Combining information from the digital and physical worlds requires a new understanding of how we perceive reality. The ability to look through physical objects without receiving conflicting depth cues (X-ray vision) is one challenge that remains an open research question. Prior research describes several methods for improving depth perception, such as adding occlusion cues through X-ray vision effects [4], [11], [12], [18], [23]. However, little is known about how and why these techniques work or what strengths each can offer. My research aims to develop a deeper understanding of X-ray vision effects and how they can and should be used.
Citations: 0

Depth Inpainting via Vision Transformer
2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) Pub Date : 2021-10-01 DOI: 10.1109/ISMAR-Adjunct54149.2021.00065
Ilya Makarov, Gleb Borisenko
Depth inpainting is a crucial task for augmented reality. In previous work, missing depth values have been completed by convolutional encoder-decoder networks, whose bottleneck limits quality. Recently, however, vision transformers have shown very good quality on a variety of computer vision tasks, and some have become state of the art. In this study, we present a supervised method for depth inpainting from RGB images and sparse depth maps via vision transformers. The proposed model was trained and evaluated on the NYUv2 dataset. Experiments show that a vision transformer with a restrictive convolutional tokenization model can improve the quality of the inpainted depth map.
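The model consumes RGB images together with sparse depth maps. A minimal sketch of the patch tokenization step a ViT front end might apply to such input is shown below; the patch size, channel layout, and zero-for-missing-depth convention are assumptions for illustration, not the paper's actual design:

```python
import numpy as np

def patch_tokens(rgb, sparse_depth, patch=16):
    """Flatten an RGB image (H x W x 3) plus a sparse depth map (H x W,
    with 0 marking missing values) into per-patch tokens of the kind a
    ViT-style inpainting model consumes. Returns (num_patches, patch*patch*4)."""
    h, w, _ = rgb.shape
    assert h % patch == 0 and w % patch == 0, "image must tile into patches"
    x = np.concatenate([rgb, sparse_depth[..., None]], axis=-1)  # H x W x 4
    x = x.reshape(h // patch, patch, w // patch, patch, 4)
    x = x.transpose(0, 2, 1, 3, 4)          # group by patch position
    return x.reshape(-1, patch * patch * 4)  # one flat token per patch

# Example: a 224x224 RGB-D frame yields a 14x14 grid of tokens.
tokens = patch_tokens(np.zeros((224, 224, 3)), np.zeros((224, 224)))
```

Each token would then be linearly embedded and fed through transformer blocks; the paper's "convolutional tokenization" replaces this naive flattening with learned convolutions, which is the part their experiments credit for the quality gain.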
Citations: 6