2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR): Latest Publications

AR Interfaces for Disocclusion—A Comparative Study
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) Pub Date: 2023-03-01 DOI: 10.1109/VR55154.2023.00068
Shuqi Liao, Yuqi Zhou, V. Popescu
Abstract: An important application of augmented reality (AR) is the design of interfaces that reveal parts of the real world to which the user does not have a line of sight. The design space for such interfaces is vast, with many options for integrating the visualization of the occluded parts of the scene into the user's main view. This paper compares four AR interfaces for disocclusion: X-ray, Cutaway, Picture-in-picture, and Multiperspective. The interfaces are compared in a within-subjects study (N = 33) over four tasks: counting dynamic spheres, pointing in the direction of an occluded person, finding the object closest to a given object, and finding pairs of matching numbers. The results show that Cutaway leads to poor performance in tasks where the user needs to see both the occluder and the occludee; that Picture-in-picture and Multiperspective have a visualization-comprehensiveness advantage over Cutaway and X-ray, but a disadvantage in terms of directional guidance; that X-ray has a task-completion-time disadvantage due to its visualization complexity; and that participants gave Cutaway and Picture-in-picture high usability scores, and Multiperspective and X-ray low ones.
Citations: 0
Extended Depth-of-Field Projector using Learned Diffractive Optics
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) Pub Date: 2023-03-01 DOI: 10.1109/VR55154.2023.00060
Yuqi Li, Q. Fu, W. Heidrich
Abstract: Projector depth of field (DOF) refers to the depth range over which projected images stay in focus. It is a crucial property of projectors in spatial augmented reality (SAR) applications, since a wide projector DOF increases the effective projection area on surfaces with large depth variance and thus reduces the number of projectors required. Existing state-of-the-art methods attempt to create all-in-focus displays by adopting either a deep deblurring network or light modulation. Unlike previous work that optimizes the deblurring model and the physical modulation separately, this paper proposes an end-to-end joint optimization method that learns a diffractive optical element (DOE) placed in front of the projector lens together with a compensation network for deblurring. Given the desired image and the captured projection result, the compensation network directly outputs the compensated image for display. We evaluate the proposed method in physical simulation and on a real experimental prototype, showing that it extends the projector DOF with only a minor modification to the projector and is thus superior to normal projection with its shallow DOF. The compensation method also compares favorably with state-of-the-art radiometric compensation methods in terms of computational efficiency and image quality.
Citations: 0
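The joint-optimization idea above, where a single loss trains both the optical element and the compensation network, can be illustrated with a toy differentiable pipeline. The following is a minimal sketch in PyTorch, with the wave-optics DOE simulation replaced by a learnable per-depth blur kernel; the class names (OpticsSim, CompensationNet) and all layer choices are illustrative assumptions, not the paper's code.

```python
# Toy end-to-end sketch: gradients flow through the simulated "optics" and the
# compensation network, so one loss optimizes both jointly. Hypothetical names.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OpticsSim(nn.Module):
    """Stand-in for the DOE: one learnable PSF per depth plane."""
    def __init__(self, num_depths=4, ksize=7):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_depths, 1, ksize, ksize))

    def forward(self, img, depth_idx):
        k = self.logits.shape[-1]
        # Softmax keeps the PSF non-negative and energy-preserving.
        psf = torch.softmax(self.logits[depth_idx].flatten(), 0).view(1, 1, k, k)
        weight = psf.expand(img.shape[1], 1, k, k).contiguous()
        return F.conv2d(img, weight, padding=k // 2, groups=img.shape[1])

class CompensationNet(nn.Module):
    """Small CNN that predicts the pre-compensated image to project."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, target):
        return self.net(target)

optics, comp = OpticsSim(), CompensationNet()
opt = torch.optim.Adam(list(optics.parameters()) + list(comp.parameters()), lr=1e-3)
target = torch.rand(1, 3, 64, 64)                 # desired on-surface image
for step in range(100):
    compensated = comp(target)                    # image sent to the projector
    observed = optics(compensated, depth_idx=2)   # defocus at one depth plane
    loss = F.mse_loss(observed, target)           # single end-to-end objective
    opt.zero_grad(); loss.backward(); opt.step()
```

The point of the sketch is only that one objective drives both components, which is what distinguishes joint optimization from tuning the optics and the deblurring model separately.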
A study of the influence of AR on the perception, comprehension and projection levels of situation awareness
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) Pub Date: 2023-03-01 DOI: 10.1109/VR55154.2023.00069
Camille Truong-Allié, Martin Herbeth, Alexis Paljic
Abstract: In this work, we examine how augmented reality (AR) impacts the user's situation awareness (SA) of elements secondary to an AR-assisted main task, i.e., elements not directly involved in that task. These secondary elements can still provide relevant information that we do not want the user to miss. A good understanding of the user's awareness of them is therefore valuable, especially in the context of daily AR use, where not all elements of the user's environment are controlled. We measured SA of secondary elements in an industrial workshop where the AR-assisted main task was pedestrian navigation. We compared SA across three navigation guidance conditions: a paper map, a virtual path, and a virtual path with virtual cues about secondary elements. These secondary elements were either hazardous areas, for example areas where helmets are mandatory, or items that could be on the user's path, for example misplaced carts, boxes… We adapted an existing SA evaluation method to a real-world environment. With this method, participants were queried about their SA of different items at three levels: perception, comprehension, and projection. We found that the use of AR decreased the user's SA of secondary elements and that this degradation occurs mainly at the perception level: with AR, participants are less likely to detect secondary elements. Participants nevertheless felt most secure with AR and virtual cues about secondary elements.
Citations: 0
Visualization and Graphics Technical Committee (VGTC) Statement
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) Pub Date: 2023-03-01 DOI: 10.1109/vr55154.2023.00007
Citations: 0
Simultaneous Scene-independent Camera Localization and Category-level Object Pose Estimation via Multi-level Feature Fusion
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) Pub Date: 2023-03-01 DOI: 10.1109/VR55154.2023.00041
Wang Junyi, Yue Qi
Abstract: In AR/MR applications, camera localization and object pose estimation both play crucial roles. Generalization, in the form of scene-independent localization and category-level pose estimation, presents challenges for both tasks. The two tasks are closely related through spatial geometry constraints, but their differing requirements lead to distinct feature extraction. In this paper, we focus on simultaneous scene-independent camera localization and category-level object pose estimation within a unified learning framework. The system consists of a localization branch called SLO-LocNet, a pose estimation branch called SLO-ObjNet, a feature fusion module for sharing features between the two tasks, and two decoders for creating coordinate maps. SLO-LocNet takes color and depth images as inputs and produces localization features for predicting the relative pose between two adjacent frames; an image fusion module promotes feature sharing between the depth and color branches. SLO-ObjNet takes the detected depth image and its corresponding point cloud as inputs and produces features for object pose estimation; a geometry fusion module combines depth and point cloud information, and the image fusion module is again exploited to share features between the two tasks. For the loss function, we present a mixed objective composed of relative camera pose, geometry constraint, and absolute and relative object pose terms. To evaluate the algorithm, we conduct experiments on both localization and pose estimation datasets, covering 7 Scenes, ScanNet, REAL275, and YCB-Video. All experiments demonstrate performance superior to existing methods. To demonstrate generalization, we additionally train the network on ScanNet and test it on 7 Scenes. The positive effects of the fusion modules and the loss function are also demonstrated.
Citations: 0
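As a rough illustration of the feature-sharing idea, a fusion block that mixes color-branch and depth-branch feature maps could look like the minimal PyTorch sketch below. The concatenate-then-1x1-convolution design and all layer sizes are assumptions for illustration, not the paper's actual module.

```python
# Hypothetical image fusion module: concatenates two feature maps of equal
# spatial size and mixes their channels with a 1x1 convolution.
import torch
import torch.nn as nn

class ImageFusionModule(nn.Module):
    def __init__(self, c_color, c_depth, c_out):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Conv2d(c_color + c_depth, c_out, kernel_size=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True))

    def forward(self, f_color, f_depth):
        return self.mix(torch.cat([f_color, f_depth], dim=1))

# The fused features would then feed both the localization and the object
# pose-estimation heads, which is how the two tasks share information.
fusion = ImageFusionModule(c_color=64, c_depth=64, c_out=128)
fused = fusion(torch.rand(1, 64, 60, 80), torch.rand(1, 64, 60, 80))
print(fused.shape)  # torch.Size([1, 128, 60, 80])
```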
Design and Development of a Mixed Reality Acupuncture Training System
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) Pub Date: 2023-03-01 DOI: 10.1109/VR55154.2023.00042
Qilei Sun, Jiayou Huang, Haodong Zhang, Paul Craig, Lingyun Yu, Eng Gee Lim
Abstract: This paper looks at how mixed reality can be used to improve and enhance Chinese acupuncture practice through the introduction of an acupuncture training simulator. A prototype system developed for our study allows practitioners to insert virtual needles with their bare hands into a full-scale 3D representation of the human body with labelled acupuncture points. This provides a safe and natural environment in which to develop acupuncture skills by simulating the actual physical process of acupuncture. It also helps practitioners develop their muscle memory for acupuncture and better memorize acupuncture points through a more immersive learning experience. We describe some of the design decisions and technical challenges overcome in the development of our system. We also present the results of a comparative user evaluation with potential users aimed at assessing the viability of such a mixed reality system as part of their training and development. The results of our evaluation reveal that the training system excelled at enhancing spatial understanding and improved learning and dexterity in acupuncture practice. These results go some way toward demonstrating the potential of mixed reality for improving practice in therapeutic medicine.
Citations: 0
CoboDeck: A Large-Scale Haptic VR System Using a Collaborative Mobile Robot
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) Pub Date: 2023-03-01 DOI: 10.1109/VR55154.2023.00045
Soroosh Mortezapoor, Khrystyna Vasylevska, Emanuel Vonach, H. Kaufmann
Abstract: We present CoboDeck, our proof-of-concept immersive virtual reality haptic system with free-walking support. It provides prop-based encountered-type haptic feedback with a mobile robotic platform. Intended for use as a design tool for architects, it enables the user to directly and intuitively interact with virtual objects such as walls, doors, or furniture. A collaborative robotic arm mounted on an omnidirectional mobile platform can present a physical prop that matches the position and orientation of a virtual counterpart anywhere in large virtual and real environments. We describe the concept, hardware, and software architecture of our system. Furthermore, we present the first behavioral algorithm tailored to the unique challenges of safe human-robot haptic interaction in VR, explicitly targeting availability and safety while the user is unaware of the robot and can change trajectory at any time. We explain the high-level state machine that controls the robot to follow the user closely and to escape rapidly as the situation requires. We present our technical evaluation. The results suggest that our chasing approach saves time and decreases travel distance, and thus battery usage, compared to more traditional approaches for mobile platforms that assume a fixed parking position between interactions. We also show that the robot can escape from the user and prevent a possible collision within a mean time of 1.62 s. Finally, we confirm the validity of our approach in a practical validation and discuss the potential of the proposed system.
Citations: 1
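The follow/escape behavior described in the abstract can be pictured as a small state machine. The Python sketch below is purely illustrative: the states, distance thresholds, and tick function are hypothetical stand-ins for the paper's controller, which also handles prop presentation and path planning.

```python
# Hypothetical one-tick controller for a follow/present/escape state machine.
from enum import Enum, auto

class RobotState(Enum):
    FOLLOW = auto()   # shadow the user, ready to present a prop nearby
    PRESENT = auto()  # hold the prop at the target pose for haptic contact
    ESCAPE = auto()   # retreat rapidly from an unexpectedly approaching user

SAFE_DIST = 1.2     # metres; assumed safety radius, not the paper's value
PRESENT_DIST = 0.8  # metres; assumed interaction distance

def next_state(user_dist: float, interaction_requested: bool) -> RobotState:
    if user_dist < SAFE_DIST and not interaction_requested:
        return RobotState.ESCAPE   # user changed trajectory toward the robot
    if interaction_requested and user_dist <= PRESENT_DIST:
        return RobotState.PRESENT
    return RobotState.FOLLOW

for dist, req in [(2.0, False), (0.9, True), (0.7, True), (1.0, False)]:
    print(dist, req, next_state(dist, req).name)
```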
How Do I Get There? Overcoming Reachability Limitations of Constrained Industrial Environments in Augmented Reality Applications
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) Pub Date: 2023-03-01 DOI: 10.1109/VR55154.2023.00027
Daniel Bambusek, Zdenek Materna, Michal Kapinus, V. Beran, P. Smrz
Abstract: The paper presents an approach for handheld augmented reality in constrained industrial environments, where it might be hard or even impossible to reach certain poses within a workspace. A user might therefore be unable to see or interact with some digital content in applications such as visual robot programming, robotic program visualization, or workspace annotation. To overcome this limitation, we propose temporarily switching to a non-immersive virtual reality view that allows the user to see the virtual counterpart of the workspace from any angle and distance. The viewpoint is controlled using a unique combination of on-screen controls complemented by the physical motion of the handheld device: the user positions the virtual camera roughly at the desired pose using the on-screen controls and then continues working just as in augmented reality. To explore how people would use the approach and what benefits it offers over pure augmented reality, we chose a representative object-alignment task and conducted a study. The results revealed that physical demands in particular, often a limiting factor for handheld augmented reality, can be reduced, and that the usability and utility of the approach are rated highly. In addition, suggestions for improving the user interface were proposed and discussed.
Citations: 0
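The described viewpoint control, coarse placement via on-screen controls with physical device motion layered on top, amounts to composing two poses. Below is a minimal sketch in Python that reduces the full 6-DOF case to planar position plus yaw; all names are illustrative, not the paper's API.

```python
# Compose an on-screen base pose with the handheld device's physical motion,
# expressing the device delta in the base pose's frame. Hypothetical types.
import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float
    y: float
    yaw: float  # radians

def virtual_camera(base: Pose2D, device_delta: Pose2D) -> Pose2D:
    c, s = math.cos(base.yaw), math.sin(base.yaw)
    return Pose2D(
        x=base.x + c * device_delta.x - s * device_delta.y,
        y=base.y + s * device_delta.x + c * device_delta.y,
        yaw=base.yaw + device_delta.yaw)

base = Pose2D(5.0, -2.0, math.pi / 2)   # placed coarsely with on-screen controls
delta = Pose2D(0.1, 0.0, 0.05)          # physical motion since entering VR mode
print(virtual_camera(base, delta))      # pose actually used to render the view
```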
SCP-SLAM: Accelerating DynaSLAM With Static Confidence Propagation
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) Pub Date: 2023-03-01 DOI: 10.1109/VR55154.2023.00066
Ming-Fei Yu, Lei Zhang, Wu-Fan Wang, Jiahui Wang
Abstract: DynaSLAM is a state-of-the-art visual simultaneous localization and mapping (SLAM) system for dynamic environments. It adopts a convolutional neural network (CNN) for moving-object detection but usually incurs a very high computational cost because it performs semantic segmentation with the CNN on every frame. This paper proposes SCP-SLAM, which accelerates DynaSLAM by running the CNN only on keyframes and propagating static confidence through the other frames in parallel. The proposed static confidence characterizes moving-object features by the residual of the inter-frame geometric transformation, which can be computed quickly. Our method combines the effectiveness of a CNN with the efficiency of static confidence in a tightly coupled manner. Extensive experiments on the publicly available TUM and Bonn RGB-D dynamic benchmark datasets demonstrate the efficacy of the method. Compared with DynaSLAM, it achieves a tenfold acceleration on average while retaining comparable localization accuracy.
Citations: 0
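One plausible reading of "static confidence from the inter-frame geometry residual" is sketched below: reproject features from the previous frame with the estimated camera motion and map the reprojection error to a [0, 1] confidence, so high-residual (likely moving) features are down-weighted. The pinhole projection and the Gaussian mapping are assumptions for illustration, not the paper's exact formulation.

```python
# Residual-based static confidence: small reprojection error => likely static.
import numpy as np

def reproject(K, T, pts3d):
    """Project 3D points (N, 3) through a 4x4 pose T and 3x3 intrinsics K."""
    pts_h = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    cam = (T @ pts_h.T).T[:, :3]
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

def static_confidence(K, T, pts3d_prev, uv_curr, sigma=2.0):
    residual = np.linalg.norm(reproject(K, T, pts3d_prev) - uv_curr, axis=1)
    return np.exp(-(residual / sigma) ** 2)  # pixels -> [0, 1] confidence

K = np.array([[525.0, 0, 319.5], [0, 525.0, 239.5], [0, 0, 1]])
T = np.eye(4)  # identity inter-frame motion for the toy example
pts3d = np.array([[0.0, 0.0, 2.0], [0.5, 0.1, 1.5]])
uv_obs = reproject(K, T, pts3d) + np.array([[0.3, 0.1], [8.0, 6.0]])
print(static_confidence(K, T, pts3d, uv_obs))  # ~[0.98, 0.00]: 2nd point moved
```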
Level-of-Detail AR: Dynamically Adjusting Augmented Reality Level of Detail Based on Visual Angle
2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR) Pub Date: 2023-03-01 DOI: 10.1109/VR55154.2023.00022
Abby Wysopal, Vivian Ross, Joyce E Passananti, K. Yu, Brandon Huynh, Tobias Höllerer
Abstract: Dynamically adjusting the content of augmented reality (AR) applications to efficiently display the information that best fits the available screen real estate may be important for user performance and satisfaction. Currently, there is no common practice for dynamically adjusting the content of AR applications based on its apparent size in the user's view of the surrounding environment. We present a Level-of-Detail AR mechanism that improves the usability of AR applications at any relative size. The mechanism dynamically renders textual and interactable content based on its legibility, interactability, and viewability, respectively. When tested, Level-of-Detail AR functioned as intended out of the box on 44 of the 45 standard user-interface Unity prefabs in Microsoft's Mixed Reality Toolkit. We additionally evaluated its impact on task performance, user distance, and subjective satisfaction through a mixed-design user study with 45 participants. Statistical analysis revealed significant task-dependent differences in user performance between the modes. User satisfaction was consistently higher in the Level-of-Detail AR condition.
Citations: 1
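The mechanism hinges on the visual angle an element subtends in the user's view. A minimal Python sketch of that computation and a threshold-based detail switch follows; the thresholds are chosen purely for illustration and do not reproduce the paper's legibility criteria.

```python
# Visual-angle-driven level of detail: measure the angle an element subtends
# and pick a rendering mode. Thresholds are illustrative assumptions.
import math

def visual_angle_deg(size_m: float, distance_m: float) -> float:
    """Angle subtended by an element of width size_m seen from distance_m."""
    return math.degrees(2 * math.atan2(size_m / 2, distance_m))

def lod_for(angle_deg: float) -> str:
    if angle_deg >= 10:   # large in view: full text and interactable controls
        return "full"
    if angle_deg >= 3:    # legible but cramped: headline text only
        return "summary"
    return "icon"         # too small to read: icon placeholder

for dist in (0.5, 2.0, 8.0):
    a = visual_angle_deg(0.3, dist)  # a 30 cm-wide panel
    print(f"{dist} m -> {a:.1f} deg -> {lod_for(a)}")
```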