2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) — Latest Publications

SceneAR: Scene-based Micro Narratives for Sharing and Remixing in Augmented Reality
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) Pub Date: 2021-08-28 DOI: 10.1109/ismar52148.2021.00045
Mengyu Chen, A. Monroy-Hernández, Misha Sra
{"title":"SceneAR: Scene-based Micro Narratives for Sharing and Remixing in Augmented Reality","authors":"Mengyu Chen, A. Monroy-Hernández, Misha Sra","doi":"10.1109/ismar52148.2021.00045","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00045","url":null,"abstract":"Short-form digital storytelling has become a popular medium for millions of people to express themselves. Traditionally, this medium uses primarily 2D media such as text (e.g., memes), images (e.g., Instagram), GIFs (e.g., Giphy), and videos (e.g., TikTok, Snapchat). To expand the modalities from 2D to 3D media, we present SceneAR, a smartphone application for creating sequential scene-based micro narratives in augmented reality (AR). What sets SceneAR apart from prior work is its ability to share the scene-based stories as AR content. No longer limited to sharing images or videos, users can now experience narratives in their own physical environments. Additionally, SceneAR affords users the ability to remix AR content, empowering them to collectively build on others’ creations. We asked 18 people to use SceneAR in a three-day study, and based on user interviews, analyses of screen recordings, and the stories they created, we extracted three themes. From these themes and the study overall, we derived six strategies for designers interested in supporting short-form AR narratives.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132684497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Rotation-constrained optical see-through headset calibration with bare-hand alignment
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) Pub Date: 2021-08-24 DOI: 10.1109/ismar52148.2021.00041
Xue Hu, F. Baena, F. Cutolo
{"title":"Rotation-constrained optical see-through headset calibration with bare-hand alignment","authors":"Xue Hu, F. Baena, F. Cutolo","doi":"10.1109/ismar52148.2021.00041","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00041","url":null,"abstract":"The inaccessibility of user-perceived reality remains an open issue in pursuing the accurate calibration of optical see-through (OST) head-mounted displays (HMDs). Manual user alignment is usually required to collect a set of virtual-to-real correspondences, so that a default or an offline display calibration can be updated to account for the user’s eye position(s). Current alignment-based calibration procedures usually require point-wise alignments between rendered image point(s) and associated physical landmark(s) of a target calibration tool. As each alignment can only provide one or a few correspondences, repeated alignments are required to ensure calibration quality. This work presents an accurate and tool-less online OST calibration method to update an offline-calibrated eye-display model. The user’s bare hand is markerlessly tracked by a commercial RGBD camera anchored to the OST headset to generate a user-specific cursor for correspondence collection. The required alignment is object-wise, and can provide thousands of unordered corresponding points in tracked space. The collected correspondences are registered by a proposed rotation-constrained iterative closest point (rcICP) method to optimise the viewpoint-related calibration parameters. We implemented such a method for the Microsoft HoloLens 1. The resiliency of the proposed procedure to noisy data was evaluated through simulated tests and real experiments performed with an eye-replacement camera. According to the simulation test, the rcICP registration is robust against possible user-induced rotational misalignment. With a single alignment, our method achieves 8.81 arcmin (1.37 mm) positional error and 1. 76° rotational error by camera-based tests in the arm-reach distance, and 10.79 arcmin (7.71 pixels) reprojection error by user tests.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126498671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Classifying In-Place Gestures with End-to-End Point Cloud Learning
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) Pub Date: 2021-08-22 DOI: 10.1109/ismar52148.2021.00038
Lizhi Zhao, Xuequan Lu, Mingde Zhao, Meili Wang
{"title":"Classifying In-Place Gestures with End-to-End Point Cloud Learning","authors":"Lizhi Zhao, Xuequan Lu, Mingde Zhao, Meili Wang","doi":"10.1109/ismar52148.2021.00038","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00038","url":null,"abstract":"Walking in place for moving through virtual environments has attracted noticeable attention recently. Recent attempts focused on training a classifier to recognize certain patterns of gestures (e.g., standing, walking, etc) with the use of neural networks like CNN or LSTM. Nevertheless, they often consider very few types of gestures and/or induce less desired latency in virtual environments. In this paper, we propose a novel framework for accurate and efficient classification of in-place gestures. Our key idea is to treat several consecutive frames as a “point cloud”. The HMD and two VIVE trackers provide three points in each frame, with each point consisting of 12-dimensional features (i.e., three-dimensional position coordinates, velocity, rotation, angular velocity). We create a dataset consisting of 9 gesture classes for virtual in-place locomotion. In addition to the supervised point-based network, we also take unsupervised domain adaptation into account due to inter-person variations. To this end, we develop an end-to-end joint framework involving both a supervised loss for supervised point learning and an unsupervised loss for unsupervised domain adaptation. Experiments demonstrate that our approach generates very promising outcomes, in terms of high overall classification accuracy (95.0%) and real-time performance (192ms latency). We will release our dataset and source code to the community.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124623889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Using Trajectory Compression Rate to Predict Changes in Cybersickness in Virtual Reality Games
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) Pub Date: 2021-08-21 DOI: 10.1109/ismar52148.2021.00028
D. Monteiro, Hai-Ning Liang, Xiaohang Tang, Pourang Irani
{"title":"Using Trajectory Compression Rate to Predict Changes in Cybersickness in Virtual Reality Games","authors":"D. Monteiro, Hai-Ning Liang, Xiaohang Tang, Pourang Irani","doi":"10.1109/ismar52148.2021.00028","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00028","url":null,"abstract":"Identifying cybersickness in virtual reality (VR) applications such as games in a fast, precise, non-intrusive, and non-disruptive way remains challenging. Several factors can cause cybersickness, and their identification will help find its origins and prevent or minimize it. One such factor is virtual movement. Movement, whether physical or virtual, can be represented in different forms. One way to represent and store it is with a temporally annotated point sequence. Because a sequence is memory-consuming, it is often preferable to save it in a compressed form. Compression allows redundant data to be eliminated while still preserving changes in speed and direction. Since changes in direction and velocity in VR can be associated with cybersickness, changes in compression rate can likely indicate changes in cybersickness levels. In this research, we explore whether quantifying changes in virtual movement can be used to estimate variation in cybersickness levels of VR users. We investigate the correlation between changes in the compression rate of movement data in two VR games with changes in players’ cybersickness levels captured during gameplay. Our results show (1) a clear correlation between changes in compression rate and cybersickness, and (2) that a machine learning approach can be used to identify these changes. Finally, results from a second experiment show that our approach is feasible for cybersickness inference in games and other VR applications that involve movement.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132035916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Exploring Head-based Mode-Switching in Virtual Reality
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) Pub Date: 2021-08-12 DOI: 10.1109/ISMAR52148.2021.00026
Rongkai Shi, Nan Zhu, Hai-Ning Liang, Shengdong Zhao
{"title":"Exploring Head-based Mode-Switching in Virtual Reality","authors":"Rongkai Shi, Nan Zhu, Hai-Ning Liang, Shengdong Zhao","doi":"10.1109/ISMAR52148.2021.00026","DOIUrl":"https://doi.org/10.1109/ISMAR52148.2021.00026","url":null,"abstract":"Mode-switching supports multilevel operations using a limited number of input methods. In Virtual Reality (VR) head-mounted displays (HMD), common approaches for mode-switching use buttons, controllers, and users’ hands. However, they are inefficient and challenging to do with tasks that require both hands (e.g., when users need to use two hands during drawing operations). Using head gestures for mode-switching can be an efficient and cost-effective way, allowing for a more continuous and smooth transition between modes. In this paper, we explore the use of head gestures for mode-switching especially in scenarios when both users’ hands are performing tasks. We present a first user study that evaluated eight head gestures that could be suitable for VR HMD with a dual-hand line-drawing task. Results show that move forward, move backward, roll left, and roll right led to better performance and are preferred by participants. A second study integrating these four gestures in Tilt Brush, an open-source painting VR application, is conducted to further explore the applicability of these gestures and derive insights. Results show that Tilt Brush with head gestures allowed users to change modes with ease and led to improved interaction and user experience. The paper ends with a discussion on some design recommendations for using head-based mode-switching in VR HMD.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121054465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
PAVAL: Position-Aware Virtual Agent Locomotion for Assisted Virtual Reality Navigation
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) Pub Date: 2021-07-23 DOI: 10.1109/ismar52148.2021.00039
Z. Ye, Jun-Long Chen, Miao Wang, Yong-Liang Yang
{"title":"PAVAL: Position-Aware Virtual Agent Locomotion for Assisted Virtual Reality Navigation","authors":"Z. Ye, Jun-Long Chen, Miao Wang, Yong-Liang Yang","doi":"10.1109/ismar52148.2021.00039","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00039","url":null,"abstract":"Virtual agents are typical assistance tools for navigation and interaction in Virtual Reality (VR) tour, training, education, etc. It has been demonstrated that the gaits, gestures, gazes, and positions of virtual agents are major factors that affect the user’s perception and experience for seated and standing VR. In this paper, we present a novel position-aware virtual agent locomotion method, called PAVAL, that can perform virtual agent positioning (position+orientation) in real time for room-scale VR navigation assistance. We first analyze design guidelines for virtual agent locomotion and model the problem using the positions of the user and the surrounding virtual objects. Then we conduct a one-off preliminary study to collect subjective data and present a model for virtual agent positioning prediction with fixed user position. Based on the model, we propose an algorithm to optimize the object of interest, virtual agent position, and virtual agent orientation in sequence for virtual agent locomotion. As a result, during user navigation in a virtual scene, the virtual agent automatically moves in real time and introduces virtual object information to the user. We evaluate PAVAL and two alternative methods via a user study with humanoid virtual agents in various scenes, including virtual museum, factory, and school gym. The results reveal that our method is superior to the baseline condition.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"1284 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116488165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
TEyeD: Over 20 Million Real-World Eye Images with Pupil, Eyelid, and Iris 2D and 3D Segmentations, 2D and 3D Landmarks, 3D Eyeball, Gaze Vector, and Eye Movement Types
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) Pub Date: 2021-02-03 DOI: 10.1109/ismar52148.2021.00053
Wolfgang Fuhl, G. Kasneci, Enkelejda Kasneci
{"title":"TEyeD: Over 20 Million Real-World Eye Images with Pupil, Eyelid, and Iris 2D and 3D Segmentations, 2D and 3D Landmarks, 3D Eyeball, Gaze Vector, and Eye Movement Types","authors":"Wolfgang Fuhl, G. Kasneci, Enkelejda Kasneci","doi":"10.1109/ismar52148.2021.00053","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00053","url":null,"abstract":"We present TEyeD, the world’s largest unified public data set of eye images taken with head-mounted devices. TEyeD was acquired with seven different head-mounted eye trackers. Among them, two eye trackers were integrated into virtual reality (VR) or augmented reality (AR) devices. The images in TEyeD were obtained from various tasks, including car rides, simulator rides, outdoor sports activities, and daily indoor activities. The data set includes 2D&3D landmarks, semantic segmentation, 3D eyeball annotation and the gaze vector and eye movement types for all images. Landmarks and semantic segmentation are provided for the pupil, iris and eyelids. Video lengths vary from a few minutes to several hours. With more than 20 million carefully annotated images, TEyeD provides a unique, coherent resource and a valuable foundation for advancing research in the field of computer vision, eye tracking and gaze estimation in modern VR and AR applications. Data and code at DOWNLOAD LINK.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130367108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 36