2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR): Latest Publications

Edge-Guided Near-Eye Image Analysis for Head Mounted Displays
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | Pub Date: 2021-10-01 | DOI: 10.1109/ismar52148.2021.00015
Authors: Zhimin Wang, Yuxin Zhao, Yunfei Liu, Feng Lu
Abstract: Eye tracking provides an effective way for interaction in Augmented Reality (AR) Head Mounted Displays (HMDs). Current eye tracking techniques for AR HMDs require eye segmentation and ellipse fitting under near-infrared illumination. However, due to the low contrast between sclera and iris regions and unpredictable reflections, it is still challenging to accomplish accurate iris/pupil segmentation and the corresponding ellipse fitting tasks. In this paper, inspired by the fact that most essential information is encoded in the edge areas, we propose a novel near-eye image analysis method with edge maps as guidance. Specifically, we first utilize an Edge Extraction Network (E²-Net) to predict high-quality edge maps, which contain only eyelids and iris/pupil contours without other undesired edges. We then feed the edge maps into an Edge-Guided Segmentation and Fitting Network (ESF-Net) for accurate segmentation and ellipse fitting. Extensive experimental results demonstrate that our method outperforms current state-of-the-art methods in near-eye image segmentation and ellipse fitting tasks, on top of which we present eye-tracking applications for AR HMDs.
Citations: 3
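The abstract describes a two-stage pipeline: an edge network first isolates eyelid and iris/pupil contours, and a second network uses those edge maps to guide segmentation and ellipse fitting. Below is a minimal PyTorch sketch of that structure; the layer configurations, heads, and tensor sizes are illustrative assumptions, not the authors' architecture.

```python
# Sketch of the two-stage edge-guided pipeline from the abstract. Layer
# sizes, losses, and head designs are illustrative assumptions only.
import torch
import torch.nn as nn

class EdgeExtractionNet(nn.Module):
    """Stands in for E2-Net: predicts an edge map containing only
    eyelid and iris/pupil contours."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),  # per-pixel edge probability
        )
    def forward(self, x):
        return self.body(x)

class EdgeGuidedSegFitNet(nn.Module):
    """Stands in for ESF-Net: consumes image + edge map and outputs a
    segmentation map plus 5 ellipse parameters (cx, cy, a, b, theta)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(32, n_classes, 1)
        self.fit_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 5)
        )
    def forward(self, image, edge_map):
        feat = self.encoder(torch.cat([image, edge_map], dim=1))
        return self.seg_head(feat), self.fit_head(feat)

image = torch.randn(1, 1, 120, 160)                 # near-eye IR image
edge_map = EdgeExtractionNet()(image)               # stage 1: edges only
seg_logits, ellipse = EdgeGuidedSegFitNet()(image, edge_map)  # stage 2
```
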
CrowdXR - Pitfalls and Potentials of Experiments with Remote Participants
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | Pub Date: 2021-10-01 | DOI: 10.1109/ismar52148.2021.00062
Authors: Jiayan Zhao, Mark B. Simpson, P. Sajjadi, J. O. Wallgrün, Ping Li, M. Bagher, D. Oprean, Lace M. K. Padilla, A. Klippel
Abstract: Although the COVID-19 pandemic has made the need for remote data collection more apparent than ever, progress has been slow in the virtual reality (VR) research community, and little is known about the quality of the data acquired from crowdsourced participants who own a head-mounted display (HMD), which we call crowdXR. To investigate this problem, we report on a VR spatial cognition experiment that was conducted both in-lab and out-of-lab. The in-lab study was administered as a traditional experiment with undergraduate students and dedicated VR equipment. The out-of-lab study was carried out remotely by recruiting HMD owners from VR-related research mailing lists, VR subreddits in Reddit, and crowdsourcing platforms. Demographic comparisons show that our out-of-lab sample was older, included more males, and had a higher sense of direction than our in-lab sample. The results of the involved spatial memory tasks indicate that the reliability of the data from out-of-lab participants was as good as or better than their in-lab counterparts. Additionally, the data for testing our research hypotheses were comparable between in- and out-of-lab studies. We conclude that crowdsourcing is a feasible and effective alternative to the use of university participant pools for collecting survey and performance data for VR research, despite potential design issues that may affect the generalizability of study results. We discuss the implications and future directions of running VR studies outside the laboratory and provide a set of practical recommendations.
Citations: 6
Gaze Comes in Handy: Predicting and Preventing Erroneous Hand Actions in AR-Supported Manual Tasks
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | Pub Date: 2021-10-01 | DOI: 10.1109/ismar52148.2021.00031
Authors: Julian Wolf, Q. Lohmeyer, Christian Holz, M. Meboldt
Abstract: Emerging Augmented Reality headsets incorporate gaze and hand tracking and can, thus, observe the user's behavior without interfering with ongoing activities. In this paper, we analyze hand-eye coordination in real-time to predict hand actions during target selection and warn users of potential errors before they occur. In our first user study, we recorded 10 participants playing a memory card game, which involves frequent hand-eye coordination with little task-relevant information. We found that participants' gaze locked onto target cards 350 ms before the hands touched them in 73.3% of all cases, which coincided with the peak velocity of the hand moving to the target. Based on our findings, we then introduce a closed-loop support system that monitors the user's fingertip position to detect the first card turn and analyzes gaze, hand velocity and trajectory to predict the second card before it is turned by the user. In a second study with 12 participants, our support system correctly displayed color-coded visual alerts in a timely manner with an accuracy of 85.9%. The results indicate the high value of eye and hand tracking features for behavior prediction and provide a first step towards predictive real-time user support.
Citations: 9
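The reported hand-eye pattern (gaze settles on the target about 350 ms before touch, around the hand's peak velocity) suggests a simple predictive rule: if gaze has dwelled on one target and the hand is moving fast toward it, predict that target. A sketch under those assumptions follows; the thresholds, the dwell criterion, and the function name are hypothetical, not the paper's actual model.

```python
# Illustrative rule-based predictor inspired by the reported finding that
# gaze locks onto the target ~350 ms before touch, near peak hand velocity.
# Thresholds and the dwell criterion are assumptions, not the paper's model.
import numpy as np

def predict_target(gaze_xy, hand_xy, hand_v, targets, dwell_frames=10, v_frac=0.8):
    """Predict which target the hand will touch next.

    gaze_xy : (T, 2) recent gaze points on the task plane
    hand_xy : (T, 2) recent fingertip positions
    hand_v  : (T,)  hand speed per frame
    targets : (N, 2) target (e.g., card) centers
    Returns the index of the predicted target, or None.
    """
    # 1. Gaze must have dwelled on a single target for the last frames.
    d = np.linalg.norm(gaze_xy[-dwell_frames:, None, :] - targets[None], axis=-1)
    locked = np.argmin(d, axis=1)
    if not np.all(locked == locked[0]):
        return None
    # 2. Hand should be near peak velocity and heading toward that target.
    if hand_v[-1] < v_frac * hand_v.max():
        return None
    to_target = targets[locked[0]] - hand_xy[-1]
    heading = hand_xy[-1] - hand_xy[-2]
    cos = np.dot(to_target, heading) / (
        np.linalg.norm(to_target) * np.linalg.norm(heading) + 1e-9)
    return int(locked[0]) if cos > 0.7 else None
```
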
AlterEcho: Loose Avatar-Streamer Coupling for Expressive VTubing
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | Pub Date: 2021-10-01 | DOI: 10.1109/ismar52148.2021.00027
Authors: Man To Tang, Victor Long Zhu, V. Popescu
Abstract: VTubers are live streamers who embody computer animation virtual avatars. VTubing is a rapidly rising form of online entertainment in East Asia, most notably in Japan and China, and it has been more recently introduced in the West. However, animating an expressive VTuber avatar remains a challenge due to budget and usability limitations of current solutions, i.e., high-fidelity motion capture is expensive, while keyboard-based VTubing interfaces impose a cognitive burden on the streamer. This paper proposes a novel approach for VTubing animation based on the key principle of loosening the coupling between the VTuber and their avatar, and it describes a first implementation of the approach in the AlterEcho VTubing animation system. AlterEcho generates expressive VTuber avatar animation automatically, without the streamer's explicit intervention; it breaks the strict tethering of the avatar to the streamer, allowing the avatar's nonverbal behavior to deviate from that of the streamer. Without the complete independence of a true alter ego, but also without the constraint of mirroring the streamer with the fidelity of an echo, AlterEcho produces avatar animations that have been rated significantly higher by VTubers and viewers (N = 315) compared to animations created using simple motion capture, or using VMagicMirror, a state-of-the-art keyboard-based VTubing system. Our work also opens the door to personalizing the avatar persona for individual viewers.
Citations: 10
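The core idea, loosening the avatar-streamer coupling, can be pictured as blending tracked motion with autonomously generated nonverbal behavior instead of mirroring the streamer 1:1. The sketch below illustrates that principle only; the blending scheme, joint representation, and behavior set are assumptions, and AlterEcho's actual behavior model is certainly richer.

```python
# Sketch of "loose coupling": the avatar pose is a blend of tracked streamer
# motion and autonomous nonverbal behavior, not a 1:1 mirror. The per-joint
# linear blend is an assumption for illustration, not AlterEcho's method.
import random

def avatar_pose(mocap_pose, idle_behaviors, coupling=0.7):
    """coupling=1.0 -> pure echo (mirror the streamer);
    coupling=0.0 -> pure alter ego (fully autonomous)."""
    autonomous = random.choice(idle_behaviors)()  # e.g., a glance or head tilt
    return {joint: coupling * angle
                   + (1 - coupling) * autonomous.get(joint, angle)
            for joint, angle in mocap_pose.items()}

# Hypothetical usage: joint angles in degrees, two canned idle behaviors.
idle = [lambda: {"head_yaw": 15.0}, lambda: {"head_yaw": -10.0, "gaze_pitch": 5.0}]
print(avatar_pose({"head_yaw": 0.0, "gaze_pitch": 0.0}, idle, coupling=0.7))
```
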
Safety, Power Imbalances, Ethics and Proxy Sex: Surveying In-The-Wild Interactions Between VR Users and Bystanders
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | Pub Date: 2021-10-01 | DOI: 10.1109/ismar52148.2021.00036
Authors: Joseph O'Hagan, J. Williamson, Mark Mcgill, M. Khamis
Abstract: VR users and bystanders must sometimes interact, but our understanding of these interactions - their purpose, how they are accomplished, attitudes toward them, and where they break down - is limited. This gap inhibits research into managing or supporting these interactions, and preventing unwanted or abusive activity. We present the results of the first survey (N=100) that investigates stories of actual emergent in-the-wild interactions between VR users and bystanders. Our analysis indicates VR user and bystander interactions can be categorised into one of three categories: coexisting, demoing, and interrupting. We highlight common interaction patterns and impediments encountered during these interactions. Bystanders play an important role in moderating the VR user's experience, for example intervening to save the VR user from potential harm. However, our stories also suggest that the occlusive nature of VR introduces the potential for bystanders to exploit the vulnerable state of the VR user; and for the VR user to exploit the bystander for enhanced immersion, introducing significant ethical concerns.
Citations: 24
Scan&Paint: Image-based Projection Painting
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | Pub Date: 2021-10-01 | DOI: 10.1109/ismar52148.2021.00069
Authors: V. Klein, Markus Leuschner, Tobias Langen, Philipp Kurth, M. Stamminger, F. Bauer
Abstract: We present a pop-up projection painting system that projects onto an unknown three-dimensional surface, while the user creates the projection content on the fly. The digital paint is projected immediately and follows the object if it is moved. If unexplored surface areas are thereby exposed, an automated trigger system issues new depth recordings that expand and refine the surface estimate. By intertwining scanning and projection painting we scan the exposed surface at the appropriate time and only if needed. Like image-based rendering, multiple automatically recorded depth maps are fused in screen space to synthesize novel views of the object, making projection poses independent from the scan positions. Since the user's digital paint is also stored in images, we eliminate the need to reconstruct and parametrize a single full mesh, which makes geometry and color updates simple and fast.
Citations: 1
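The fusion step described in the abstract, combining multiple automatically recorded depth maps in screen space to synthesize the current view, can be sketched as a z-buffer-style reprojection: lift each recorded depth map to 3D, transform it into the current view, and keep the nearest surface per pixel. The camera conventions below (camera-to-world poses, one shared intrinsic matrix, point splatting without hole filling) are simplifying assumptions, not the paper's renderer.

```python
# Minimal sketch of screen-space fusion of recorded depth maps into the
# current (projector) view, z-buffer style. Poses are assumed camera-to-world
# 4x4 matrices sharing one intrinsic matrix K; splatting is unfiltered.
import numpy as np

def unproject(depth, K_inv):
    """Lift a depth map (H, W) to camera-space points (H*W, 3)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    return (pix @ K_inv.T) * depth.reshape(-1, 1)

def fuse_into_view(depth_maps, poses, K, K_inv, view_pose, shape):
    """Render a fused depth buffer for the current view."""
    H, W = shape
    zbuf = np.full((H, W), np.inf)
    for depth, pose in zip(depth_maps, poses):
        pts = unproject(depth, K_inv)                      # scan camera space
        pts_h = np.c_[pts, np.ones(len(pts))]
        pts_view = (np.linalg.inv(view_pose) @ pose @ pts_h.T).T[:, :3]
        z = pts_view[:, 2]
        ok = z > 1e-6                                      # in front of camera
        pix = (pts_view[ok] / z[ok, None]) @ K.T
        u, v = pix[:, 0].astype(int), pix[:, 1].astype(int)
        inside = (0 <= u) & (u < W) & (0 <= v) & (v < H)
        u, v, zv = u[inside], v[inside], z[ok][inside]
        np.minimum.at(zbuf, (v, u), zv)   # keep the nearest surface per pixel
    return zbuf
```
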
Mirror, Mirror on My Phone: Investigating Dimensions of Self-Face Perception Induced by Augmented Reality Filters
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | Pub Date: 2021-10-01 | DOI: 10.1109/ismar52148.2021.00064
Authors: Rebecca Fribourg, Etienne Peillard, R. Mcdonnell
Abstract: The main use of Augmented Reality (AR) today for the general public is in applications for smartphones. In particular, social network applications allow the use of many AR filters, modifying users' environments but also their own image. These AR filters are increasingly used and can distort users' facial traits in many ways. Yet, we still do not know clearly how users perceive their faces augmented by these filters. In this paper, we present a study that aims to evaluate the impact of different filters, modifying several facial features such as the size or position of the eyes, the shape of the face or the orientation of the eyebrows, or adding virtual content such as virtual glasses. These filters are evaluated via a self-evaluation questionnaire, asking the participants about the personality, emotion, appeal and intelligence traits that their distorted face conveys. Our results show relative effects between the different filters in line with previous results regarding the perception of others. However, they also reveal specific effects on self-perception, showing, inter alia, that facial deformation decreases participants' trust in their own image. The findings of this multi-factor study highlight the impact of facial deformation on user perception as well as the specificities of its use in AR, paving the way for new work focusing on the psychological impact of such filters.
Citations: 10
[Title page i]
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | Pub Date: 2021-10-01 | DOI: 10.1109/ismar52148.2021.00001
Citations: 0
BDLoc: Global Localization from 2.5D Building Map
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | Pub Date: 2021-10-01 | DOI: 10.1109/ismar52148.2021.00022
Authors: Hai Li, Tianxing Fan, Hongjia Zhai, Zhaopeng Cui, H. Bao, Guofeng Zhang
Abstract: Robust and accurate global 6DoF localization is essential for many applications, e.g., augmented reality and autonomous driving. Most existing 6DoF visual localization approaches need to build a dense texture model in advance, which is computationally expensive and almost infeasible at a global scale. In this work, we propose BDLoc, a hierarchical global localization framework via the 2.5D building map, which is able to estimate the accurate pose of the query street-view image without using a detailed dense 3D model or texture information. Specifically, we first extract 3D building information from the street-view image and the surrounding 2.5D building map, and then solve a coarse relative pose by local-to-global registration. To improve the feature extraction, we propose a novel SPG-Net which is able to capture both local and global features. Finally, an iterative semantic alignment is applied to obtain a finer result with differentiable rendering and a cross-view semantic constraint. Apart from coarse longitude and latitude from GPS, BDLoc does not need any additional information, such as altitude and orientation, that many previous works require. We also create a large dataset to explore the performance of the 2.5D map-based localization task. Extensive experiments demonstrate the superior performance of our method.
Citations: 2
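The coarse stage, registering building structure extracted from the image against the surrounding 2.5D map, can be imagined as scoring candidate poses around the GPS prior by how well map-rendered building outlines match image-detected ones. The sketch below uses a plain IoU grid search as a stand-in; the paper's SPG-Net features and differentiable semantic refinement are not reproduced, and both function arguments are placeholders.

```python
# Illustrative coarse-pose search in the spirit of local-to-global
# registration against a 2.5D building map. The outline extraction and map
# rendering are placeholder callables, not BDLoc's actual components.
import numpy as np

def coarse_localize(image_outline, map_render_fn, candidates):
    """image_outline : (H, W) boolean building-outline mask from the image
    map_render_fn  : pose -> (H, W) outline mask rendered from the 2.5D map
    candidates     : iterable of (x, y, yaw) poses, e.g., a grid around GPS
    Returns the candidate whose rendered outline overlaps best (IoU)."""
    def iou(a, b):
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 0.0
    return max(candidates, key=lambda p: iou(image_outline, map_render_fn(p)))
```
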
Excite-O-Meter: Software Framework to Integrate Heart Activity in Virtual Reality
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | Pub Date: 2021-10-01 | DOI: 10.1109/ismar52148.2021.00052
Authors: Luis Quintero, J. Muñoz, Jeroen De Mooij, Michael Gaebler
Abstract: Bodily signals can complement subjective and behavioral measures to analyze human factors, such as user engagement or stress, when interacting with virtual reality (VR) environments. Enabling the widespread use (and real-time analysis) of bodily signals in VR applications could be a powerful method for designing more user-centric, personalized VR experiences. However, technical and scientific challenges (e.g., the cost of research-grade sensing devices, the coding skills required, and the expert knowledge needed to interpret the data) complicate the integration of bodily data into existing interactive applications. This paper presents the design, development, and evaluation of an open-source software framework named Excite-O-Meter. It allows existing VR applications to integrate, record, analyze, and visualize bodily signals from wearable sensors, demonstrated with cardiac activity (heart rate and its variability) from the Polar H10 chest strap. Survey responses from 58 potential users determined the design requirements for the framework. Two tests evaluated the framework and setup in terms of data acquisition/analysis and data quality. Finally, we present an example experiment that shows how our tool can be an easy-to-use and scientifically validated tool for researchers, hobbyists, or game designers to integrate bodily signals in VR applications.
Citations: 9
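For a sense of what "heart rate and its variability" means computationally: both can be derived from the RR intervals a chest strap like the Polar H10 streams. The snippet below computes mean heart rate and RMSSD, a standard short-term HRV measure; the function names are illustrative and not part of the Excite-O-Meter API.

```python
# Standard cardiac features from RR intervals (ms), the kind of signal a
# Polar H10 provides. Illustrative helpers, not the Excite-O-Meter API.
import numpy as np

def heart_rate_bpm(rr_ms):
    """Mean heart rate (beats per minute) from RR intervals in ms."""
    rr = np.asarray(rr_ms, dtype=float)
    return 60000.0 / rr.mean()

def rmssd(rr_ms):
    """Root mean square of successive RR differences (ms), a common
    short-term heart rate variability measure."""
    rr = np.asarray(rr_ms, dtype=float)
    return float(np.sqrt(np.mean(np.diff(rr) ** 2)))

rr = [812, 798, 830, 845, 790, 805]   # example RR intervals (ms)
print(heart_rate_bpm(rr), rmssd(rr))
```
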