Proceedings of the 2020 ACM Symposium on Spatial User Interaction: Latest Publications

The Effect of Spatial Reference on Visual Attention and Workload during Viewpoint Guidance in Augmented Reality
Pub Date: 2020-10-30 | DOI: 10.1145/3385959.3418449
Daniela Markov-Vetter, M. Luboschik, A. T. Islam, P. Gauger, O. Staadt
{"title":"The Effect of Spatial Reference on Visual Attention and Workload during Viewpoint Guidance in Augmented Reality","authors":"Daniela Markov-Vetter, M. Luboschik, A. T. Islam, P. Gauger, O. Staadt","doi":"10.1145/3385959.3418449","DOIUrl":"https://doi.org/10.1145/3385959.3418449","url":null,"abstract":"Considering human capability for spatial orientation and navigation, the visualization used to support the localization of off-screen targets inevitably influences the visual-spatial processing that relies on two frameworks. So far it is not proven which frame of reference, egocentric or exocentric, contributes most to efficient viewpoint guidance in a head-mounted Augmented Reality environment. This could be justified by the lack of objectively assessing the allocation of attention and mental workload demanded by the guidance method. This paper presents a user study investigating the effect of egocentric and exocentric viewpoint guidance on visual attention and mental workload. In parallel to a localization task, participants had to complete a divided attention task using the oddball paradigm. During task fulfilment, the heart rate variability was measured to determine the physiological stress level. The objective assessment of mental workload was supplemented by subjective ratings using the NASA TLX. The results show that egocentric viewpoint guidance leads to most efficient target cueing in terms of faster localization, higher accuracy and slower self-reported workload. In addition, egocentric target cueing causes a slight decrease in physiological stress and enables faster recognition of simultaneous events, although visual attention seemed to be covertly oriented.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131646727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Tangible VR: Traversing Space in XR to Grow a Virtual Butterfly
Pub Date: 2020-10-30 | DOI: 10.1145/3385959.3421720
Jiaqi Zhang, B. L. Silva
{"title":"Tangible VR: Traversing Space in XR to Grow a Virtual Butterfly","authors":"Jiaqi Zhang, B. L. Silva","doi":"10.1145/3385959.3421720","DOIUrl":"https://doi.org/10.1145/3385959.3421720","url":null,"abstract":"Immersive reality technologies have been widely utilized in the area of cultural heritage, also known as Virtual Heritage. We present a tangible Virtual Reality (VR) interaction demo that allows users to freely walk in the physical space while engaging with digital and tangible objects in a “learning area”. The space setup includes stations that are used symbiotically in the virtual and physical environments, such setup defines consistency throughout the experience. With this method, we enhance the immersive learning experience by mapping the large virtual space into a smaller physical place with a seamless transition.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129319904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
RayGraphy: Aerial Volumetric Graphics Rendered Using Lasers in Fog
Pub Date: 2020-10-30 | DOI: 10.1145/3385959.3418446
Wataru Yamada, H. Manabe, Daizo Ikeda, J. Rekimoto
{"title":"RayGraphy: Aerial Volumetric Graphics Rendered Using Lasers in Fog","authors":"Wataru Yamada, H. Manabe, Daizo Ikeda, J. Rekimoto","doi":"10.1145/3385959.3418446","DOIUrl":"https://doi.org/10.1145/3385959.3418446","url":null,"abstract":"We present RayGraphy display technology that renders volumetric graphics by superimposing the trajectories of lights in indoor space filled with fog. Since the traditional FogScreen approach requires the shaping of a thin layer of fog, it can only show two-dimensional images in a narrow range that is close to the fog-emitting nozzle. Although a method that renders volumetric graphics with plasma generated using high-power laser was also proposed, its operation in a public space is considered quite dangerous. The proposed system mainly comprises dozens of laser projectors circularly arranged in a fog-filled space, and renders volumetric graphics in a fog by superimposing weak laser beams from the projectors. Compared to the conventional methods, this system employing weak laser beams and the non-shaped innocuous fog is more scalable and safer. We aim to construct a new spatial augmented reality platform where computer-generated images can be drawn directly in the real world. We implement a prototype that consists of 32 laser projectors and a fog machine. Moreover, we evaluate and discuss the system performance and characteristics in experiments.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129834757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Augmented Unlocking Techniques for Smartphones Using Pre-Touch Information
Pub Date: 2019-08-24 | DOI: 10.1145/3385959.3418455
Matthew Lakier, Dimcho Karakashev, Yixin Wang, I. Goldberg
{"title":"Augmented Unlocking Techniques for Smartphones Using Pre-Touch Information","authors":"Matthew Lakier, Dimcho Karakashev, Yixin Wang, I. Goldberg","doi":"10.1145/3385959.3418455","DOIUrl":"https://doi.org/10.1145/3385959.3418455","url":null,"abstract":"Smartphones secure a significant amount of personal and private information, and are playing an increasingly important role in people’s lives. However, current techniques to manually authenticate to smartphones have failed in both not-so-surprising (shoulder surfing) and quite surprising (smudge attacks) ways. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of pre-touch sensing, which could soon allow smartphones to sense a user’s finger position at some distance from the screen. We describe and implement the technique, and evaluate it in a small pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes longer to authenticate, it is completely immune to smudge attacks and promises to be more resistant to shoulder surfing.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115329981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1