The Effect of Spatial Reference on Visual Attention and Workload during Viewpoint Guidance in Augmented Reality
Daniela Markov-Vetter, M. Luboschik, A. T. Islam, P. Gauger, O. Staadt
Proceedings of the 2020 ACM Symposium on Spatial User Interaction. DOI: 10.1145/3385959.3418449

Abstract: Given human capabilities for spatial orientation and navigation, the visualization used to support localization of off-screen targets inevitably influences visual-spatial processing, which relies on two frames of reference. It has not yet been established which frame of reference, egocentric or exocentric, contributes most to efficient viewpoint guidance in a head-mounted Augmented Reality environment. This gap can be attributed to the lack of objective assessment of the allocation of attention and the mental workload demanded by a guidance method. This paper presents a user study investigating the effect of egocentric and exocentric viewpoint guidance on visual attention and mental workload. In parallel to a localization task, participants completed a divided-attention task based on the oddball paradigm. During task fulfilment, heart rate variability was measured to determine the physiological stress level, and this objective assessment of mental workload was supplemented by subjective ratings using the NASA TLX. The results show that egocentric viewpoint guidance yields the most efficient target cueing in terms of faster localization, higher accuracy, and lower self-reported workload. In addition, egocentric target cueing causes a slight decrease in physiological stress and enables faster recognition of simultaneous events, although visual attention appeared to be covertly oriented.
Tangible VR: Traversing Space in XR to Grow a Virtual Butterfly
Jiaqi Zhang, B. L. Silva
Proceedings of the 2020 ACM Symposium on Spatial User Interaction. DOI: 10.1145/3385959.3421720

Abstract: Immersive reality technologies have been widely used in the area of cultural heritage, also known as Virtual Heritage. We present a tangible Virtual Reality (VR) interaction demo that allows users to walk freely in the physical space while engaging with digital and tangible objects in a "learning area". The space setup includes stations that are used symbiotically in the virtual and physical environments; this setup keeps the two environments consistent throughout the experience. With this method, we enhance the immersive learning experience by mapping the large virtual space onto a smaller physical space with seamless transitions.
RayGraphy: Aerial Volumetric Graphics Rendered Using Lasers in Fog
Wataru Yamada, H. Manabe, Daizo Ikeda, J. Rekimoto
Proceedings of the 2020 ACM Symposium on Spatial User Interaction. DOI: 10.1145/3385959.3418446

Abstract: We present RayGraphy, a display technology that renders volumetric graphics by superimposing the trajectories of light beams in an indoor space filled with fog. Because the traditional FogScreen approach requires shaping a thin layer of fog, it can only show two-dimensional images within a narrow range close to the fog-emitting nozzle. A method that renders volumetric graphics using plasma generated by a high-power laser has also been proposed, but its operation in public spaces is considered quite dangerous. The proposed system comprises dozens of laser projectors arranged in a circle around a fog-filled space and renders volumetric graphics in the fog by superimposing weak laser beams from the projectors. Compared with these conventional methods, the system is more scalable and safer because it employs weak laser beams and unshaped, innocuous fog. We aim to construct a new spatial augmented reality platform in which computer-generated images can be drawn directly in the real world. We implement a prototype consisting of 32 laser projectors and a fog machine. Moreover, we evaluate and discuss the system's performance and characteristics in experiments.
Augmented Unlocking Techniques for Smartphones Using Pre-Touch Information
Matthew Lakier, Dimcho Karakashev, Yixin Wang, I. Goldberg
Proceedings of the 2020 ACM Symposium on Spatial User Interaction. DOI: 10.1145/3385959.3418455

Abstract: Smartphones secure a significant amount of personal and private information and are playing an increasingly important role in people's lives. However, current techniques for manually authenticating to smartphones have failed in both not-so-surprising (shoulder surfing) and quite surprising (smudge attacks) ways. In this work, we propose a new technique called 3D Pattern. Our 3D Pattern technique takes advantage of pre-touch sensing, which could soon allow smartphones to sense a user's finger position at some distance from the screen. We describe and implement the technique, and evaluate it in a small pilot study (n=6) by comparing it to PIN and pattern locks. Our results show that although our prototype takes longer to authenticate, it is completely immune to smudge attacks and promises to be more resistant to shoulder surfing.