Proceedings of the 2020 ACM Symposium on Spatial User Interaction: Latest Publications

Eye Gaze-based Object Rotation for Head-mounted Displays
Proceedings of the 2020 ACM Symposium on Spatial User Interaction. Pub Date: 2020-10-30. DOI: 10.1145/3385959.3418444
Chang Liu, J. Orlosky, Alexander Plopski
{"title":"Eye Gaze-based Object Rotation for Head-mounted Displays","authors":"Chang Liu, J. Orlosky, Alexander Plopski","doi":"10.1145/3385959.3418444","DOIUrl":"https://doi.org/10.1145/3385959.3418444","url":null,"abstract":"Hands-free manipulation of 3D objects has long been a challenge for augmented and virtual reality (AR/VR). While many methods use eye gaze to assist with hand-based manipulations, interfaces cannot yet provide completely gaze-based 6 degree-of-freedom (DoF) manipulations in an efficient manner. To address this problem, we implemented three methods to handle rotations of virtual objects using gaze, including RotBar: a method that maps line-of-sight eye gaze onto per-axis rotations, RotPlane: a method that makes use of orthogonal planes to achieve per-axis angular rotations, and RotBall: a method that combines a traditional arcball with an external ring to handle user-perspective roll manipulations. We validated the efficiency of each method by conducting a user study involving a series of orientation tasks along different axes with each method. Experimental results showed that users could accomplish single-axis orientation tasks with RotBar and RotPlane significantly faster and more accurate than RotBall. On the other hand for multi-axis orientation tasks, RotBall significantly outperformed RotBar and RotPlane in terms of speed and accuracy.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132030326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
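The paper ships no code; as a rough sketch of the RotBar idea (gaze position along a per-axis bar mapped to a rotation angle), the following Python fragment illustrates one plausible parameterization. The function name, the [0, 1] bar coordinate, and the full-turn mapping are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def rotbar_rotation(axis: str, gaze_hit_t: float) -> R:
    # Map a normalized gaze-hit position t in [0, 1] along a per-axis
    # bar to a rotation about that axis (hypothetical parameterization:
    # the full bar spans one 360-degree turn).
    angle_deg = 360.0 * float(np.clip(gaze_hit_t, 0.0, 1.0))
    return R.from_euler(axis, angle_deg, degrees=True)

# Example: gaze dwells a quarter of the way along the y-axis bar -> 90° yaw.
print(rotbar_rotation("y", 0.25).as_quat())  # quaternion [x, y, z, w]
```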
A First Pilot Study to Compare Virtual Group Meetings using Video Conferences and (Immersive) Virtual Reality
Proceedings of the 2020 ACM Symposium on Spatial User Interaction. Pub Date: 2020-10-30. DOI: 10.1145/3385959.3422699
Frank Steinicke, N. Lehmann-Willenbrock, A. Meinecke
{"title":"A First Pilot Study to Compare Virtual Group Meetings using Video Conferences and (Immersive) Virtual Reality","authors":"Frank Steinicke, N. Lehmann-Willenbrock, A. Meinecke","doi":"10.1145/3385959.3422699","DOIUrl":"https://doi.org/10.1145/3385959.3422699","url":null,"abstract":"Face-to-face communication has evolved as most natural means for communication. However, virtual group meetings have received considerable attention as an alternative for allowing multiple persons to communicate over distance, e. g., via video conferences or immersive virtual reality (VR) systems, but they incur numerous limitations and challenges. In particular, they often hinder spatial perception of full-body language, deictic relations, or eye-to-eye contact. The differences between video conferences and immersive VR meetings still remain poorly understood. We report about a pilot study in which we compared virtual group meetings using video conferences and VR meetings with and without head-mounted displays (HMDs). The results suggest that participants feel higher sense of presence when using an immersive VR meeting, but only if an HMD is used. Usability of video conferences as well as immersive VR is acceptable, whereas non-immersive VR without HMD was not acceptable.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126640217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 18
Towards a Specification Language for Spatial User Interaction
Proceedings of the 2020 ACM Symposium on Spatial User Interaction. Pub Date: 2020-10-30. DOI: 10.1145/3385959.3424278
Khadidja Chaoui, S. Bouzidi-Hassini, Y. Bellik
{"title":"Towards a Specification Language for Spatial User Interaction","authors":"Khadidja Chaoui, S. Bouzidi-Hassini, Y. Bellik","doi":"10.1145/3385959.3424278","DOIUrl":"https://doi.org/10.1145/3385959.3424278","url":null,"abstract":"Spatial interactions have a great potential in ubiquitous environments. Physical objects, endowed with interconnected sensors, cooperate in a transparent manner to help users in their daily tasks. In our context, we qualify an interaction as spatial if it results from considering spatial attributes (location, orientation, speed…) of the user's body or of a given object used by her/him. According to our literature review, we found that despite their benefits (simplicity, concision, naturalness…), spatial interactions are not as widespread as other interaction models such as graphical or tactile ones. We think that this fact is due to the lack of software tools and frameworks that can make the design and development of spatial interaction easy and fast. In this paper, we propose a spatial interaction modeling language named SUIL (Spatial User Interaction Language) which represents the first step towards the development of such tools.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115532287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
Substituting Teleportation Visualization for Collaborative Virtual Environments
Proceedings of the 2020 ACM Symposium on Spatial User Interaction. Pub Date: 2020-10-30. DOI: 10.1145/3385959.3422698
Santawat Thanyadit, Parinya Punpongsanon, Thammathip Piumsomboon, T. Pong
{"title":"Substituting Teleportation Visualization for Collaborative Virtual Environments","authors":"Santawat Thanyadit, Parinya Punpongsanon, Thammathip Piumsomboon, T. Pong","doi":"10.1145/3385959.3422698","DOIUrl":"https://doi.org/10.1145/3385959.3422698","url":null,"abstract":"Virtual Reality (VR) offers a boundless space for users to create, express, and explore in the absence of the limitation of the physical world. Teleportation is a locomotion technique in a virtual environment that overcomes our spatial constraint and a common approach for travel in VR applications. However, in a multi-user virtual environment, teleportation causes spatial discontinuity of user’s location in space. This may cause confusion and difficulty in tracking one’s collaborator who keeps disappearing and reappearing around the environment. To reduce the impact of such issue, we have identified the requirements for designing the substituted visualization (SV) and present four SVs of the collaborator during the process of teleportation, which includes hover, jump, fade, and portal.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115553007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
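Of the four SVs, fade is the simplest to make concrete: the collaborator's avatar is cross-faded out at the origin and in at the destination. A minimal sketch follows, assuming a normalized transition time and a symmetric split; both are illustrative choices, not details from the paper.

```python
def fade_alpha(t: float, half: float = 0.5) -> tuple[float, float]:
    """Cross-fade weights for a 'fade' substituted visualization:
    the avatar fades out at the origin during the first half of the
    transition and fades in at the destination during the second.
    t is normalized transition time in [0, 1]."""
    t = min(max(t, 0.0), 1.0)
    out_alpha = max(0.0, 1.0 - t / half)           # copy at the origin
    in_alpha = max(0.0, (t - half) / (1.0 - half))  # copy at the destination
    return out_alpha, in_alpha

# At t = 0.25 the origin avatar is half faded; the destination copy is still hidden.
print(fade_alpha(0.25))  # (0.5, 0.0)
```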
Temporal Manipulation Interface of Motion Data for Movement Observation in a Personal Training
Proceedings of the 2020 ACM Symposium on Spatial User Interaction. Pub Date: 2020-10-30. DOI: 10.1145/3385959.3422696
Natsuki Hamanishi, J. Rekimoto
{"title":"Temporal Manipulation Interface of Motion Data for Movement Observation in a Personal Training","authors":"Natsuki Hamanishi, J. Rekimoto","doi":"10.1145/3385959.3422696","DOIUrl":"https://doi.org/10.1145/3385959.3422696","url":null,"abstract":"In this paper, we propose a observation method to easily distinguish the temporal changes of three-dimensional (3D) motions and its temporal manipulation interface. Conventional motion observation methods have several limitations when observing 3D motion data. Direct Manipulation (DM) interface is suitable for observing the temporal features of videos. Besides, it is suitable for daily use because it does not require learning any special operations. Our aim is to introduce DM into 3D motion observation without losing these advantage by mapping temporal changes into the specific vector in 3D space in the real-space.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124577689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
Exploring the Limitations of Environment Lighting on Optical See-Through Head-Mounted Displays
Proceedings of the 2020 ACM Symposium on Spatial User Interaction. Pub Date: 2020-10-30. DOI: 10.1145/3385959.3418445
A. Erickson, Kangsoo Kim, G. Bruder, G. Welch
{"title":"Exploring the Limitations of Environment Lighting on Optical See-Through Head-Mounted Displays","authors":"A. Erickson, Kangsoo Kim, G. Bruder, G. Welch","doi":"10.1145/3385959.3418445","DOIUrl":"https://doi.org/10.1145/3385959.3418445","url":null,"abstract":"Due to the additive light model employed by most optical see-through head-mounted displays (OST-HMDs), they provide the best augmented reality (AR) views in dark environments, where the added AR light does not have to compete against existing real-world lighting. AR imagery displayed on such devices loses a significant amount of contrast in well-lit environments such as outdoors in direct sunlight. To compensate for this, OST-HMDs often use a tinted visor to reduce the amount of environment light that reaches the user’s eyes, which in turn results in a loss of contrast in the user’s physical environment. While these effects are well known and grounded in existing literature, formal measurements of the illuminance and contrast of modern OST-HMDs are currently missing. In this paper, we provide illuminance measurements for both the Microsoft HoloLens 1 and its successor the HoloLens 2 under varying environment lighting conditions ranging from 0 to 20,000 lux. We evaluate how environment lighting impacts the user by calculating contrast ratios between rendered black (transparent) and white imagery displayed under these conditions, and evaluate how the intensity of environment lighting is impacted by donning and using the HMD. Our results indicate the further need for refinement in the design of future OST-HMDs to optimize contrast in environments with illuminance values greater than or equal to those found in indoor working environments.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130572287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 35
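The contrast ratio the authors compute follows from the additive light model: rendered black is transparent, so any environment light passing the visor adds to both the white and black measurements. A small sketch of this standard model is given below; the numeric values are illustrative and are not measurements from the paper.

```python
def contrast_ratio(white_display_lux: float,
                   black_display_lux: float,
                   ambient_leak_lux: float) -> float:
    """Contrast between rendered white and rendered black on an
    additive OST-HMD. 'Black' is transparent on such displays, so
    ambient light leaking through the visor raises both terms.
    (Illustrative model; the paper's measurement protocol may differ.)"""
    return ((white_display_lux + ambient_leak_lux)
            / (black_display_lux + ambient_leak_lux))

# In the dark, contrast is high; under strong ambient light it collapses.
print(contrast_ratio(200.0, 0.0, 1.0))     # ~200:1 in a dark room
print(contrast_ratio(200.0, 0.0, 2000.0))  # ~1.1:1 in bright surroundings
```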
Evaluating Interaction Cue Purpose and Timing for Learning and Retaining Virtual Reality Training
Proceedings of the 2020 ACM Symposium on Spatial User Interaction. Pub Date: 2020-10-30. DOI: 10.1145/3385959.3418448
Xinyu Hu, Alec G. Moore, J. C. Eubanks, Afham Ahmed Aiyaz, Ryan P. McMahan
{"title":"Evaluating Interaction Cue Purpose and Timing for Learning and Retaining Virtual Reality Training","authors":"Xinyu Hu, Alec G. Moore, J. C. Eubanks, Afham Ahmed Aiyaz, Ryan P. McMahan","doi":"10.1145/3385959.3418448","DOIUrl":"https://doi.org/10.1145/3385959.3418448","url":null,"abstract":"Interaction cues inform users about potential actions to take. Tutorials, games, educational systems, and training applications often employ interaction cues to direct users to take specific actions at particular moments. Prior studies have investigated many aspects of interaction cues, such as the feedforward and perceived affordances that often accompany them. However, two less-researched aspects of interaction cues include the effects of their purpose (i.e., the type of task conveyed) and their timing (i.e., when they are presented). In this paper, we present a study that evaluates the effects of interaction cue purpose and timing on performance while learning and retaining tasks with a virtual reality (VR) training application. Our results indicate that participants retained manipulation tasks significantly better than travel or selection tasks, despite both being significantly easier to complete than the manipulation tasks. Our results also indicate that immediate interaction cues afforded significantly faster learning and better retention than delayed interaction cues.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131844028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
Extend, Push, Pull: Smartphone Mediated Interaction in Spatial Augmented Reality via Intuitive Mode Switching
Proceedings of the 2020 ACM Symposium on Spatial User Interaction. Pub Date: 2020-10-30. DOI: 10.1145/3385959.3418456
Jeremy J. Hartmann, Aakar Gupta, Daniel Vogel
{"title":"Extend, Push, Pull: Smartphone Mediated Interaction in Spatial Augmented Reality via Intuitive Mode Switching","authors":"Jeremy J. Hartmann, Aakar Gupta, Daniel Vogel","doi":"10.1145/3385959.3418456","DOIUrl":"https://doi.org/10.1145/3385959.3418456","url":null,"abstract":"We investigate how smartphones can be used to mediate the manipulation of smartphone-based content in spatial augmented reality (SAR). A major challenge here is in seamlessly transitioning a phone between its use as a smartphone to its use as a controller for SAR. Most users are familiar with hand extension as a way for using a remote control for SAR. We therefore propose to use hand extension as an intuitive mode switching mechanism for switching back and forth between the mobile interaction mode and the spatial interaction mode. Based on this intuitive mode switch, our technique enables the user to push smartphone content to an external SAR environment, interact with the external content, rotate-scale-translate it, and pull the content back into the smartphone, all the while ensuring no conflict between mobile interaction and spatial interaction. To ensure feasibility of hand extension as mode switch, we evaluate the classification of extended and retracted states of the smartphone based on the phone’s relative 3D position with respect to the user’s head while varying user postures, surface distances, and target locations. Our results show that a random forest classifier can classify the extended and retracted states with a 96% accuracy on average.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"131 1-2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123574388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
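The classifier itself is standard: a random forest over the phone's 3D position relative to the head. The sketch below reproduces that setup with scikit-learn on synthetic stand-in positions; the cluster centers and noise level are assumptions, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Features: the phone's 3D position relative to the user's head (meters).
# Synthetic stand-in data; the study used positions tracked across
# postures, surface distances, and target locations.
rng = np.random.default_rng(0)
retracted = rng.normal([0.0, -0.25, 0.15], 0.05, size=(200, 3))  # near the body
extended = rng.normal([0.0, -0.10, 0.45], 0.05, size=(200, 3))   # arm stretched out
X = np.vstack([retracted, extended])
y = np.array([0] * 200 + [1] * 200)  # 0 = retracted, 1 = extended

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[0.0, -0.12, 0.43]]))  # -> [1], i.e., extended
```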
Automatic Generation of Spatial Tactile Effects by Analyzing Cross-modality Features of a Video
Proceedings of the 2020 ACM Symposium on Spatial User Interaction. Pub Date: 2020-10-30. DOI: 10.1145/3385959.3418459
Kai Zhang, Lawrence H. Kim, Yipeng Guo, Sean Follmer
{"title":"Automatic Generation of Spatial Tactile Effects by Analyzing Cross-modality Features of a Video","authors":"Kai Zhang, Lawrence H. Kim, Yipeng Guo, Sean Follmer","doi":"10.1145/3385959.3418459","DOIUrl":"https://doi.org/10.1145/3385959.3418459","url":null,"abstract":"Tactile effects can enhance user experience of multimedia content. However, generating appropriate tactile stimuli without any human intervention remains a challenge. While visual or audio information has been used to automatically generate tactile effects, utilizing cross-modal information may further improve the spatiotemporal synchronization and user experience of the tactile effects. In this paper, we present a pipeline for automatic generation of vibrotactile effects through the extraction of both the visual and audio features from a video. Two neural network models are used to extract the diegetic audio content, and localize a sounding object in the scene. These models are then used to determine the spatial distribution and the intensity of the tactile effects. To evaluate the performance of our method, we conducted a user study to compare the videos with tactile effects generated by our method to both the original videos without any tactile stimuli and videos with tactile effects generated based on visual features only. The study results demonstrate that our cross-modal method creates tactile effects with better spatiotemporal synchronization than the existing visual-based method and provides a more immersive user experience.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126094608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
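The final stage of such a pipeline, mapping audio energy to vibration amplitude, can be sketched compactly; the neural networks that isolate diegetic audio and localize the sounding object are beyond a few lines. The frame length and normalization below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def tactile_intensity(audio: np.ndarray, sr: int, frame_ms: float = 50.0) -> np.ndarray:
    """Map short-time RMS energy of the (diegetic) audio track to a
    normalized vibrotactile amplitude per frame. Only the last stage
    of the pipeline is sketched here."""
    frame = int(sr * frame_ms / 1000)           # samples per analysis frame
    n = len(audio) // frame
    rms = np.sqrt((audio[: n * frame].reshape(n, frame) ** 2).mean(axis=1))
    return rms / (rms.max() + 1e-8)             # normalize to [0, 1]

# 1 s of a synthetic 100 Hz tone that grows louder -> rising intensity curve.
sr = 8000
t = np.linspace(0, 1, sr, endpoint=False)
audio = t * np.sin(2 * np.pi * 100 * t)
print(tactile_intensity(audio, sr).round(2))
```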
Mission Impossible Spaces: Using Challenge-Based Distractors to Reduce Noticeability of Self-Overlapping Virtual Architecture
Proceedings of the 2020 ACM Symposium on Spatial User Interaction. Pub Date: 2020-10-30. DOI: 10.1145/3385959.3418453
Claudiu-Bogdan Ciumedean, Cristian Patras, Mantas Cibulskis, Norbert Váradi, N. C. Nilsson
{"title":"Mission Impossible Spaces: Using Challenge-Based Distractors to Reduce Noticeability of Self-Overlapping Virtual Architecture","authors":"Claudiu-Bogdan Ciumedean, Cristian Patras, Mantas Cibulskis, Norbert Váradi, N. C. Nilsson","doi":"10.1145/3385959.3418453","DOIUrl":"https://doi.org/10.1145/3385959.3418453","url":null,"abstract":"Impossible spaces make it possible to maximize the area of virtual environments that can be explored on foot through self-overlapping virtual architecture. This paper details a study exploring how users’ ability to detect overlapping virtual architecture is affected when the virtual environment includes distractors that impose additional cognitive load by challenging the users. The results indicate that such distractors both increase self-reported task load and reduce users’ ability to reliably detect overlaps between adjacent virtual rooms. That is, rooms could overlap by up to 68% when distractors were presented, compared to 40% when no distractors were present.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117144006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 9