Proceedings of the 2020 ACM Symposium on Spatial User Interaction: Latest Publications

Methods for Evaluating Depth Perception in a Large-Screen Immersive Display
Pub Date: 2020-10-30 | DOI: 10.1145/3385959.3418447
Dylan Gaines, S. Kuhl
{"title":"Methods for Evaluating Depth Perception in a Large-Screen Immersive Display","authors":"Dylan Gaines, S. Kuhl","doi":"10.1145/3385959.3418447","DOIUrl":"https://doi.org/10.1145/3385959.3418447","url":null,"abstract":"We perform an experiment on distance perception in a large-screen display immersive virtual environment. Large-screen displays typically make direct blind walking tasks impossible, despite them being a popular distance response measure in the real world and in head-mounted displays. We use a movable large-screen display to compare direct blind walking and indirect triangulated pointing with monoscopic viewing. We find that participants judged distances to be 89.4% ± 28.7% and 108.5% ± 44.9% of their actual distances in the direct blind walking and triangulated pointing conditions, respectively. However, we find no statistically significant difference between these approaches. This work adds to the limited number of research studies on egocentric distance judgments with a large display wall for distances of 3-5 meters. It is the first, to our knowledge, to perform direct blind walking with a large display.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"229 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114990042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
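The judged-versus-actual distance percentages reported in this abstract (mean ± standard deviation across participants) can be summarized with a short script. A minimal sketch with illustrative trial data, not the study's data; the function names are hypothetical:

```python
# Summarize distance judgments as a percentage of actual distance,
# in the style of the study's reported 89.4% +/- 28.7% figures.
# The sample trials below are illustrative only.
import statistics

def judgment_ratios(judged, actual):
    """Each judged distance as a percentage of the actual distance."""
    return [100.0 * j / a for j, a in zip(judged, actual)]

def summarize(ratios):
    """Mean and sample standard deviation of the percentage ratios."""
    return statistics.mean(ratios), statistics.stdev(ratios)

# Illustrative trials: actual target distances (m) and walked responses (m).
actual = [3.0, 4.0, 5.0]
judged = [2.7, 3.5, 4.5]
ratios = judgment_ratios(judged, actual)
mean, sd = summarize(ratios)
print(f"{mean:.1f}% of actual distance (SD {sd:.1f}%)")
```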
Rotational Self-motion Cues Improve Spatial Learning when Teleporting in Virtual Environments
Pub Date: 2020-10-30 | DOI: 10.1145/3385959.3418443
A. Lim, Jonathan W. Kelly, Nathan C. Sepich, L. Cherep, Grace C. Freed, Stephen B Gilbert
{"title":"Rotational Self-motion Cues Improve Spatial Learning when Teleporting in Virtual Environments","authors":"A. Lim, Jonathan W. Kelly, Nathan C. Sepich, L. Cherep, Grace C. Freed, Stephen B Gilbert","doi":"10.1145/3385959.3418443","DOIUrl":"https://doi.org/10.1145/3385959.3418443","url":null,"abstract":"Teleporting interfaces are widely used in virtual reality applications to explore large virtual environments. When teleporting, the user indicates the intended location in the virtual environment and is instantly transported, typically without self-motion cues. This project explored the cost of teleporting on the acquisition of survey knowledge (i.e., a ”cognitive map”). Two teleporting interfaces were compared, one with and one without visual and body-based rotational self-motion cues. Both interfaces lacked translational self-motion cues. Participants used one of the two teleporting interfaces to find and study the locations of six objects scattered throughout a large virtual environment. After learning, participants completed two measures of cognitive map fidelity: an object-to-object pointing task and a map drawing task. The results indicate superior spatial learning when rotational self-motion cues were available. 
Therefore, virtual reality developers should strongly consider the benefits of rotational self-motion cues when creating and choosing locomotion interfaces.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117270613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Being Part of the Swarm: Experiencing Human-Swarm Interaction with VR and Tangible Robots
Pub Date: 2020-10-30 | DOI: 10.1145/3385959.3422695
Hala Khodr, Ulysse Ramage, K. G. Kim, A. Ozgur, Barbara Bruno, P. Dillenbourg
{"title":"Being Part of the Swarm: Experiencing Human-Swarm Interaction with VR and Tangible Robots","authors":"Hala Khodr, Ulysse Ramage, K. G. Kim, A. Ozgur, Barbara Bruno, P. Dillenbourg","doi":"10.1145/3385959.3422695","DOIUrl":"https://doi.org/10.1145/3385959.3422695","url":null,"abstract":"A swarm is the coherent behavior that emerges ubiquitously from simple interaction rules between self-organized agents. Understanding swarms is of utmost importance in many disciplines and jobs, but hard to teach due to the elusive nature of the phenomenon, which requires to observe events at different scales (i.e., from different perspectives) and to understand the links between them. In this article, we investigate the potential of combining a swarm of tangible, haptic-enabled robots with Virtual Reality, to provide a user with multiple perspectives and interaction modalities on the swarm, ultimately aiming at supporting the learning of emergent behaviours. The framework we developed relies on Cellulo robots and Oculus Quest and was preliminarily evaluated in a user study involving 15 participants. Results suggests that the framework effectively allows users to experience the interaction with the swarm under different perspectives and modalities.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130247897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
BUDI: Building Urban Designs Interactively Can Spatial-Based Collaboration be Seamless?
Pub Date: 2020-10-30 | DOI: 10.1145/3385959.3422703
Xi Sun, Tianming Wei, Matthew Plaudis, Y. Coady
{"title":"BUDI: Building Urban Designs Interactively Can Spatial-Based Collaboration be Seamless?","authors":"Xi Sun, Tianming Wei, Matthew Plaudis, Y. Coady","doi":"10.1145/3385959.3422703","DOIUrl":"https://doi.org/10.1145/3385959.3422703","url":null,"abstract":"BUDI (Building Urban Designs Interactively) is an integrated 3D visualization and remote collaboration platform for complex urban design tasks. Users with different backgrounds can remotely engage in the entire design cycle, improving the quality of the end result. In BUDI, a virtual environment was designed to seamlessly expand beyond a traditional two-dimensional surface into a fully immersive three-dimensional space. Clients on various devices connect with servers for different functionalities tailored for various user groups. A demonstration with a local urban planning use-case shows the costs and benefits of BUDI as a spatial-based collaborative platform. We consider the trade-offs encountered when trying to make the collaboration seamless. Specifically, we introduce the multi-dimensional data visualization and interactions the platform provides, and outline how users can interact with and analyze various aspects of urban design.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123212953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Investigating the Effectiveness of Locked Dwell Time-based Point and Tap Gesture for Selection of Nail-sized Objects in Dense Virtual Environment
Pub Date: 2020-10-30 | DOI: 10.1145/3385959.3422701
Shimmila Bhowmick, Ayaskanta Panigrahi, Pranjal Protim Borah, P. Kalita, K. Sorathia
{"title":"Investigating the Effectiveness of Locked Dwell Time-based Point and Tap Gesture for Selection of Nail-sized Objects in Dense Virtual Environment","authors":"Shimmila Bhowmick, Ayaskanta Panigrahi, Pranjal Protim Borah, P. Kalita, K. Sorathia","doi":"10.1145/3385959.3422701","DOIUrl":"https://doi.org/10.1145/3385959.3422701","url":null,"abstract":"In immersive VR environments, object selection is an essential interaction. However, current object selection techniques suffer from issues of hand jitter, accuracy, and fatigue, especially to select nail-size objects. Here, we present locked dwell time-based point and tap, a novel object selection technique designed for nail-size object selection in a dense virtual environment. The objects are within arm’s reach. We also compare locked dwell time-based point and tap with magnetic grasp, pinch and raycasting. 40 participants evaluated the effectiveness and efficiency of these techniques. The results found that locked dwell time-based point and tap took significantly less task completion time and error rate. It was also the most preferred and caused least effort among all the techniques. We also measured easy to use, easy to learn and perceived naturalness of the techniques.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"340 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133929974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
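The dwell-based part of this technique can be sketched as a timer that fires once the pointing ray has stayed on the same target for a threshold duration. A minimal illustration, assuming a hypothetical 1.0 s threshold and a per-frame update loop; this is not the paper's implementation:

```python
# Hypothetical dwell-time selection sketch: a target is selected once the
# pointer has hovered over it continuously for DWELL_THRESHOLD seconds.
DWELL_THRESHOLD = 1.0  # assumed value, for illustration only

class DwellSelector:
    def __init__(self, threshold=DWELL_THRESHOLD):
        self.threshold = threshold
        self.current_target = None
        self.dwell_time = 0.0

    def update(self, hovered_target, dt):
        """Advance the dwell timer by dt; return the target if selection fires."""
        if hovered_target != self.current_target:
            # Pointer moved to a new target (or off all targets): reset timer.
            self.current_target = hovered_target
            self.dwell_time = 0.0
            return None
        if hovered_target is None:
            return None
        self.dwell_time += dt
        if self.dwell_time >= self.threshold:
            self.dwell_time = 0.0  # re-arm for the next selection
            return hovered_target  # selection fires
        return None

# Simulated frames at ~60 Hz: hovering "cube" long enough triggers selection.
selector = DwellSelector()
selected = None
for _ in range(70):
    hit = selector.update("cube", 1 / 60)
    if hit:
        selected = hit
print("selected:", selected)
```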
Punch Typing: Alternative Method for Text Entry in Virtual Reality
Pub Date: 2020-10-30 | DOI: 10.1145/3385959.3421722
Powen Yao, V. Lympouridis, Tian Zhu, M. Zyda, R. Jia
{"title":"Punch Typing: Alternative Method for Text Entry in Virtual Reality","authors":"Powen Yao, V. Lympouridis, Tian Zhu, M. Zyda, R. Jia","doi":"10.1145/3385959.3421722","DOIUrl":"https://doi.org/10.1145/3385959.3421722","url":null,"abstract":"A common way to perform data entry in virtual reality remains using virtual laser pointers to select characters from a flat 2D keyboard in 3-dimensional space. In this demo, we present a data input method that takes advantage of 3D space by interacting with a keyboard with keys arranged in three dimensions. Each hand is covered by a hemisphere of keys based on the QWERTY layout, allowing users to type by moving their hands in a motion similar to punching. Although the goal is to achieve a gesture more akin to tapping, current controllers or hand tracking technology doesn't allow such high fidelity. Thus, the presented interaction using VR controllers is more comparable to punching.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122426849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
BodySLAM: Opportunistic User Digitization in Multi-User AR/VR Experiences
Pub Date: 2020-10-30 | DOI: 10.1145/3385959.3418452
Karan Ahuja, Mayank Goel, Chris Harrison
{"title":"BodySLAM: Opportunistic User Digitization in Multi-User AR/VR Experiences","authors":"Karan Ahuja, Mayank Goel, Chris Harrison","doi":"10.1145/3385959.3418452","DOIUrl":"https://doi.org/10.1145/3385959.3418452","url":null,"abstract":"Today’s augmented and virtual reality (AR/VR) systems do not provide body, hand or mouth tracking without special worn sensors or external infrastructure. Simultaneously, AR/VR systems are increasingly being used in co-located, multi-user experiences, opening the possibility for opportunistic capture of other users. This is the core idea behind BodySLAM, which uses disparate camera views from users to digitize the body, hands and mouth of other people, and then relay that information back to the respective users. If a user is seen by two or more people, 3D pose can be estimated via stereo reconstruction. Our system also maps the arrangement of users in real world coordinates. Our approach requires no additional hardware or sensors beyond what is already found in commercial AR/VR devices, such as Microsoft HoloLens or Oculus Quest.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127983428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
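The "3D pose via stereo reconstruction" step mentioned in the abstract can be illustrated with standard midpoint triangulation: when two headset cameras both see the same body joint, its 3D position lies near the closest approach of the two viewing rays. A sketch assuming known camera origins and unit ray directions; the setup is illustrative, not the paper's actual pipeline:

```python
# Midpoint triangulation of a 3D point from two viewing rays (a standard
# technique; an illustration of the stereo idea, not BodySLAM's code).
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Closest point between two rays, each given as origin o and direction d."""
    # Minimize |(o1 + t1*d1) - (o2 + t2*d2)| over the ray parameters t1, t2.
    b = o2 - o1
    a11, a12, a22 = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a11 * a22 - a12 * a12  # nonzero for non-parallel rays
    t1 = (a22 * (b @ d1) - a12 * (b @ d2)) / denom
    t2 = (a12 * (b @ d1) - a11 * (b @ d2)) / denom
    p1 = o1 + t1 * d1
    p2 = o2 + t2 * d2
    return (p1 + p2) / 2  # midpoint of the closest approach

# Two headset cameras at head height, both looking at a joint near (0, 1.6, 2).
target = np.array([0.0, 1.6, 2.0])
o1 = np.array([-1.0, 1.7, 0.0])
o2 = np.array([1.0, 1.7, 0.0])
d1 = (target - o1) / np.linalg.norm(target - o1)
d2 = (target - o2) / np.linalg.norm(target - o2)
print(triangulate_midpoint(o1, d1, o2, d2))  # recovers approximately (0, 1.6, 2)
```

With noisy real detections the two rays do not intersect exactly, and the midpoint serves as the estimate.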
Exploring the Need and Design for Situated Video Analytics
Pub Date: 2020-10-30 | DOI: 10.1145/3385959.3418458
F. Alallah, Y. Sakamoto, Pourang Irani
{"title":"Exploring the Need and Design for Situated Video Analytics","authors":"F. Alallah, Y. Sakamoto, Pourang Irani","doi":"10.1145/3385959.3418458","DOIUrl":"https://doi.org/10.1145/3385959.3418458","url":null,"abstract":"Visual video analytics research, stemming from data captured by surveillance cameras, have mainly focused on traditional computing paradigms, despite emerging platforms including mobile devices. We investigate the potential for situated video analytics, which involves the inspection of video data in the actual environment where the video was captured [14]. Our ultimate goal is to explore the means to visually explore video data effectively, in situated contexts. We first investigate the performance of visual analytic tasks in situated vs. non-situated settings. We find that participants largely benefit from environmental cues for many analytic tasks. We then pose the question of how best to represent situated video data. To answer this, in a design session we explore end-users’ views on how to capture such data. Through the process of sketching, participants leveraged being situated, and explored how being in-situ influenced the participants’ integration of their designs. Based on these two elements, our paper proposes the need to develop novel spatial analytic user interfaces to support situated video analysis.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115875029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Interfacing with Sensory Options Using a Virtual Equipment System
Pub Date: 2020-10-30 | DOI: 10.1145/3385959.3421723
Powen Yao, V. Lympouridis, Tian Zhu, M. Zyda
{"title":"Interfacing with Sensory Options Using a Virtual Equipment System","authors":"Powen Yao, V. Lympouridis, Tian Zhu, M. Zyda","doi":"10.1145/3385959.3421723","DOIUrl":"https://doi.org/10.1145/3385959.3421723","url":null,"abstract":"We envision the development of a novel Virtual Equipment System to replace existing 2d Interfaces in virtual reality with a set of embedded and distributed input devices. We have built a prototype that takes advantage of the user's spatial awareness, offering them a set of virtual equipment relevant to their sensory organs. It allows the users to quickly access a suite of intuitively designed interfaces using their spatial and body awareness. Our Virtual Equipment System can be standardized and applied to other extended reality devices and frameworks.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130147345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Mixed Reality Spatial Computing in a Remote Learning Classroom
Pub Date: 2020-10-30 | DOI: 10.1145/3385959.3422705
J. Akers, Joelle Zimmermann, Laura C. Trutoiu, B. Schowengerdt, Ira Kemelmacher-Shlizerman
{"title":"Mixed Reality Spatial Computing in a Remote Learning Classroom","authors":"J. Akers, Joelle Zimmermann, Laura C. Trutoiu, B. Schowengerdt, Ira Kemelmacher-Shlizerman","doi":"10.1145/3385959.3422705","DOIUrl":"https://doi.org/10.1145/3385959.3422705","url":null,"abstract":"We present a case study on the use of mixed reality (MR) spatial computing in a fully remote classroom. We conducted a 10-week undergraduate class fully online, using a combination of traditional teleconferencing software and MR spatial computing (Magic Leap One headsets) using an avatar-mediated social interaction application (Spatial). The class culminated in a virtual poster session, using Spatial in MR to present project results, and we conducted a preliminary investigation of students experiences via interviews and questionnaires. Students reported that they had a good experience using MR for the poster session and that they thought it provided advantages over 2D video conferencing. Particular advantages cited were a stronger sense that they were in the presence of other students and instructors, an improved ability to tell where others were directing their attention, and a better ability to share 3D project content and collaborate.","PeriodicalId":157249,"journal":{"name":"Proceedings of the 2020 ACM Symposium on Spatial User Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129060981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3