Visual cues improve spatial orientation in telepresence as in VR

Jennifer Brade, Tobias Hoppe, Sven Winkler, Philipp Klimant, Georg Jahn
{"title":"视觉线索改善空间定向在远程呈现在虚拟现实","authors":"Jennifer Brade, Tobias Hoppe, Sven Winkler, Philipp Klimant, Georg Jahn","doi":"10.54941/ahfe1002862","DOIUrl":null,"url":null,"abstract":"When moving in reality, successful spatial orientation is enabled\n through continuous updating of egocentric spatial relations to the\n surrounding environment. But in Virtual Reality (VR) or telepresence, cues\n of one’s own movement are rarely provided, which typically impairs spatial\n orientation. Telepresence robots are mostly operated by minimal real\n movements of the user via PC-based controls, which entail a lack of real\n translations and rotations and thus can disrupt spatial orientation. Studies\n in virtual environments show that a certain degree of spatial updating is\n possible without body-based cues to self-motion (vestibular, proprioceptive,\n motor efference) solely through continuous visual information about the\n change in orientation or additional visual landmarks. While a large number\n of studies investigated spatial orientation in virtual environments, spatial\n updating in telepresence remains largely unexplored. VR and telepresence\n environments share the common feature that the user is not physically\n located in the mediated environment and thus interacts in an environment\n that does not correspond to the body-based cues generated by posture and\n self-motion in the real environment. Despite this similarity, virtual and\n telepresence environments also have significant differences in how the\n environment is presented: common, commercially available telepresence\n systems can usually only display the environment on a 2D monitor. The 2D\n monitor impairs the operator's depth perception compared with 3D\n presentation in VR, for instance in an HMD, and interacting by means of\n mouse movements on a 2D plane is indirect compared with moving VR\n controllers and the HMD in 3D space. Thus, it cannot be assumed without\n verification that the spatial orientation in 2D telepresence systems can be\n compared with that in VR systems. Therefore, we employed a standard spatial\n orientation task with a telepresence robot to evaluate if results concerning\n the number of visual cues turn out similar to findings in VR-studies.To\n address the research question, a triangle completion task (TCT) was carried\n out using the telepresence robot Double 3. The participants (n= 30)\n controlled the telepresence robot remotely using a computer and a mouse: At\n first, they moved the robot to a specified point, then they turned the robot\n to orient towards a second specified point, moved there and were then asked\n to return the robot to its starting point. To evaluate the influence of the\n number of visual cues on the performance in the TCT, three conditions that\n varied in the amount of visual information provided for navigating the third\n leg were presented in a within-subjects design. Similar to studies that\n showed support of spatial orientation in TCT by visual cues in VR, the\n number of visual cues available while navigating the third leg supported\n triangle completion with a telepresence robot. This was confirmed by the\n trend of reduced error with more visual cues and a reliable difference\n between the conditions with sparse and many visual cues. Connecting results\n obtained in VR with telepresence and teleoperation scenarios is valuable to\n inform designing telepresence and teleoperation interfaces. 
We demonstrated\n that a standard task for studying spatial orientation performance is\n applicable with telepresence robots.","PeriodicalId":269162,"journal":{"name":"Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Visual cues improve spatial orientation in telepresence as in VR\",\"authors\":\"Jennifer Brade, Tobias Hoppe, Sven Winkler, Philipp Klimant, Georg Jahn\",\"doi\":\"10.54941/ahfe1002862\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"When moving in reality, successful spatial orientation is enabled\\n through continuous updating of egocentric spatial relations to the\\n surrounding environment. But in Virtual Reality (VR) or telepresence, cues\\n of one’s own movement are rarely provided, which typically impairs spatial\\n orientation. Telepresence robots are mostly operated by minimal real\\n movements of the user via PC-based controls, which entail a lack of real\\n translations and rotations and thus can disrupt spatial orientation. Studies\\n in virtual environments show that a certain degree of spatial updating is\\n possible without body-based cues to self-motion (vestibular, proprioceptive,\\n motor efference) solely through continuous visual information about the\\n change in orientation or additional visual landmarks. While a large number\\n of studies investigated spatial orientation in virtual environments, spatial\\n updating in telepresence remains largely unexplored. VR and telepresence\\n environments share the common feature that the user is not physically\\n located in the mediated environment and thus interacts in an environment\\n that does not correspond to the body-based cues generated by posture and\\n self-motion in the real environment. Despite this similarity, virtual and\\n telepresence environments also have significant differences in how the\\n environment is presented: common, commercially available telepresence\\n systems can usually only display the environment on a 2D monitor. The 2D\\n monitor impairs the operator's depth perception compared with 3D\\n presentation in VR, for instance in an HMD, and interacting by means of\\n mouse movements on a 2D plane is indirect compared with moving VR\\n controllers and the HMD in 3D space. Thus, it cannot be assumed without\\n verification that the spatial orientation in 2D telepresence systems can be\\n compared with that in VR systems. Therefore, we employed a standard spatial\\n orientation task with a telepresence robot to evaluate if results concerning\\n the number of visual cues turn out similar to findings in VR-studies.To\\n address the research question, a triangle completion task (TCT) was carried\\n out using the telepresence robot Double 3. The participants (n= 30)\\n controlled the telepresence robot remotely using a computer and a mouse: At\\n first, they moved the robot to a specified point, then they turned the robot\\n to orient towards a second specified point, moved there and were then asked\\n to return the robot to its starting point. 
To evaluate the influence of the\\n number of visual cues on the performance in the TCT, three conditions that\\n varied in the amount of visual information provided for navigating the third\\n leg were presented in a within-subjects design. Similar to studies that\\n showed support of spatial orientation in TCT by visual cues in VR, the\\n number of visual cues available while navigating the third leg supported\\n triangle completion with a telepresence robot. This was confirmed by the\\n trend of reduced error with more visual cues and a reliable difference\\n between the conditions with sparse and many visual cues. Connecting results\\n obtained in VR with telepresence and teleoperation scenarios is valuable to\\n inform designing telepresence and teleoperation interfaces. We demonstrated\\n that a standard task for studying spatial orientation performance is\\n applicable with telepresence robots.\",\"PeriodicalId\":269162,\"journal\":{\"name\":\"Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy\",\"volume\":\"28 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.54941/ahfe1002862\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54941/ahfe1002862","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

When moving in the real world, successful spatial orientation is enabled by continuous updating of egocentric spatial relations to the surrounding environment. In Virtual Reality (VR) or telepresence, however, cues to one's own movement are rarely provided, which typically impairs spatial orientation. Telepresence robots are mostly operated through PC-based controls with minimal real movement by the user, so real translations and rotations are absent, which can disrupt spatial orientation. Studies in virtual environments show that a certain degree of spatial updating is possible without body-based cues to self-motion (vestibular, proprioceptive, motor efference), solely through continuous visual information about the change in orientation or through additional visual landmarks. While a large number of studies have investigated spatial orientation in virtual environments, spatial updating in telepresence remains largely unexplored.

VR and telepresence environments share the feature that the user is not physically located in the mediated environment and thus interacts in an environment that does not correspond to the body-based cues generated by posture and self-motion in the real environment. Despite this similarity, the two differ substantially in how the environment is presented: common, commercially available telepresence systems can usually display the environment only on a 2D monitor. Compared with 3D presentation in VR, for instance in an HMD, a 2D monitor impairs the operator's depth perception, and interacting by means of mouse movements on a 2D plane is indirect compared with moving VR controllers and the HMD in 3D space. It therefore cannot be assumed without verification that spatial orientation in 2D telepresence systems is comparable to that in VR systems. We therefore employed a standard spatial orientation task with a telepresence robot to evaluate whether results concerning the number of visual cues are similar to findings from VR studies.

To address this research question, a triangle completion task (TCT) was carried out using the telepresence robot Double 3. The participants (n = 30) controlled the robot remotely using a computer and a mouse: first, they moved the robot to a specified point, then they turned the robot to orient towards a second specified point, moved there, and were then asked to return the robot to its starting point. To evaluate the influence of the number of visual cues on TCT performance, three conditions that varied in the amount of visual information available for navigating the third leg were presented in a within-subjects design.

Consistent with studies showing that visual cues support spatial orientation in the TCT in VR, the number of visual cues available while navigating the third leg supported triangle completion with the telepresence robot. This was confirmed by a trend of reduced error with more visual cues and a reliable difference between the conditions with sparse and with many visual cues. Connecting results obtained in VR with telepresence and teleoperation scenarios is valuable for informing the design of telepresence and teleoperation interfaces. We demonstrated that a standard task for studying spatial orientation performance is applicable with telepresence robots.
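To make the geometry of the triangle completion task concrete, the sketch below computes the ideal third-leg response, i.e. the turn and distance that would return the robot exactly to its starting point given the two outbound legs. This is a generic illustration of the task geometry, not the authors' analysis code; the coordinate frame and turn-direction convention are assumptions.

```python
import math

def ideal_homing(leg1: float, leg2: float, turn_deg: float):
    """Correct third-leg response in a triangle completion task (TCT):
    the turn (degrees) and distance needed to return to the start.

    Conventions (assumed, not taken from the paper): the robot starts
    at the origin heading along +x, drives leg1, rotates by turn_deg
    (positive = left), then drives leg2.
    """
    heading = math.radians(turn_deg)
    # Position after the second leg.
    x = leg1 + leg2 * math.cos(heading)
    y = leg2 * math.sin(heading)
    # Bearing from the current position back to the start, and distance.
    home_bearing = math.atan2(-y, -x)
    distance = math.hypot(x, y)
    # Turn relative to the current heading, wrapped to [-180, 180).
    turn = math.degrees(home_bearing) - turn_deg
    turn = (turn + 180.0) % 360.0 - 180.0
    return turn, distance

# Example: 3 m out, 90-degree left turn, 4 m -> a 5 m return leg
# after a further left turn of about 143 degrees.
print(ideal_homing(3.0, 4.0, 90.0))  # ~(143.13, 5.0)
```

Deviations of a participant's executed turn and distance from this ideal response are the usual error measures in TCT studies.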
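The abstract reports reduced homing error with more visual cues and a reliable difference between the sparse- and many-cue conditions. A within-subjects contrast of that kind could be checked with a paired comparison, as in the hypothetical sketch below; the condition names, error values, and the choice of a paired t-test are illustrative assumptions, not the study's actual data or analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant mean homing errors (metres) for the
# three cue conditions; values are simulated, not the study's data.
rng = np.random.default_rng(seed=0)
n = 30
err_sparse = rng.normal(1.2, 0.4, n)  # few visual cues on the third leg
err_medium = rng.normal(1.0, 0.4, n)
err_many = rng.normal(0.8, 0.4, n)    # many visual cues on the third leg

# Descriptive trend across conditions.
for name, err in [("sparse", err_sparse), ("medium", err_medium), ("many", err_many)]:
    print(f"{name:>6}: mean error = {err.mean():.2f} m")

# Paired contrast between the sparse- and many-cue conditions,
# mirroring the comparison the abstract reports as reliable.
t, p = stats.ttest_rel(err_sparse, err_many)
print(f"sparse vs. many: t({n - 1}) = {t:.2f}, p = {p:.4f}")
```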