Predicting destination using head orientation and gaze direction during locomotion in VR

Jonathan Gandrud, V. Interrante
{"title":"在VR运动中使用头部方向和凝视方向预测目的地","authors":"Jonathan Gandrud, V. Interrante","doi":"10.1145/2931002.2931010","DOIUrl":null,"url":null,"abstract":"This paper reports preliminary investigations into the extent to which future directional intention might be reliably inferred from head pose and eye gaze during locomotion. Such findings could help inform the more effective implementation of realistic detailed animation for dynamic virtual agents in interactive first-person crowd simulations in VR, as well as the design of more efficient predictive controllers for redirected walking. In three different studies, with a total of 19 participants, we placed people at the base of a T-shaped virtual hallway environment and collected head position, head orientation, and gaze direction data as they set out to perform a hidden target search task across two rooms situated at right angles to the end of the hallway. Subjects wore an nVisorST50 HMD equipped with an Arrington Research ViewPoint eye tracker; positional data were tracked using a 12-camera Vicon MX40 motion capture system. The hidden target search task was used to blind participants to the actual focus of our study, which was to gain insight into how effectively head position, head orientation and gaze direction data might predict people's eventual choice of which room to search first. Our results suggest that eye gaze data does have the potential to provide additional predictive value over the use of 6DOF head tracked data alone, despite the relatively limited field-of-view of the display we used.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"61 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"22","resultStr":"{\"title\":\"Predicting destination using head orientation and gaze direction during locomotion in VR\",\"authors\":\"Jonathan Gandrud, V. Interrante\",\"doi\":\"10.1145/2931002.2931010\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper reports preliminary investigations into the extent to which future directional intention might be reliably inferred from head pose and eye gaze during locomotion. Such findings could help inform the more effective implementation of realistic detailed animation for dynamic virtual agents in interactive first-person crowd simulations in VR, as well as the design of more efficient predictive controllers for redirected walking. In three different studies, with a total of 19 participants, we placed people at the base of a T-shaped virtual hallway environment and collected head position, head orientation, and gaze direction data as they set out to perform a hidden target search task across two rooms situated at right angles to the end of the hallway. Subjects wore an nVisorST50 HMD equipped with an Arrington Research ViewPoint eye tracker; positional data were tracked using a 12-camera Vicon MX40 motion capture system. The hidden target search task was used to blind participants to the actual focus of our study, which was to gain insight into how effectively head position, head orientation and gaze direction data might predict people's eventual choice of which room to search first. 
Our results suggest that eye gaze data does have the potential to provide additional predictive value over the use of 6DOF head tracked data alone, despite the relatively limited field-of-view of the display we used.\",\"PeriodicalId\":102213,\"journal\":{\"name\":\"Proceedings of the ACM Symposium on Applied Perception\",\"volume\":\"61 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-07-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"22\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the ACM Symposium on Applied Perception\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2931002.2931010\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ACM Symposium on Applied Perception","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2931002.2931010","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 22

Abstract

This paper reports preliminary investigations into the extent to which future directional intention might be reliably inferred from head pose and eye gaze during locomotion. Such findings could help inform the more effective implementation of realistic detailed animation for dynamic virtual agents in interactive first-person crowd simulations in VR, as well as the design of more efficient predictive controllers for redirected walking. In three different studies, with a total of 19 participants, we placed people at the base of a T-shaped virtual hallway environment and collected head position, head orientation, and gaze direction data as they set out to perform a hidden target search task across two rooms situated at right angles to the end of the hallway. Subjects wore an nVisorST50 HMD equipped with an Arrington Research ViewPoint eye tracker; positional data were tracked using a 12-camera Vicon MX40 motion capture system. The hidden target search task was used to blind participants to the actual focus of our study, which was to gain insight into how effectively head position, head orientation and gaze direction data might predict people's eventual choice of which room to search first. Our results suggest that eye gaze data does have the potential to provide additional predictive value over the use of 6DOF head tracked data alone, despite the relatively limited field-of-view of the display we used.
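The abstract frames destination prediction as inferring a binary room choice (left vs. right) from head orientation and gaze direction collected during walking. The paper's actual analysis is not reproduced on this page; the sketch below is only a hypothetical illustration of the idea, in which `Sample`, `predict_destination`, the `gaze_weight` blend, and the 0-degree yaw decision threshold are all assumptions for this sketch, not the authors' method.

```python
# Minimal illustrative sketch (assumptions throughout, not the paper's
# analysis): predict a left/right destination choice by averaging head-yaw
# and world-space gaze-yaw samples and thresholding at straight ahead (0 deg).

from dataclasses import dataclass
from statistics import mean


@dataclass
class Sample:
    head_yaw_deg: float  # head orientation about the vertical axis, degrees
    gaze_yaw_deg: float  # eye-in-head gaze direction, degrees


def predict_destination(samples: list[Sample], gaze_weight: float = 0.5) -> str:
    """Predict 'left' or 'right' from averaged yaw signals.

    gaze_weight blends world-space gaze into the head-only estimate;
    0.0 reproduces a predictor that uses 6DOF head tracking alone.
    """
    head = mean(s.head_yaw_deg for s in samples)
    # World-space gaze yaw = head yaw + eye-in-head yaw.
    gaze = mean(s.head_yaw_deg + s.gaze_yaw_deg for s in samples)
    combined = (1.0 - gaze_weight) * head + gaze_weight * gaze
    return "right" if combined > 0.0 else "left"


if __name__ == "__main__":
    # A walker whose head stays nearly straight ahead while the eyes
    # drift rightward: adding gaze flips the head-only prediction.
    walk = [Sample(-1.0, 8.0), Sample(-0.5, 10.0), Sample(0.5, 12.0)]
    print(predict_destination(walk, gaze_weight=0.0))  # 'left'  (head pose alone)
    print(predict_destination(walk, gaze_weight=0.5))  # 'right' (head + gaze)
```

Setting `gaze_weight` to 0.0 corresponds to relying on 6DOF head-tracked data alone; the abstract's finding is that the gaze term can add predictive value beyond that baseline.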