Exploring and Modeling the Effects of Eye-Tracking Accuracy and Precision on Gaze-Based Steering in Virtual Environments

IEEE Transactions on Visualization and Computer Graphics (IF 6.5)
Xuning Hu, Yichuan Zhang, Yushi Wei, Liangyuting Zhang, Yue Li, Wolfgang Stuerzlinger, Hai-Ning Liang
{"title":"虚拟环境中眼动追踪精度对注视转向的影响研究与建模。","authors":"Xuning Hu, Yichuan Zhang, Yushi Wei, Liangyuting Zhang, Yue Li, Wolfgang Stuerzlinger, Hai-Ning Liang","doi":"10.1109/TVCG.2025.3616824","DOIUrl":null,"url":null,"abstract":"<p><p>Recent advances in eye-tracking technology have positioned gaze as an efficient and intuitive input method for Virtual Reality (VR), offering a natural and immersive user experience. As a result, gaze input is now leveraged for fundamental interaction tasks such as selection, manipulation, crossing, and steering. Although several studies have modeled user steering performance across various path characteristics and input methods, our understanding of gaze-based steering in VR remains limited. This gap persists because the unique qualities of eye movements-involving rapid, continuous motions-and the variability in eye-tracking make findings from other input modalities nontransferable to a gaze-based context, underscoring the need for a dedicated investigation into gaze-based steering behaviors and performance. To bridge this gap, we present two user studies to explore and model gaze-based steering. In the first one, user behavior data are collected across various path characteristics and eye-tracking conditions. Based on this data, we propose four refined models that extend the classic Steering Law to predict users' movement time in gaze-based steering tasks, explicitly incorporating the impact of tracking quality. The best-performing model achieves an adjusted R<sup>2</sup> of 0.956, corresponding to a 16% improvement in movement time prediction. This model also yields a substantial reduction in AIC (from 1550 to 1132) and BIC (from 1555 to 1142), highlighting improved model quality and better balance between goodness of fit and model complexity. Finally, data from a second study with varied settings, such as a different eye-tracking sampling rate, illustrate the strong robustness and predictability of our models. Finally, we present scenarios and applications that demonstrate how our models can be used to design enhanced gaze-based interactions in VR systems.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5000,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Exploring and Modeling the Effects of Eye-Tracking Accuracy and Precision on Gaze-Based Steering in Virtual Environments.\",\"authors\":\"Xuning Hu, Yichuan Zhang, Yushi Wei, Liangyuting Zhang, Yue Li, Wolfgang Stuerzlinger, Hai-Ning Liang\",\"doi\":\"10.1109/TVCG.2025.3616824\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Recent advances in eye-tracking technology have positioned gaze as an efficient and intuitive input method for Virtual Reality (VR), offering a natural and immersive user experience. As a result, gaze input is now leveraged for fundamental interaction tasks such as selection, manipulation, crossing, and steering. Although several studies have modeled user steering performance across various path characteristics and input methods, our understanding of gaze-based steering in VR remains limited. 
This gap persists because the unique qualities of eye movements-involving rapid, continuous motions-and the variability in eye-tracking make findings from other input modalities nontransferable to a gaze-based context, underscoring the need for a dedicated investigation into gaze-based steering behaviors and performance. To bridge this gap, we present two user studies to explore and model gaze-based steering. In the first one, user behavior data are collected across various path characteristics and eye-tracking conditions. Based on this data, we propose four refined models that extend the classic Steering Law to predict users' movement time in gaze-based steering tasks, explicitly incorporating the impact of tracking quality. The best-performing model achieves an adjusted R<sup>2</sup> of 0.956, corresponding to a 16% improvement in movement time prediction. This model also yields a substantial reduction in AIC (from 1550 to 1132) and BIC (from 1555 to 1142), highlighting improved model quality and better balance between goodness of fit and model complexity. Finally, data from a second study with varied settings, such as a different eye-tracking sampling rate, illustrate the strong robustness and predictability of our models. Finally, we present scenarios and applications that demonstrate how our models can be used to design enhanced gaze-based interactions in VR systems.</p>\",\"PeriodicalId\":94035,\"journal\":{\"name\":\"IEEE transactions on visualization and computer graphics\",\"volume\":\"PP \",\"pages\":\"\"},\"PeriodicalIF\":6.5000,\"publicationDate\":\"2025-10-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on visualization and computer graphics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/TVCG.2025.3616824\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on visualization and computer graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TVCG.2025.3616824","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract


Recent advances in eye-tracking technology have positioned gaze as an efficient and intuitive input method for Virtual Reality (VR), offering a natural and immersive user experience. As a result, gaze input is now leveraged for fundamental interaction tasks such as selection, manipulation, crossing, and steering. Although several studies have modeled user steering performance across various path characteristics and input methods, our understanding of gaze-based steering in VR remains limited. This gap persists because the unique qualities of eye movements, which involve rapid, continuous motions, and the variability in eye-tracking make findings from other input modalities nontransferable to a gaze-based context, underscoring the need for a dedicated investigation into gaze-based steering behaviors and performance. To bridge this gap, we present two user studies that explore and model gaze-based steering. In the first, user behavior data are collected across various path characteristics and eye-tracking conditions. Based on these data, we propose four refined models that extend the classic Steering Law to predict users' movement time in gaze-based steering tasks, explicitly incorporating the impact of tracking quality. The best-performing model achieves an adjusted R² of 0.956, corresponding to a 16% improvement in movement time prediction. This model also yields a substantial reduction in AIC (from 1550 to 1132) and BIC (from 1555 to 1142), indicating improved model quality and a better balance between goodness of fit and model complexity. Data from a second study with varied settings, such as a different eye-tracking sampling rate, further illustrate the robustness and predictive power of our models. Finally, we present scenarios and applications that demonstrate how our models can be used to design enhanced gaze-based interactions in VR systems.
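The abstract does not give the functional form of the four refined models, but the classic Steering Law they extend is well established: movement time grows linearly with the index of difficulty, MT = a + b·(A/W) for a straight path of length A and width W (more generally, MT = a + b ∫ ds/W(s) along the path). The sketch below shows the classic law plus one hypothetical way tracking quality could enter such a model, via an "effective width" shrunk by the tracker's accuracy offset and precision jitter (all quantities in degrees of visual angle). The constants k1 and k2 and this functional form are illustrative assumptions, not the authors' formulation.

```python
# Minimal sketch of the classic Steering Law plus one *hypothetical* way to
# fold eye-tracking quality into it. The abstract does not specify the
# authors' four model forms; the effective-width reduction below is an
# illustrative assumption, not their formulation.

def movement_time_classic(a: float, b: float, A: float, W: float) -> float:
    """Classic Steering Law for a straight path: MT = a + b * (A / W),
    where A is path length, W is path width, and ID = A / W is the
    index of difficulty."""
    return a + b * (A / W)

def movement_time_tracking_aware(a: float, b: float, A: float, W: float,
                                 accuracy: float, precision: float,
                                 k1: float = 1.0, k2: float = 1.0) -> float:
    """Hypothetical extension: systematic gaze offset (accuracy) and
    sample jitter (precision) shrink the usable path width, raising the
    effective index of difficulty.
    Assumed form: W_eff = W - k1 * accuracy - k2 * precision."""
    w_eff = max(W - k1 * accuracy - k2 * precision, 1e-6)
    return a + b * (A / w_eff)
```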
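The model-selection metrics reported in the abstract (adjusted R², AIC, BIC) are standard for least-squares fits and can be computed from a model's residuals. The helper below is a minimal sketch under a Gaussian-error assumption; the log-likelihood is used up to an additive constant, which cancels when comparing models fit to the same data.

```python
import numpy as np

def fit_ols(X: np.ndarray, y: np.ndarray):
    """Ordinary least squares with an intercept; returns the coefficient
    vector and the fitted values."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta, X1 @ beta

def model_metrics(y: np.ndarray, y_hat: np.ndarray, k: int):
    """Adjusted R², AIC, and BIC for a least-squares fit with k estimated
    coefficients (including the intercept), assuming Gaussian errors."""
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))        # residual sum of squares
    tss = float(np.sum((y - y.mean()) ** 2))     # total sum of squares
    r2 = 1.0 - rss / tss
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - k)
    aic = n * np.log(rss / n) + 2 * k            # up to a shared constant
    bic = n * np.log(rss / n) + k * np.log(n)
    return adj_r2, aic, bic
```

A lower AIC/BIC and a higher adjusted R² favor a model; this is the comparison the abstract reports when the tracking-aware model reduces AIC from 1550 to 1132 and BIC from 1555 to 1142 relative to the baseline.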
