RIDS: Implicit Detection of a Selection Gesture Using Hand Motion Dynamics During Freehand Pointing in Virtual Reality

Ting Zhang, Zhenhong Hu, Aakar Gupta, Chi-Hao Wu, Hrvoje Benko, Tanya R. Jonker
{"title":"RIDS: Implicit Detection of a Selection Gesture Using Hand Motion Dynamics During Freehand Pointing in Virtual Reality","authors":"Ting Zhang, Zhenhong Hu, Aakar Gupta, Chi-Hao Wu, Hrvoje Benko, Tanya R. Jonker","doi":"10.1145/3526113.3545701","DOIUrl":null,"url":null,"abstract":"Freehand interactions with augmented and virtual reality are growing in popularity, but they lack reliability and robustness. Implicit behavior from users, such as hand or gaze movements, might provide additional signals to improve the reliability of input. In this paper, the primary goal is to improve the detection of a selection gesture in VR during point-and-click interaction. Thus, we propose and investigate the use of information contained within the hand motion dynamics that precede a selection gesture. We built two models that classified if a user is likely to perform a selection gesture at the current moment in time. We collected data during a pointing-and-selection task from 15 participants and trained two models with different architectures, i.e., a logistic regression classifier was trained using predefined hand motion features and a temporal convolutional network (TCN) classifier was trained using raw hand motion data. Leave-one-subject-out cross-validation PR-AUCs of 0.36 and 0.90 were obtained for each model respectively, demonstrating that the models performed well above chance (=0.13). The TCN model was found to improve the precision of a noisy selection gesture by 11.2% without sacrificing recall performance. An initial analysis of the generalizability of the models demonstrated above-chance performance, suggesting that this approach could be scaled to other interaction tasks in the future.","PeriodicalId":200048,"journal":{"name":"Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3526113.3545701","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Freehand interactions with augmented and virtual reality are growing in popularity, but they lack reliability and robustness. Implicit behavior from users, such as hand or gaze movements, might provide additional signals that improve the reliability of input. In this paper, the primary goal is to improve the detection of a selection gesture in VR during point-and-click interaction. To that end, we propose and investigate the use of information contained within the hand motion dynamics that precede a selection gesture. We built two models that classify whether a user is likely to perform a selection gesture at the current moment in time. We collected data from 15 participants during a pointing-and-selection task and trained two models with different architectures: a logistic regression classifier trained on predefined hand motion features, and a temporal convolutional network (TCN) classifier trained on raw hand motion data. Leave-one-subject-out cross-validation yielded PR-AUCs of 0.36 and 0.90 for the two models, respectively, demonstrating that both performed well above chance (0.13). The TCN model improved the precision of a noisy selection gesture by 11.2% without sacrificing recall. An initial analysis of the generalizability of the models demonstrated above-chance performance, suggesting that this approach could scale to other interaction tasks in the future.
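The paper does not publish code, but the evaluation it describes is straightforward to sketch. Below is a minimal, hypothetical Python sketch of the logistic-regression baseline scored with PR-AUC under leave-one-subject-out cross-validation, using scikit-learn; the feature set, labels, and hyperparameters are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def loso_pr_auc(X, y, subjects):
    """Leave-one-subject-out PR-AUC for a per-frame selection classifier.

    X        : (n_frames, n_features) predefined hand-motion features
               (hypothetical, e.g. fingertip velocity, wrist deceleration)
    y        : (n_frames,) 1 if a selection gesture is imminent, else 0
    subjects : (n_frames,) participant id used to form the folds
    """
    fold_scores = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
        clf = make_pipeline(
            StandardScaler(),
            LogisticRegression(max_iter=1000, class_weight="balanced"),
        )
        clf.fit(X[train_idx], y[train_idx])
        prob = clf.predict_proba(X[test_idx])[:, 1]
        # average_precision_score summarizes the precision-recall curve
        fold_scores.append(average_precision_score(y[test_idx], prob))
    return float(np.mean(fold_scores))

# Chance-level PR-AUC equals the positive-class prevalence,
# which was 0.13 in the paper's dataset: chance = y.mean()
```

The TCN counterpart consumes raw motion streams instead of handcrafted features. A minimal causal variant, again an illustrative sketch rather than the paper's architecture (layer count, channel widths, and input channels are guesses), could look like this in PyTorch:

```python
import torch
import torch.nn as nn

class CausalTCN(nn.Module):
    """Sketch of a causal temporal convolutional classifier.

    Input  x : (batch, channels, time) raw hand-motion streams,
               e.g. 3D wrist and fingertip positions per frame.
    Output   : probability that a selection gesture is imminent
               at the window's final time step.
    """
    def __init__(self, in_channels, hidden=32, levels=3, kernel=3):
        super().__init__()
        layers, ch = [], in_channels
        for i in range(levels):
            dilation = 2 ** i
            pad = (kernel - 1) * dilation      # left-pad only => causal
            layers += [
                nn.ConstantPad1d((pad, 0), 0.0),
                nn.Conv1d(ch, hidden, kernel, dilation=dilation),
                nn.ReLU(),
            ]
            ch = hidden
        self.tcn = nn.Sequential(*layers)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.tcn(x)                        # (batch, hidden, time)
        return torch.sigmoid(self.head(h[:, :, -1]).squeeze(-1))
```

Training such a model with a class-weighted binary cross-entropy loss and scoring each held-out subject mirrors the loso_pr_auc loop above.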