{"title":"低级语音和手部跟踪交互动作:Let's Go There的探索","authors":"Jaisie Sin, Cosmin Munteanu","doi":"10.1145/3447527.3474875","DOIUrl":null,"url":null,"abstract":"Hand-tracking allows users to engage with a virtual environment with their own hands, rather than the more traditional method of using accompanying controllers in order to operate the device they are using and interact with the virtual world. We seek to explore the range of low-level interaction actions and high-level interaction tasks and domains can be associated with the multimodal hand-tracking and voice input in VR. Thus, we created Let's Go There, which explores this joint-input method. So far, we have identified four low-level interaction actions which are exemplified by this demo: positioning oneself, positioning others, selection, and information assignment. We anticipate potential high-level interaction tasks and domains to include customer service training, social skills training, and cultural competency training (e.g. when interacting with older adults). Let's Go There, the system described in this paper, had been previously demonstrated at CUI 2020 and MobileHCI 2021. We have since updated our approach to its development to separate it into low- and high-level interactions. Thus, we believe there is value in bringing it to MobileHCI again to highlight these different types of interactions for further showcase and discussion.","PeriodicalId":281566,"journal":{"name":"Adjunct Publication of the 23rd International Conference on Mobile Human-Computer Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Low-level Voice and Hand-Tracking Interaction Actions: Explorations with Let's Go There\",\"authors\":\"Jaisie Sin, Cosmin Munteanu\",\"doi\":\"10.1145/3447527.3474875\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Hand-tracking allows users to engage with a virtual environment with their own hands, rather than the more traditional method of using accompanying controllers in order to operate the device they are using and interact with the virtual world. We seek to explore the range of low-level interaction actions and high-level interaction tasks and domains can be associated with the multimodal hand-tracking and voice input in VR. Thus, we created Let's Go There, which explores this joint-input method. So far, we have identified four low-level interaction actions which are exemplified by this demo: positioning oneself, positioning others, selection, and information assignment. We anticipate potential high-level interaction tasks and domains to include customer service training, social skills training, and cultural competency training (e.g. when interacting with older adults). Let's Go There, the system described in this paper, had been previously demonstrated at CUI 2020 and MobileHCI 2021. We have since updated our approach to its development to separate it into low- and high-level interactions. 
Thus, we believe there is value in bringing it to MobileHCI again to highlight these different types of interactions for further showcase and discussion.\",\"PeriodicalId\":281566,\"journal\":{\"name\":\"Adjunct Publication of the 23rd International Conference on Mobile Human-Computer Interaction\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-09-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Adjunct Publication of the 23rd International Conference on Mobile Human-Computer Interaction\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3447527.3474875\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Adjunct Publication of the 23rd International Conference on Mobile Human-Computer Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3447527.3474875","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Hand-tracking allows users to engage with a virtual environment using their own hands, rather than the more traditional method of operating the device through accompanying controllers to interact with the virtual world. We seek to explore the range of low-level interaction actions and high-level interaction tasks and domains that can be associated with multimodal hand-tracking and voice input in VR. To this end, we created Let's Go There, which explores this joint input method. So far, we have identified four low-level interaction actions exemplified by this demo: positioning oneself, positioning others, selection, and information assignment. We anticipate potential high-level interaction tasks and domains to include customer service training, social skills training, and cultural competency training (e.g., when interacting with older adults). Let's Go There, the system described in this paper, was previously demonstrated at CUI 2020 and MobileHCI 2021. We have since updated our approach to its development, separating it into low- and high-level interactions. Thus, we believe there is value in bringing it to MobileHCI again to highlight these different types of interactions for further showcase and discussion.
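To make the four low-level interaction actions more concrete, the following is a minimal illustrative sketch (in Python) of how a combined voice command and hand-tracking event might be mapped onto one of them. This is not the authors' implementation; all names (Action, HandEvent, VoiceCommand, dispatch) and the trigger phrases are hypothetical assumptions for illustration only.

# Illustrative sketch only (not the authors' system): mapping one voice command plus one
# hand-tracking event to one of the four low-level interaction actions described above.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Action(Enum):
    POSITION_SELF = auto()        # positioning oneself in the scene
    POSITION_OTHER = auto()       # positioning another entity in the scene
    SELECT = auto()               # selecting an object or person
    ASSIGN_INFORMATION = auto()   # attaching spoken information to a selection


@dataclass
class HandEvent:
    gesture: str                  # hypothetical gesture label, e.g. "point" or "pinch"
    target: Optional[str] = None  # scene object the gesture resolves to, if any


@dataclass
class VoiceCommand:
    verb: str                     # hypothetical recognized intent, e.g. "go", "put", "select"
    payload: Optional[str] = None # free-form spoken content, e.g. a label to assign


def dispatch(voice: VoiceCommand, hand: HandEvent) -> Optional[Action]:
    """Combine a voice command with a hand-tracking event into a low-level action."""
    if voice.verb == "go" and hand.gesture == "point":
        return Action.POSITION_SELF          # e.g. "let's go there" while pointing at a spot
    if voice.verb == "put" and hand.gesture == "point" and hand.target:
        return Action.POSITION_OTHER         # e.g. "put them there" while pointing at a spot
    if voice.verb == "select" and hand.gesture == "pinch" and hand.target:
        return Action.SELECT                 # pinch an object while stating the intent
    if voice.verb == "label" and hand.target and voice.payload:
        return Action.ASSIGN_INFORMATION     # attach spoken information to the targeted object
    return None


if __name__ == "__main__":
    # Example: the user points at a location marker and says "let's go there".
    print(dispatch(VoiceCommand("go"), HandEvent("point", target="floor_marker")))

In this sketch, the voice channel carries the intent and any spoken content, while the hand-tracking channel supplies the spatial referent; a real system would of course resolve gestures and speech continuously rather than as single paired events.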