Title: Let's Go There: Combining Voice and Pointing in VR
Authors: Jaisie Sin, Cosmin Munteanu
DOI: 10.1145/3405755.3406161 (https://doi.org/10.1145/3405755.3406161)
Published in: Proceedings of the 2nd Conference on Conversational User Interfaces
Publication date: 2020-07-13
Citations: 5
Abstract
Hand-tracking has been advertised as a natural means to engage with a virtual environment that also enhances the feeling of presence in and lowers the barriers to entry to virtual reality. We seek to explore combining hand-tracking with voice input (which is then processed with automatic speech recognition) for a novel multimodal experience. Thus, we created Let's Go There, which explores this joint-input method for four functions in virtual reality environments: positioning, object identification, information mapping, and disambiguation. This combination may serve as a more intuitive means for users to communicate and navigate in virtual environments. We expect there to be multiple potential applications of this multimodal form of interaction across numerous domains including training, education, teamwork, and games.
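The paper does not publish implementation details, but the core idea — fusing a hand-tracking pointing ray with a recognized voice command so that deictic words like "there" or "that" resolve to the pointed target — can be sketched roughly as follows. This is a minimal illustration under assumed names (`SceneObject`, `pick_target`, `resolve_command`) and a simple cosine-alignment picking rule, not the authors' actual system:

```python
import math
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    position: tuple  # (x, y, z) world coordinates

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def pick_target(ray_origin, ray_direction, objects, min_alignment=0.9):
    """Return the scene object best aligned with the pointing ray, or None.

    Alignment is the cosine between the ray direction and the direction
    from the ray origin to the object; only objects above min_alignment count.
    """
    ray_dir = _normalize(ray_direction)
    best, best_score = None, min_alignment
    for obj in objects:
        to_obj = _normalize(tuple(p - o for p, o in zip(obj.position, ray_origin)))
        score = sum(a * b for a, b in zip(ray_dir, to_obj))  # cosine similarity
        if score > best_score:
            best, best_score = obj, score
    return best

# Deictic words whose referent comes from the pointing gesture, not the speech.
DEICTIC = {"there", "that", "this", "it"}

def resolve_command(transcript, ray_origin, ray_direction, objects):
    """Fuse an ASR transcript with a pointing ray.

    If the utterance contains a deictic word, the pointed-at object
    disambiguates it; otherwise the target is left unresolved.
    """
    words = transcript.lower().split()
    action = words[0] if words else None
    if DEICTIC.intersection(words):
        target = pick_target(ray_origin, ray_direction, objects)
        if target is not None:
            return {"action": action, "target": target.name}
    return {"action": action, "target": None}
```

For example, with a door ahead of the user and a chest off to the side, saying "go there" while pointing forward would resolve the deictic "there" to the door — the disambiguation function the abstract names.
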