A Commonsense Knowledge-based Object Retrieval Approach for Virtual Reality

Haiyan Jiang, Dongdong Weng, Xiaonuo Dongye, Nan Zhang, Luo Le
Published in: 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
Publication date: 2023-03-01
DOI: 10.1109/VRW58643.2023.00240
Citations: 1

Abstract

Out-of-reach object retrieval is an important task in many virtual reality applications. Hand gestures have been widely studied for object retrieval, but a one-to-one gesture-to-object mapping causes ambiguity and imposes a memory burden when many objects must be retrieved. We therefore propose a grasping gesture-based retrieval approach for out-of-reach objects built on a graphical model, the And-Or graph (AOG), that leverages scene-object occurrence, object co-occurrence, and human grasp commonsense knowledge. The approach lets users acquire virtual objects with natural grasping gestures drawn from their experience of grasping physical objects. Importantly, users can perform the same grasping gesture for different virtual objects, and different grasping gestures for the same virtual object, in the virtual environment.
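The abstract describes ranking candidate objects by combining three knowledge sources: scene-object occurrence, object co-occurrence, and grasp-gesture fit. A minimal sketch of that idea is below; the probability tables, function names, and the simple product-of-factors scoring are illustrative assumptions, not the paper's actual AOG implementation.

```python
# Hypothetical sketch: rank candidate objects for a grasping gesture by
# multiplying three commonsense factors, loosely mirroring the knowledge
# the paper's And-Or graph (AOG) model leverages.
# All probability tables below are made up for illustration.

def rank_candidates(gesture, scene, nearby_objects,
                    p_obj_given_scene, p_cooccur, p_gesture_given_obj):
    """Score each object as the product of:
    scene-object occurrence * object co-occurrence * grasp-gesture fit."""
    scores = {}
    for obj, p_scene in p_obj_given_scene.get(scene, {}).items():
        p_co = 1.0
        for other in nearby_objects:
            # small default prior for unseen object pairs
            p_co *= p_cooccur.get((obj, other), 0.1)
        p_grasp = p_gesture_given_obj.get((gesture, obj), 0.0)
        scores[obj] = p_scene * p_co * p_grasp
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Illustrative tables (invented numbers)
p_obj_given_scene = {"kitchen": {"mug": 0.4, "knife": 0.3, "book": 0.05}}
p_cooccur = {("mug", "kettle"): 0.8,
             ("knife", "kettle"): 0.3,
             ("book", "kettle"): 0.1}
p_gesture_given_obj = {("cylindrical_grasp", "mug"): 0.9,
                       ("cylindrical_grasp", "knife"): 0.2,
                       ("cylindrical_grasp", "book"): 0.1}

ranking = rank_candidates("cylindrical_grasp", "kitchen", ["kettle"],
                          p_obj_given_scene, p_cooccur, p_gesture_given_obj)
# the mug scores highest: 0.4 * 0.8 * 0.9 = 0.288
```

The same cylindrical grasp could retrieve a knife in a different scene, which matches the abstract's point that one gesture may map to many objects depending on context.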