A Commonsense Knowledge-based Object Retrieval Approach for Virtual Reality

Haiyan Jiang, Dongdong Weng, Xiaonuo Dongye, Nan Zhang, Luo Le

2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), March 2023
DOI: 10.1109/VRW58643.2023.00240
Abstract
Out-of-reach object retrieval is an important task in many virtual reality applications. Hand gestures have been widely studied for object retrieval. However, a one-to-one gesture-to-object mapping metaphor causes ambiguity and imposes a memory burden when many objects must be retrieved. We therefore propose a grasping gesture-based retrieval approach for out-of-reach objects built on a graphical model, the And-Or graph (AOG), which leverages scene-object occurrence, object co-occurrence, and human grasp commonsense knowledge. The approach lets users acquire objects with natural grasping gestures drawn from their experience of grasping physical objects. Importantly, users can perform the same grasping gesture for different virtual objects, and different grasping gestures for the same virtual object, in the virtual environment.
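To make the idea of combining the three knowledge sources concrete, below is a minimal sketch, not the authors' AOG inference, of how a candidate out-of-reach object might be scored from scene-object occurrence, object co-occurrence, and grasp-object compatibility. All names, probability tables, and the multiplicative scoring rule are illustrative assumptions for this sketch, not details taken from the paper.

```python
# Minimal sketch (assumed, not the paper's implementation): rank candidate
# objects for a grasping gesture by combining three commonsense knowledge
# terms. All tables and names below are hypothetical.

from dataclasses import dataclass, field


@dataclass
class KnowledgeBase:
    # P(object | scene): how likely an object occurs in a given scene type.
    scene_object: dict[tuple[str, str], float] = field(default_factory=dict)
    # P(object | nearby object): object co-occurrence strength.
    co_occurrence: dict[tuple[str, str], float] = field(default_factory=dict)
    # P(gesture | object): how well a grasp gesture fits an object.
    grasp_compat: dict[tuple[str, str], float] = field(default_factory=dict)

    def score(self, obj: str, scene: str, nearby: list[str], gesture: str) -> float:
        """Combine the three knowledge terms into one retrieval score."""
        p_scene = self.scene_object.get((scene, obj), 1e-3)
        p_grasp = self.grasp_compat.get((gesture, obj), 1e-3)
        # Average co-occurrence with objects already present near the user.
        if nearby:
            p_co = sum(self.co_occurrence.get((n, obj), 1e-3) for n in nearby) / len(nearby)
        else:
            p_co = 1.0
        return p_scene * p_co * p_grasp


def retrieve(kb: KnowledgeBase, candidates: list[str], scene: str,
             nearby: list[str], gesture: str) -> str:
    """Return the candidate object that best explains the user's grasp."""
    return max(candidates, key=lambda obj: kb.score(obj, scene, nearby, gesture))


if __name__ == "__main__":
    kb = KnowledgeBase(
        scene_object={("kitchen", "mug"): 0.8, ("kitchen", "hammer"): 0.05},
        co_occurrence={("coffee_machine", "mug"): 0.9, ("coffee_machine", "hammer"): 0.02},
        grasp_compat={("cylindrical_grasp", "mug"): 0.7, ("cylindrical_grasp", "hammer"): 0.5},
    )
    # A cylindrical grasp in a kitchen near a coffee machine retrieves the mug.
    print(retrieve(kb, ["mug", "hammer"], "kitchen", ["coffee_machine"], "cylindrical_grasp"))
```

The point of the sketch is the disambiguation behavior described in the abstract: the same grasp can map to different objects depending on scene and nearby objects, and one object can be reached by several grasps, because the gesture is only one of several evidence terms rather than a one-to-one key.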