{"title":"Clustering of image features based on contact and occlusion among robot body and objects","authors":"T. Somei, Y. Kobayashi, A. Shimizu, T. Kaneko","doi":"10.1109/WORV.2013.6521939","DOIUrl":null,"url":null,"abstract":"This paper presents a recognition framework for a robot without predefined knowledge on its environment. Image features (keypoints) are clustered based on statistical dependencies with respect to their motions and occlusions. Estimation of conditional probability is used to evaluate statistical dependencies among configuration of robot and features in images. Features that move depending on the configuration of the robot can be regarded as part of robot's body. Different kinds of occlusion can happen depending on relative position of robot hand and objects. Those differences can be expressed as different structures of `dependency network' in the proposed framework. The proposed recognition was verified by experiment using a humanoid robot equipped with camera and arm. It was first confirmed that part of the robot body was autonomously extracted without any a priori knowledge using conditional probability. In the generation of dependency network, different structures of networks were constructed depending on position of the robot hand relative to an object.","PeriodicalId":130461,"journal":{"name":"2013 IEEE Workshop on Robot Vision (WORV)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 IEEE Workshop on Robot Vision (WORV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WORV.2013.6521939","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
This paper presents a recognition framework for a robot without predefined knowledge of its environment. Image features (keypoints) are clustered based on statistical dependencies in their motions and occlusions. Conditional probability estimation is used to evaluate the statistical dependencies between the robot's configuration and the features observed in images. Features whose motion depends on the robot's configuration can be regarded as part of the robot's body. Different kinds of occlusion can occur depending on the relative positions of the robot's hand and objects, and these differences are expressed as different structures of the proposed `dependency network'. The framework was verified experimentally with a humanoid robot equipped with a camera and an arm. It was first confirmed that parts of the robot's body were autonomously extracted, without any a priori knowledge, using conditional probability. In the generation of the dependency network, different network structures were constructed depending on the position of the robot's hand relative to an object.
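To make the body-extraction idea concrete, below is a minimal illustrative sketch (not the authors' implementation) of clustering keypoints by statistical dependency between robot motion and feature motion. It assumes per-frame binary observations of whether the robot's configuration changed and whether each tracked keypoint moved; the function names (`conditional_dependency`, `cluster_features`), the threshold, and the binary encoding are assumptions introduced here for illustration only.

```python
import numpy as np

def conditional_dependency(robot_moved, feature_moved):
    """Compare P(feature moved | robot moved) with P(feature moved | robot stationary).
    A large positive gap suggests the keypoint's motion depends on the robot's
    configuration, i.e. it is a candidate for being part of the robot's body."""
    robot_moved = np.asarray(robot_moved, dtype=bool)
    feature_moved = np.asarray(feature_moved, dtype=bool)
    p_given_moved = feature_moved[robot_moved].mean() if robot_moved.any() else 0.0
    p_given_still = feature_moved[~robot_moved].mean() if (~robot_moved).any() else 0.0
    return p_given_moved - p_given_still

def cluster_features(robot_moved, feature_tracks, threshold=0.5):
    """Label each keypoint 'body' if its motion is strongly coupled to changes in
    the robot's configuration, otherwise 'environment'. Threshold is illustrative."""
    return {
        fid: ("body" if conditional_dependency(robot_moved, moved) > threshold
              else "environment")
        for fid, moved in feature_tracks.items()
    }

if __name__ == "__main__":
    # Hypothetical toy data: 1 = moved in that frame, 0 = did not move.
    robot = [1, 0, 1, 1, 0, 1, 0, 0]
    tracks = {
        "kp_on_hand":  [1, 0, 1, 1, 0, 1, 0, 0],  # follows the arm -> 'body'
        "kp_on_table": [0, 0, 0, 0, 0, 0, 0, 0],  # static scene -> 'environment'
    }
    print(cluster_features(robot, tracks))
```

The paper's framework goes further, using such dependency estimates among the robot configuration, feature motions, and occlusion events to build a dependency network whose structure varies with the hand's position relative to an object; the sketch above only illustrates the first step of separating body features from environment features.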