{"title":"基于Dempster-Shafer融合的机器人主动目标检测","authors":"A. S.PouryaHoseini, M. Nicolescu, M. Nicolescu","doi":"10.1145/3271553.3271564","DOIUrl":null,"url":null,"abstract":"Employing multiple sensing capabilities in a robotic platform offers significant advantages in increasing the recognition abilities of robots. Specifically, for vision-based object detection in a real-world environment, acquiring information from different viewpoints might be decisive for correct classifications in the presence of occlusions or to disambiguate between similar objects. For this reason, an active vision object detection system is proposed in this paper. It is implemented on a robotic environment that uses a 3D camera mounted on the robot head and an RGB camera on its hand. The system tries to detect and recognize objects being seen from the head camera, while computing a confidence score on the classification. In the case of an unreliable classification, another stage of object recognition is dynamically requested, but this time from the viewpoint of the hand camera. The objects detected from the two cameras are matched and their classification decisions are fused through a novel fusion approach based on the Dempster-Shafer evidence theory. Experimental results show sizable improvements in object recognition performance compared to a traditional singlecamera configuration, as well as applicability to handling partial occlusions.","PeriodicalId":414782,"journal":{"name":"Proceedings of the 2nd International Conference on Vision, Image and Signal Processing","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Active Object Detection Through Dynamic Incorporation of Dempster-Shafer Fusion for Robotic Applications\",\"authors\":\"A. S.PouryaHoseini, M. Nicolescu, M. Nicolescu\",\"doi\":\"10.1145/3271553.3271564\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Employing multiple sensing capabilities in a robotic platform offers significant advantages in increasing the recognition abilities of robots. Specifically, for vision-based object detection in a real-world environment, acquiring information from different viewpoints might be decisive for correct classifications in the presence of occlusions or to disambiguate between similar objects. For this reason, an active vision object detection system is proposed in this paper. It is implemented on a robotic environment that uses a 3D camera mounted on the robot head and an RGB camera on its hand. The system tries to detect and recognize objects being seen from the head camera, while computing a confidence score on the classification. In the case of an unreliable classification, another stage of object recognition is dynamically requested, but this time from the viewpoint of the hand camera. The objects detected from the two cameras are matched and their classification decisions are fused through a novel fusion approach based on the Dempster-Shafer evidence theory. 
Experimental results show sizable improvements in object recognition performance compared to a traditional singlecamera configuration, as well as applicability to handling partial occlusions.\",\"PeriodicalId\":414782,\"journal\":{\"name\":\"Proceedings of the 2nd International Conference on Vision, Image and Signal Processing\",\"volume\":\"43 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-08-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2nd International Conference on Vision, Image and Signal Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3271553.3271564\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2nd International Conference on Vision, Image and Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3271553.3271564","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Active Object Detection Through Dynamic Incorporation of Dempster-Shafer Fusion for Robotic Applications
Abstract: Employing multiple sensing capabilities in a robotic platform offers significant advantages in increasing the recognition abilities of robots. Specifically, for vision-based object detection in a real-world environment, acquiring information from different viewpoints can be decisive for correct classification in the presence of occlusions or for disambiguating between similar objects. For this reason, an active vision object detection system is proposed in this paper. It is implemented in a robotic environment that uses a 3D camera mounted on the robot head and an RGB camera on its hand. The system detects and recognizes objects seen by the head camera while computing a confidence score for each classification. When a classification is unreliable, another stage of object recognition is dynamically requested, this time from the viewpoint of the hand camera. The objects detected by the two cameras are matched, and their classification decisions are fused through a novel fusion approach based on the Dempster-Shafer evidence theory. Experimental results show sizable improvements in object recognition performance compared to a traditional single-camera configuration, as well as applicability to handling partial occlusions.
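To give context for the fusion step described in the abstract, below is a minimal Python sketch of the standard Dempster's rule of combination for fusing mass functions from two classifier views. It illustrates only the generic evidence-combination mechanism underlying Dempster-Shafer theory, not the paper's specific (novel) fusion approach; the `dempster_combine` helper, the class labels, and the mass values are illustrative assumptions.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Fuse two mass functions (dicts mapping frozenset-of-classes -> mass)
    using Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            # Product mass goes to the intersection of the two hypotheses
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            # Disjoint hypotheses contribute to the conflict term K
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("Total conflict: the two sources are incompatible")
    # Normalize by the non-conflicting mass (1 - K)
    return {hyp: mass / (1.0 - conflict) for hyp, mass in combined.items()}

# Hypothetical example: two camera views assign belief mass to the classes
# "cup" and "bowl", plus the full frame of discernment (ignorance).
frame = frozenset({"cup", "bowl"})
head_cam = {frozenset({"cup"}): 0.6, frozenset({"bowl"}): 0.1, frame: 0.3}
hand_cam = {frozenset({"cup"}): 0.5, frozenset({"bowl"}): 0.2, frame: 0.3}

fused = dempster_combine(head_cam, hand_cam)
for hyp, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(hyp), round(mass, 3))
```

In this toy example the combined mass for "cup" rises to roughly 0.76 because both views lean toward it, which mirrors the intent described in the abstract: a second viewpoint is requested only when the first classification is unreliable, and its evidence is then fused with the first to reach a more confident decision.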