{"title":"Handling Ambiguous Object Recognition Situations in a Robotic Environment via Dynamic Information Fusion","authors":"A. S.PouryaHoseini, M. Nicolescu, M. Nicolescu","doi":"10.1109/COGSIMA.2018.8423982","DOIUrl":null,"url":null,"abstract":"Vision is usually a rich source of information for robots aiming to understand activities that take place in their surroundings, where a relevant task can be to detect and recognize objects of interest. In real world conditions a robot may not have a good viewing angle or be sufficiently close to an object to distinguish its features, which can lead to misclassifications. One solution to address this problem is active vision, leading to an improved level of situational awareness in a dynamic environment. In that context, a vision system on the robot actively manipulates the camera to obtain enough discriminating features for the task of object detection and recognition. In this paper, an active vision system is proposed that is able to identify a situation with a high possibility of misclassification (for example, partial occlusions) and then to take appropriate action by dynamically incorporating another camera installed on the robot’s hand. A decision fusion technique based on a transferable belief model generates the final classification results. Experimental results show considerable improvements in object detection and recognition performance.","PeriodicalId":231353,"journal":{"name":"2018 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/COGSIMA.2018.8423982","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Vision is usually a rich source of information for robots aiming to understand activities that take place in their surroundings, where a relevant task is to detect and recognize objects of interest. In real-world conditions, a robot may not have a good viewing angle or be sufficiently close to an object to distinguish its features, which can lead to misclassifications. One solution to this problem is active vision, which improves situational awareness in a dynamic environment. In this context, a vision system on the robot actively manipulates the camera to obtain enough discriminating features for the task of object detection and recognition. In this paper, an active vision system is proposed that identifies situations with a high likelihood of misclassification (for example, partial occlusions) and then takes appropriate action by dynamically incorporating a second camera mounted on the robot's hand. A decision fusion technique based on the Transferable Belief Model generates the final classification results. Experimental results show considerable improvements in object detection and recognition performance.
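To make the fusion step concrete, the sketch below shows how evidence from the two cameras could be merged in the Transferable Belief Model: each camera contributes a mass function over subsets of object classes, the two are combined with the unnormalized (conjunctive) rule, and a final class is chosen via the pignistic transformation. The class names, mass values, and helper functions are illustrative assumptions for this sketch, not the authors' actual implementation.

```python
# Minimal sketch of TBM-style decision fusion for two cameras.
# Class labels and mass values below are hypothetical examples.

from itertools import product

CLASSES = frozenset({"cup", "book", "phone"})  # assumed object classes

def conjunctive_combine(m1, m2):
    """Unnormalized (conjunctive) combination of two mass functions.
    Masses map frozensets of classes to belief mass; conflicting
    evidence accumulates on the empty set, as in the open-world TBM."""
    combined = {}
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        combined[inter] = combined.get(inter, 0.0) + ma * mb
    return combined

def pignistic(m):
    """Pignistic transformation: spread each focal mass evenly over its
    members after conditioning out the conflict mass on the empty set."""
    conflict = m.get(frozenset(), 0.0)
    betp = {c: 0.0 for c in CLASSES}
    for focal, mass in m.items():
        if not focal:
            continue
        for c in focal:
            betp[c] += mass / (len(focal) * (1.0 - conflict))
    return betp

# Example: the static camera is uncertain between "cup" and "book"
# (e.g. due to partial occlusion), while the hand camera favors "cup".
m_static = {frozenset({"cup", "book"}): 0.6, frozenset({"phone"}): 0.1, CLASSES: 0.3}
m_hand   = {frozenset({"cup"}): 0.7, frozenset({"book"}): 0.1, CLASSES: 0.2}

fused = conjunctive_combine(m_static, m_hand)
betp = pignistic(fused)
print(max(betp, key=betp.get), betp)  # fused class decision and pignistic probabilities
```

In this example the second (hand) camera resolves the ambiguity left by the occluded static view, which is the kind of situation the proposed active vision system is designed to detect and act upon.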