{"title":"A Mobile System for Scene Monitoring and Object Retrieval","authors":"David Birkas, K. Birkas, T. Popa","doi":"10.1145/2915926.2915933","DOIUrl":null,"url":null,"abstract":"Object retrieval in a scene is an important, but largely unsolved research problem with a wide range of practical applications in security and monitoring systems, in automatic navigation such as self-driving cars, in 3D modelling, scene understanding, etc. Although this problem has been traditionally researched using color cameras and video setups as its main sensing modality, the emergence and already big success of the real-time hybrid depth and color cameras such as the Kinect that are now available even on several laptop, tablet and smart-phone models opened this problem to new popular acquisition modalities. In this paper we present a data driven retrieval system prototype based on a depth-camera sensing technology. Our system uses a combination of local and global feature and fuses the information from different views to reliably retrieve objects in a scene in the presence of noisy data and severe occlusions. Our system does not require that the objects in the scene are in their natural up-right position and is capable of retrieving smaller object than previous depth-map methods.","PeriodicalId":409915,"journal":{"name":"Proceedings of the 29th International Conference on Computer Animation and Social Agents","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 29th International Conference on Computer Animation and Social Agents","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2915926.2915933","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
Object retrieval in a scene is an important but largely unsolved research problem, with a wide range of practical applications in security and monitoring systems, automatic navigation such as self-driving cars, 3D modelling, and scene understanding. Although this problem has traditionally been studied with color cameras and video setups as the main sensing modality, the emergence and rapid success of real-time hybrid depth and color cameras such as the Kinect, now available even on several laptop, tablet, and smartphone models, has opened the problem to new, widely accessible acquisition modalities. In this paper we present a data-driven retrieval system prototype based on depth-camera sensing technology. Our system uses a combination of local and global features and fuses information from different views to reliably retrieve objects in a scene in the presence of noisy data and severe occlusions. Our system does not require that the objects in the scene be in their natural upright position, and it is capable of retrieving smaller objects than previous depth-map methods.
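The abstract does not specify the descriptors or the fusion scheme, so the following is only a minimal sketch of the general idea it describes: compute a descriptor per depth view, fuse the per-view descriptors, and match against a database of known objects. The toy depth-histogram descriptor, the averaging fusion, and the cosine-similarity matching are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of multi-view, depth-based object retrieval.
# NOT the paper's pipeline: descriptor, fusion, and matching are assumptions.
import numpy as np


def toy_descriptor(depth_patch, bins=32):
    """Toy descriptor: normalized depth histogram plus gradient statistics."""
    finite = depth_patch[np.isfinite(depth_patch)]
    hist, _ = np.histogram(finite, bins=bins, range=(0.0, 5.0), density=True)
    gy, gx = np.gradient(depth_patch)
    grad = np.hypot(gx, gy)
    feat = np.concatenate([hist, [grad.mean(), grad.std()]])
    return feat / (np.linalg.norm(feat) + 1e-8)


def fuse_views(view_descriptors):
    """Fuse per-view descriptors; plain averaging stands in for the
    (unspecified) view-fusion strategy."""
    fused = np.mean(view_descriptors, axis=0)
    return fused / (np.linalg.norm(fused) + 1e-8)


def retrieve(query_views, database):
    """Return the database label whose descriptor is most similar
    (cosine similarity) to the fused query descriptor."""
    q = fuse_views([toy_descriptor(v) for v in query_views])
    return max(database.items(), key=lambda kv: float(q @ kv[1]))[0]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake segmented depth patches (metres) standing in for sensor views.
    query = [1.0 + 0.05 * rng.standard_normal((64, 64)) for _ in range(3)]
    database = {
        "mug": toy_descriptor(1.0 + 0.05 * rng.standard_normal((64, 64))),
        "box": toy_descriptor(3.0 + 0.05 * rng.standard_normal((64, 64))),
    }
    print(retrieve(query, database))  # expected to print "mug"
```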