Object reidentification in real world scenarios across multiple non-overlapping cameras
Guy Berdugo, Omri Soceanu, Y. Moshe, Dmitry Rudoy, Itsik Dvir
2010 18th European Signal Processing Conference, August 23, 2010. DOI: 10.5281/ZENODO.42233
Citations: 25
Abstract
In a world where surveillance cameras are at every street corner, there is a growing need for synergy among cameras as well as for automation of the data analysis process. This paper deals with the problem of reidentification of objects across a set of multiple camera inputs without any prior knowledge of the cameras' distribution or coverage. The proposed approach is robust to changes in scale, lighting conditions, noise and viewpoint across cameras, as well as to object rotation and unpredictable trajectories. Both novel and traditional features are extracted from the object. Invariance to lighting and noise is achieved using textural features such as oriented gradients, color ratios and color saliency. A probabilistic framework incorporates the different features into a probabilistic human model. Experimental results show that the textural features improve the reidentification rate and the robustness of the recognition process compared with other state-of-the-art algorithms.
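The abstract names two of the textural cues (oriented gradients and color ratios) and a probabilistic fusion of per-feature evidence, but gives no implementation details. The following Python sketch is a hypothetical reconstruction of those ideas, not the authors' code: it builds an unsigned oriented-gradient histogram and a log color-ratio histogram for an object crop, then fuses per-feature similarities under a naive independence assumption. All function names and parameter choices (bin counts, Bhattacharyya similarity) are illustrative assumptions.

```python
import numpy as np

def oriented_gradient_hist(gray, n_bins=8):
    """Normalized histogram of unsigned gradient orientations, magnitude-weighted.

    A simplified stand-in for the paper's 'oriented gradients' feature.
    """
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi            # unsigned orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (hist.sum() + 1e-9)

def color_ratio_hist(rgb, n_bins=8):
    """2-D histogram of log color ratios (R/G, G/B).

    Log ratios cancel a multiplicative illumination change, which is one way
    to get the lighting invariance the abstract claims (an assumption here).
    """
    rgb = rgb.astype(float) + 1.0               # avoid division by zero
    ratios = np.stack([rgb[..., 0] / rgb[..., 1],
                       rgb[..., 1] / rgb[..., 2]], axis=-1)
    ratios = np.clip(np.log(ratios), -1.0, 1.0)
    hist, _ = np.histogramdd(ratios.reshape(-1, 2), bins=n_bins,
                             range=[(-1.0, 1.0), (-1.0, 1.0)])
    h = hist.ravel()
    return h / (h.sum() + 1e-9)

def bhattacharyya(p, q):
    """Similarity in [0, 1] between two normalized histograms."""
    return float(np.sum(np.sqrt(p * q)))

def match_score(obj_a, obj_b):
    """Fuse per-feature similarities as independent likelihoods (product rule)."""
    s_grad = bhattacharyya(oriented_gradient_hist(obj_a.mean(axis=-1)),
                           oriented_gradient_hist(obj_b.mean(axis=-1)))
    s_color = bhattacharyya(color_ratio_hist(obj_a),
                            color_ratio_hist(obj_b))
    return s_grad * s_color
```

Under this fusion rule, an object crop compared with itself scores 1, and candidates from other cameras can be ranked by `match_score` against the query object; the real system would add color saliency and a full human probabilistic model per the abstract.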