{"title":"Constraint patch matching for faster person re-identification","authors":"Aske R. Lejbølle, Kamal Nasrollahi, T. Moeslund","doi":"10.1109/ISBA.2017.7947703","DOIUrl":null,"url":null,"abstract":"In person re-identification, it is of great importance to extract very discriminative features in order to distinguish between images of different persons captured in different camera views. Features can be extracted globally from the entire or parts of the image, or locally from patches. As a result, matching can either be done using global feature descriptors or local patch features with the latter being more computationally heavy due to the number of patch pairs to match. Horizontal constraints are therefore usually added to only match patches at same horizontal location. As an extension, we propose an algorithm which adds vertical constraints to different body parts, to increase accuracy and decrease processing time. The constraints are applied to the CVPDL system by Li et al. [13] in order to compare accuracy with the use of a foreground mask and processing time when only adding horizontal constraints. We refer to our constrained CVPDL as C-CVPDL. Experiments conducted on two datasets, VIPeR and CUHK01, show C-CVPDL to achieve similar rank-1 accuracy on VIPeR while improving rank-1 accuracy by 3.83% for CUHK01 compared to CVPDL. Furthermore, experimental results on CUHK03 show a rank-1 accuracy 52.05%, being comparable to state-of-the-art CNN's and beating other patch matching systems. Finally, timings for VIPeR and CUHK01 show our constraints to decrease time by 32.77% and 37%, respectively, while only taking up to 36ms to compute per person.","PeriodicalId":436086,"journal":{"name":"2017 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA)","volume":"458 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISBA.2017.7947703","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In person re-identification, it is of great importance to extract highly discriminative features in order to distinguish between images of different persons captured in different camera views. Features can be extracted globally, from the entire image or parts of it, or locally, from patches. Matching can therefore be done using either global feature descriptors or local patch features, with the latter being more computationally heavy due to the number of patch pairs to match. Horizontal constraints are therefore usually added so that only patches at the same horizontal location are matched. As an extension, we propose an algorithm that adds vertical constraints tied to different body parts, to increase accuracy and decrease processing time. The constraints are applied to the CVPDL system by Li et al. [13] in order to compare accuracy against the use of a foreground mask, and processing time against adding only horizontal constraints. We refer to our constrained CVPDL as C-CVPDL. Experiments conducted on two datasets, VIPeR and CUHK01, show that C-CVPDL achieves similar rank-1 accuracy on VIPeR while improving rank-1 accuracy by 3.83% on CUHK01 compared to CVPDL. Furthermore, experimental results on CUHK03 show a rank-1 accuracy of 52.05%, which is comparable to state-of-the-art CNNs and beats other patch matching systems. Finally, timings for VIPeR and CUHK01 show that our constraints decrease matching time by 32.77% and 37%, respectively, while taking at most 36 ms per person to compute.
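To make the constrained matching idea concrete, the sketch below is a minimal illustration (not the authors' implementation) of local patch matching with a horizontal constraint and vertical body-part bands: each probe patch is compared only against gallery patches in the same vertical band and within a small row window. The grid size, band boundaries, Euclidean distance, and the `h_window` parameter are illustrative assumptions.

```python
import numpy as np

def patch_distance(probe_feats, gallery_feats, part_bounds, h_window=0):
    """Constrained patch-matching sketch (illustrative, not the paper's code).

    probe_feats, gallery_feats: arrays of shape (rows, cols, dim) holding one
        feature vector per patch on a regular grid.
    part_bounds: list of (start_row, end_row) tuples defining vertical
        body-part bands (e.g. head, torso, legs); patches are matched only
        within their own band.
    h_window: how many neighbouring rows inside the band a patch may match
        against (0 = strict horizontal constraint, same row only).
    """
    rows, cols, dim = probe_feats.shape
    total = 0.0
    for start, end in part_bounds:
        for r in range(start, end):
            lo = max(start, r - h_window)
            hi = min(end, r + h_window + 1)
            # Candidate gallery patches: same band, rows within the window.
            cand = gallery_feats[lo:hi].reshape(-1, dim)
            for c in range(cols):
                d = np.linalg.norm(cand - probe_feats[r, c], axis=1)
                total += d.min()  # best match among constrained candidates
    return total

# Usage with random features and an illustrative three-band split.
rng = np.random.default_rng(0)
probe = rng.normal(size=(12, 6, 32))
gallery = rng.normal(size=(12, 6, 32))
print(patch_distance(probe, gallery,
                     part_bounds=[(0, 3), (3, 8), (8, 12)], h_window=1))
```

Restricting the candidate set per patch is what reduces the number of patch pairs to compare, which is the source of the reported speed-up; the actual features, metric learning, and body-part segmentation used in C-CVPDL are described in the paper itself.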