Constraint patch matching for faster person re-identification

Aske R. Lejbølle, Kamal Nasrollahi, T. Moeslund
{"title":"Constraint patch matching for faster person re-identification","authors":"Aske R. Lejbølle, Kamal Nasrollahi, T. Moeslund","doi":"10.1109/ISBA.2017.7947703","DOIUrl":null,"url":null,"abstract":"In person re-identification, it is of great importance to extract very discriminative features in order to distinguish between images of different persons captured in different camera views. Features can be extracted globally from the entire or parts of the image, or locally from patches. As a result, matching can either be done using global feature descriptors or local patch features with the latter being more computationally heavy due to the number of patch pairs to match. Horizontal constraints are therefore usually added to only match patches at same horizontal location. As an extension, we propose an algorithm which adds vertical constraints to different body parts, to increase accuracy and decrease processing time. The constraints are applied to the CVPDL system by Li et al. [13] in order to compare accuracy with the use of a foreground mask and processing time when only adding horizontal constraints. We refer to our constrained CVPDL as C-CVPDL. Experiments conducted on two datasets, VIPeR and CUHK01, show C-CVPDL to achieve similar rank-1 accuracy on VIPeR while improving rank-1 accuracy by 3.83% for CUHK01 compared to CVPDL. Furthermore, experimental results on CUHK03 show a rank-1 accuracy 52.05%, being comparable to state-of-the-art CNN's and beating other patch matching systems. Finally, timings for VIPeR and CUHK01 show our constraints to decrease time by 32.77% and 37%, respectively, while only taking up to 36ms to compute per person.","PeriodicalId":436086,"journal":{"name":"2017 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA)","volume":"458 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISBA.2017.7947703","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In person re-identification, it is of great importance to extract highly discriminative features in order to distinguish between images of different persons captured in different camera views. Features can be extracted globally, from the entire image or parts of it, or locally from patches. As a result, matching can be done using either global feature descriptors or local patch features, with the latter being more computationally heavy due to the number of patch pairs to match. Horizontal constraints are therefore usually added so that only patches at the same horizontal location are matched. As an extension, we propose an algorithm that adds vertical constraints to different body parts in order to increase accuracy and decrease processing time. The constraints are applied to the CVPDL system by Li et al. [13] in order to compare accuracy against the use of a foreground mask, and processing time against adding only horizontal constraints. We refer to our constrained CVPDL as C-CVPDL. Experiments conducted on two datasets, VIPeR and CUHK01, show that C-CVPDL achieves similar rank-1 accuracy on VIPeR while improving rank-1 accuracy by 3.83% on CUHK01 compared to CVPDL. Furthermore, experimental results on CUHK03 show a rank-1 accuracy of 52.05%, which is comparable to state-of-the-art CNNs and beats other patch matching systems. Finally, timings for VIPeR and CUHK01 show that our constraints decrease matching time by 32.77% and 37%, respectively, while taking at most 36 ms per person to compute.
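To make the constrained-matching idea concrete, the sketch below shows one plausible way to implement local patch matching under a horizontal constraint (only nearby rows are searched) plus a body-part based vertical constraint (candidates must lie in the same body part). This is an illustration only, not the CVPDL-based C-CVPDL pipeline from the paper; the descriptor layout, the body_part_of helper, and the vertical_slack parameter are assumptions introduced for the example.

```python
import numpy as np


def constrained_patch_distance(probe, gallery, body_part_of, vertical_slack=1):
    """Distance between two images represented as grids of patch descriptors.

    probe, gallery : arrays of shape (n_rows, n_cols, dim), one descriptor
                     per patch (e.g. colour/texture histograms).
    body_part_of   : hypothetical helper mapping a row index to a body-part
                     label (e.g. "head", "torso", "legs").
    vertical_slack : how many neighbouring rows to search around each row.

    Each probe patch is compared only against gallery patches in nearby rows
    that also belong to the same body part, which shrinks the set of
    candidate patch pairs and hence the matching time.
    """
    n_rows, n_cols, dim = probe.shape
    total = 0.0
    for r in range(n_rows):
        part = body_part_of(r)
        # Candidate gallery rows: close to r and in the same body part.
        rows = [r2 for r2 in range(max(0, r - vertical_slack),
                                   min(n_rows, r + vertical_slack + 1))
                if body_part_of(r2) == part]
        candidates = gallery[rows].reshape(-1, dim)
        for c in range(n_cols):
            # Best (smallest) Euclidean distance among allowed candidates.
            diffs = candidates - probe[r, c]
            total += np.sqrt((diffs ** 2).sum(axis=1)).min()
    return total


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((12, 6, 32))   # 12x6 grid of 32-d patch descriptors
    b = rng.random((12, 6, 32))
    # Toy body-part map: top third = head, middle = torso, bottom = legs.
    part = lambda r: "head" if r < 4 else ("torso" if r < 8 else "legs")
    print(constrained_patch_distance(a, b, part))
```

The speed-up in such a scheme comes purely from pruning: each probe patch is matched against a small, spatially consistent subset of gallery patches instead of the full grid, which is the motivation behind adding the extra vertical, body-part constraint on top of the usual horizontal one.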