Improved Part-aligned Deep Features Learning for Person Re-Identification

Sheng-Luen Chung, Yuchen Xue, S. Chien, Ruei-Shan Chan
{"title":"Improved Part-aligned Deep Features Learning for Person Re-Identification","authors":"Sheng-Luen Chung, Yuchen Xue, S. Chien, Ruei-Shan Chan","doi":"10.1109/AVSS.2019.8909867","DOIUrl":null,"url":null,"abstract":"Person Re-IDentification (Re-ID) is to recognize a person who has been seen before by different cameras from possibly scenes. Re-ID poses as one of the most difficult computer vision problems owing to the enormous amount of identities involved in a large-scale image pool, with much similar appearance constrained by low resolution image, in a possibly occluded scene, etc. Global features geared for general object recognition and face recognition are far less adequate to re-identify a same person across cameras. As such, more discriminating features are needed to identify people. In particular, part-based feature extraction methods that extract by learning local fine- grained features of different human body parts from detected persons have been proved effective for person Re-ID. To further improve the part-aligned spatial feature approach, this paper proposes an improved part-aligned feature (IPAF) deep learning framework to better characterize a person's complete information with the following threes highlights: part alignment, finer part segmentation, and better learning network backbone. 
Our proposed solution has been trained and tested on the two most comprehensive Re-ID datasets with comparable performance of reported state-of-the-art solutions: for the dataset of Market1501 (DukeMTMC-reID), our proposed solution both achieves competitive results with mAP of 85.96% (84.70%) and CMC 1 of 94.30% (89.84%), respectively.","PeriodicalId":243194,"journal":{"name":"2019 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AVSS.2019.8909867","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Person Re-IDentification (Re-ID) is the task of recognizing a person who has been seen before by different cameras, possibly in different scenes. Re-ID is one of the most difficult computer vision problems owing to the enormous number of identities involved in a large-scale image pool, often with very similar appearances, low-resolution images, and possibly occluded scenes. Global features geared toward general object recognition and face recognition are far less adequate for re-identifying the same person across cameras; more discriminating features are therefore needed. In particular, part-based feature extraction methods, which learn local fine-grained features of different human body parts from detected persons, have proved effective for person Re-ID. To further improve the part-aligned spatial feature approach, this paper proposes an improved part-aligned feature (IPAF) deep learning framework that better characterizes a person's complete information, with three highlights: part alignment, finer part segmentation, and a better learning-network backbone. Our proposed solution has been trained and tested on the two most comprehensive Re-ID datasets, with performance comparable to reported state-of-the-art solutions: on Market1501 (DukeMTMC-reID), it achieves competitive results with mAP of 85.96% (84.70%) and CMC-1 of 94.30% (89.84%), respectively.
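To make the part-based idea concrete, the following is a minimal NumPy sketch of the simplest form of part segmentation: splitting a convolutional feature map into horizontal stripes and pooling each into a per-part descriptor. This is only an illustration of the general technique the abstract refers to, not the authors' IPAF implementation (which additionally performs part alignment and finer segmentation); the function name and stripe count are hypothetical choices for the example.

```python
import numpy as np

def part_striped_descriptor(feature_map: np.ndarray, num_parts: int = 6) -> np.ndarray:
    """Split a (C, H, W) feature map into `num_parts` horizontal stripes,
    average-pool each stripe into a C-dim part vector, and concatenate.

    Illustrative sketch of plain horizontal striping; part-aligned methods
    such as IPAF first align body parts before pooling.
    """
    c, h, w = feature_map.shape
    # Stripe boundaries along the height axis, e.g. [0, 4, 8, ...] for h=24.
    bounds = np.linspace(0, h, num_parts + 1).astype(int)
    parts = [
        feature_map[:, bounds[i]:bounds[i + 1], :].mean(axis=(1, 2))
        for i in range(num_parts)
    ]
    # Final descriptor stacks all local part features: shape (num_parts * C,).
    return np.concatenate(parts)

# Toy example: a 256-channel 24x8 feature map split into 6 body stripes.
fmap = np.random.rand(256, 24, 8)
desc = part_striped_descriptor(fmap, num_parts=6)
print(desc.shape)  # (1536,)
```

At query time, descriptors like `desc` would be compared across cameras (e.g. by cosine distance) to rank gallery images, which is what the reported mAP and CMC-1 figures evaluate.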