{"title":"Multi-view based coupled dictionary learning for person re-identification","authors":"Fei Ma, Qinglong Liu, Xiaoke Zhu, Xiaoyuan Jing","doi":"10.1109/SPAC.2017.8304357","DOIUrl":null,"url":null,"abstract":"Person re-identification is a hot topic, which can be applied in pedestrian tracking and intelligent monitoring. However, person reidentification is challenging due to the large variations of visual appearance caused by view angle, lighting, background clutter and occlusion. In practice, there exist large differences among different types of features and among different cameras. To improve the favorable representation of different features, we propose a multi-view based coupled dictionary pair learning approach, which can learn the color features and texture features respectively. The color dictionary pair aims to learn the color feature of each person from different cameras. The texture dictionary pair seeks to learn the texture feature of person from both cameras. The learned coupled dictionary pair can demonstrate the intrinsic relationship of different cameras and different types of features. Experimental results on two public pedestrian datasets demonstrate that our proposed approach can perform better than the other competing methods.","PeriodicalId":161647,"journal":{"name":"2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SPAC.2017.8304357","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Person re-identification, which aims to match pedestrians across different camera views, has drawn increasing attention for applications such as pedestrian tracking and intelligent monitoring. However, it remains challenging due to large variations in visual appearance caused by view angle, lighting, background clutter and occlusion. In practice, there are also large differences among different types of features and among different cameras. To obtain a more favorable representation of different features, we propose a multi-view based coupled dictionary pair learning approach, which learns color features and texture features separately. The color dictionary pair learns the color feature of each person across the different cameras, and the texture dictionary pair learns the texture feature of each person across both cameras. The learned coupled dictionary pairs capture the intrinsic relationship between different cameras and different types of features. Experimental results on two public pedestrian datasets demonstrate that the proposed approach outperforms the competing methods.
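The coupled dictionary pair idea can be illustrated with a small sketch. Below is a minimal, hypothetical Python/NumPy implementation that learns one dictionary per camera view while forcing both views to share a common sparse code. The paper's actual objective, regularization, solver, and its separate treatment of color and texture features are not given here, so the function names, the ISTA sparse-coding step, and all parameters are illustrative assumptions rather than the authors' method.

```python
# Minimal sketch of coupled dictionary pair learning for two camera views.
# Assumption: the two views are coupled by a shared sparse code Z; this is
# not necessarily the exact formulation used in the paper.
import numpy as np

def ista(X, D, lam, n_iter=50):
    """Sparse coding via ISTA: min_Z ||X - D Z||_F^2 + lam * ||Z||_1."""
    L = np.linalg.norm(D, 2) ** 2 + 1e-8      # Lipschitz constant (squared spectral norm)
    Z = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ Z - X)              # half of the gradient of the smooth term
        Z = Z - grad / L                      # gradient step with step size 1/(2L)
        Z = np.sign(Z) * np.maximum(np.abs(Z) - lam / (2 * L), 0.0)  # soft threshold
    return Z

def update_dictionary(X, Z):
    """Least-squares dictionary update followed by unit-norm column scaling."""
    D = X @ Z.T @ np.linalg.pinv(Z @ Z.T + 1e-6 * np.eye(Z.shape[0]))
    return D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-8)

def coupled_dictionary_pair(X_a, X_b, n_atoms=64, lam=0.1, n_outer=20, seed=0):
    """Learn a dictionary pair (D_a, D_b) for camera views A and B that share
    one sparse code Z, so that corresponding persons get similar codes."""
    rng = np.random.default_rng(seed)
    D_a = rng.standard_normal((X_a.shape[0], n_atoms))
    D_b = rng.standard_normal((X_b.shape[0], n_atoms))
    for _ in range(n_outer):
        # Coding the stacked views against the stacked dictionaries enforces
        # a common code Z across the two cameras.
        X_stack = np.vstack([X_a, X_b])
        D_stack = np.vstack([D_a, D_b])
        Z = ista(X_stack, D_stack, lam)
        # Update each view's dictionary against the shared code.
        D_a = update_dictionary(X_a, Z)
        D_b = update_dictionary(X_b, Z)
    return D_a, D_b
```

Under these assumptions, the routine would be run once on color features and once on texture features from the two camera views, and matching would then compare persons by the similarity of their sparse codes.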