{"title":"Point-Level and Set-Level Deep Representation Learning for Cross-Modal Person Re-identification","authors":"Jihui Hu, Pengfei Ye, Danyang Li, Lingyun Dong, Xiaopan Chen, Xiaoke Zhu","doi":"10.1109/ICCC56324.2022.10065694","DOIUrl":null,"url":null,"abstract":"In practice, significant modality differences usually exist between visible and infrared images, which makes visible-infrared Person Re-Identification (VI-ReID) a challenging research task. Due to the existing influence of pose variation, background changes, and occlusion, there are often outlier samples within the set of images from the same person. These outlier samples can adversely affect the process of learning the cross-modal matching model. The existing VI-ReID methods mainly focus on learning cross-modal feature representation by using image-level discriminant constraints, i.e., the distance between the truly-matching cross-modal images should be smaller than that between wrong-matching cross-modal images. However, most of these methods ignore the adverse influence caused by outliers. To solve the above problems, we proposed a Point-level and Set-level Deep Representation Learning (PSDRL) approach for VI-ReID in this paper. By using the set-level constraint in the process of deep representation learning, the discrepancy between visible and infrared modalities can be decreased, and the adverse effect of outliers can be weakened. By employing the image-level constraint, the discriminability of the obtained deep feature representations can be improved. Extensive experiments are conducted on the publicly available cross-modal Person Re-Identification datasets, including SYSU-MM01 and RegDB. Experimental results demonstrate the effectiveness of the proposed approach.","PeriodicalId":263098,"journal":{"name":"2022 IEEE 8th International Conference on Computer and Communications (ICCC)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 8th International Conference on Computer and Communications (ICCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCC56324.2022.10065694","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In practice, significant modality differences usually exist between visible and infrared images, which makes visible-infrared Person Re-Identification (VI-ReID) a challenging research task. Due to pose variation, background changes, and occlusion, there are often outlier samples within the set of images from the same person. These outlier samples can adversely affect the process of learning the cross-modal matching model. Existing VI-ReID methods mainly focus on learning cross-modal feature representations by using image-level discriminant constraints, i.e., the distance between truly matched cross-modal images should be smaller than that between wrongly matched cross-modal images. However, most of these methods ignore the adverse influence caused by outliers. To solve the above problems, we propose a Point-level and Set-level Deep Representation Learning (PSDRL) approach for VI-ReID in this paper. By using the set-level constraint in the process of deep representation learning, the discrepancy between the visible and infrared modalities can be decreased, and the adverse effect of outliers can be weakened. By employing the image-level constraint, the discriminability of the obtained deep feature representations can be improved. Extensive experiments are conducted on publicly available cross-modal Person Re-Identification datasets, including SYSU-MM01 and RegDB. Experimental results demonstrate the effectiveness of the proposed approach.
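
The abstract describes two kinds of constraints: an image-level (point-level) one, where a truly matched visible-infrared pair must be closer than a wrongly matched pair, and a set-level one, where the two modalities are compared through per-identity image sets so that individual outlier images carry less weight. The sketch below is purely illustrative and is not the authors' implementation; function names such as point_level_loss and set_level_loss, the margin value, and the use of set means as the set-level statistic are all assumptions made for the example.

```python
# Illustrative sketch of image-level (point-level) and set-level constraints
# for cross-modal matching, written in PyTorch. Not the PSDRL implementation;
# names, margin, and the set statistic (mean feature) are assumptions.
import torch
import torch.nn.functional as F


def point_level_loss(vis_feat, ir_feat_pos, ir_feat_neg, margin=0.3):
    """Image-level constraint: the distance between a truly matched
    visible/infrared pair should be smaller, by a margin, than the
    distance between a wrongly matched pair."""
    d_pos = F.pairwise_distance(vis_feat, ir_feat_pos)
    d_neg = F.pairwise_distance(vis_feat, ir_feat_neg)
    return F.relu(d_pos - d_neg + margin).mean()


def set_level_loss(vis_set, ir_set):
    """Set-level constraint: compare the two modalities through a set
    statistic (here, the mean feature of each identity's image set),
    so that a few outlier images have less influence than they would
    in purely pairwise comparisons."""
    vis_center = vis_set.mean(dim=0, keepdim=True)
    ir_center = ir_set.mean(dim=0, keepdim=True)
    return F.pairwise_distance(vis_center, ir_center).mean()


if __name__ == "__main__":
    # Random features stand in for CNN embeddings of one person's images.
    vis = torch.randn(8, 256)     # visible-modality features, same identity
    ir_pos = torch.randn(8, 256)  # infrared features, same identity
    ir_neg = torch.randn(8, 256)  # infrared features, different identity
    loss = point_level_loss(vis, ir_pos, ir_neg) + set_level_loss(vis, ir_pos)
    print(loss.item())
```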