{"title":"Multi-Feature Extraction and Fusion Method for Pedestrian Re-identification","authors":"Xu Zhang, Laxmisha Rai","doi":"10.1109/ICEICT51264.2020.9334218","DOIUrl":null,"url":null,"abstract":"Pedestrian re-identification is to identify the target interested from pedestrian images taken by multiple cameras. Recently, the ReID (Person re-identification) algorithm has shown that the local features of pedestrians are used to describe various parts of the body, the global features of pedestrians are used to represent the overall information, and the local features of relationships are used to make certain connections between local features to form more discriminative features. Although these algorithms have a certain effect on pedestrian re-identification, their recognition accuracy is still not satisfactory. To solve these problems, we propose a novel multi-feature extraction fusion model (MFEFM). It can extract three different features of pedestrian images at the same time and merge them together to form a more discriminative feature. First, use ResNet-50 as the infrastructure to extract basic features. Then, global maximum pooling (GMP) is used to extract local features of pedestrian images, global average pooling (GAP) is used to extract global features, and pose-estimator is used to extract key point features in parallel. 
Finally, we use the relationship network to form connected local features and key point features, and then connect these three features together.","PeriodicalId":124337,"journal":{"name":"2020 IEEE 3rd International Conference on Electronic Information and Communication Technology (ICEICT)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 3rd International Conference on Electronic Information and Communication Technology (ICEICT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICEICT51264.2020.9334218","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Pedestrian re-identification (ReID) aims to identify a target of interest in pedestrian images captured by multiple cameras. Recent ReID algorithms describe the various parts of the body with local features, represent the overall appearance with global features, and build connections between local features with relational features to form more discriminative descriptors. Although these algorithms are effective to a degree, their recognition accuracy is still unsatisfactory. To address this, we propose a novel multi-feature extraction and fusion model (MFEFM). It extracts three different features from pedestrian images simultaneously and merges them into a single, more discriminative feature. First, ResNet-50 serves as the backbone to extract basic features. Then, in parallel, global max pooling (GMP) extracts local features of the pedestrian images, global average pooling (GAP) extracts global features, and a pose estimator extracts key-point features. Finally, a relation network connects the local features with the key-point features, and the three features are concatenated together.
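The three-branch pipeline above can be sketched with plain NumPy. This is a minimal illustration, not the paper's implementation: the stripe count, spatial size, number of key points, and the "relation" step (here just a concatenation with a pooled key-point summary, where the paper uses a learned relation network and a real pose estimator) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gap(fmap):
    """Global average pooling over the spatial axes."""
    return fmap.mean(axis=(1, 2))

def gmp(fmap):
    """Global max pooling over the spatial axes."""
    return fmap.max(axis=(1, 2))

# Stand-in for a ResNet-50 last-stage feature map (2048 channels is
# standard for ResNet-50; the 24x8 spatial size is an assumption).
C, H, W = 2048, 24, 8
fmap = rng.random((C, H, W))

# Global branch: one GAP vector for the whole image.
global_feat = gap(fmap)                                   # shape (2048,)

# Local branch: split the map into horizontal stripes and GMP each
# (6 stripes is a hypothetical, PCB-style choice).
stripes = np.array_split(fmap, 6, axis=1)
local_feats = [gmp(s) for s in stripes]                   # 6 x (2048,)

# Key-point branch stand-in: the paper pools features at pose-estimator
# joints; here we fake per-joint regions as fixed horizontal bands.
keypoint_feats = [gmp(fmap[:, i:i + 4, :]) for i in range(0, H, 4)]

# Relation-step stand-in: tie each local stripe to a pooled key-point
# summary before fusion (the paper learns this with a relation network).
kp_summary = np.mean(keypoint_feats, axis=0)              # (2048,)
related = [np.concatenate([f, kp_summary]) for f in local_feats]

# Final descriptor: fuse all three branches by concatenation.
descriptor = np.concatenate([global_feat] + related)
print(descriptor.shape)  # (2048 + 6 * 4096,) = (26624,)
```

The key structural point is that the three branches run in parallel off one shared backbone feature map, so the fused descriptor costs only one backbone forward pass.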