Person re-identification based on Multi-feature Fusion to Enhance Pedestrian Features

Yushan Chen, Guofeng Zou, Zhiwei Huang, Guizhen Chen, Bin Hu

Displays, Volume 91, Article 103187 (published 2025-08-23). DOI: 10.1016/j.displa.2025.103187
Person re-identification (person re-ID), the task of matching the same person across surveillance cameras, is an important component of intelligent analysis of surveillance video and plays a significant role in maintaining public safety. Its key challenge is the large intra-class variation among images of the same person and the small inter-class variation between different persons. To address this problem, we propose a Person Re-identification Network Based on Multi-feature Fusion to Enhance Pedestrian Features (MFEFNet). Through global, attribute, and local branches, the network exploits the complementary information among different levels of pedestrian features, thereby improving re-ID accuracy. Firstly, the network leverages the stability of attribute features to reduce intra-class variation and the sensitivity of local features to enlarge inter-class differences. Secondly, a self-attention fusion module is proposed to counteract the small receptive fields caused by residual structures, strengthening global feature extraction. Thirdly, an attribute area weight module is proposed to handle the fact that different pedestrian attributes correspond to different body regions; by localizing the regions relevant to each attribute, it reduces information redundancy. Finally, the method achieves 95.63% Rank-1 accuracy and 88.29% mAP on the Market-1501 dataset, 90.13% Rank-1 accuracy and 79.85% mAP on the DukeMTMC-reID dataset, and 77.21% Rank-1 accuracy and 60.34% mAP on the Occluded-Market dataset.
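The Rank-1 and mAP figures above follow the standard re-ID evaluation protocol. The abstract does not include implementation details, so the following is a minimal sketch of how these two metrics are commonly computed from a query-gallery distance matrix; it is not the authors' code, and refinements used on Market-1501 (such as excluding same-camera gallery matches) are omitted for brevity.

```python
import numpy as np

def evaluate(dist, q_ids, g_ids):
    """Compute Rank-1 accuracy and mAP from a query-gallery distance matrix.

    dist  : (num_queries, num_gallery) pairwise distances
    q_ids : (num_queries,) identity labels of the queries
    g_ids : (num_gallery,) identity labels of the gallery images
    """
    rank1_hits, aps = [], []
    for i, qid in enumerate(q_ids):
        order = np.argsort(dist[i])       # gallery indices, nearest first
        matches = g_ids[order] == qid     # relevance of each ranked result
        if not matches.any():
            continue                      # query has no ground truth in gallery
        rank1_hits.append(matches[0])     # is the top-ranked result correct?
        # average precision: mean of precision evaluated at each correct rank
        hit_ranks = np.where(matches)[0]
        precisions = (np.arange(len(hit_ranks)) + 1) / (hit_ranks + 1)
        aps.append(precisions.mean())
    return float(np.mean(rank1_hits)), float(np.mean(aps))
```

Rank-1 measures only whether the single nearest gallery image has the correct identity, while mAP rewards ranking all correct matches near the top, which is why the two numbers can diverge, as in the results reported above.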
Journal introduction:
Displays is an international journal covering the research and development of display technology, the effective presentation and perception of information, and applications and systems including the display-human interface.
Technical papers on practical developments in display technology provide an effective channel for promoting understanding and cross-fertilization across the diverse disciplines of the displays community. Original research papers that solve ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technologists and human-factors engineers new to the field, will also occasionally be featured.