Person re-identification based on Multi-feature Fusion to Enhance Pedestrian Features

IF 3.4 · CAS Zone 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
Yushan Chen , Guofeng Zou , Zhiwei Huang , Guizhen Chen , Bin Hu
{"title":"Person re-identification based on Multi-feature Fusion to Enhance Pedestrian Features","authors":"Yushan Chen ,&nbsp;Guofeng Zou ,&nbsp;Zhiwei Huang ,&nbsp;Guizhen Chen ,&nbsp;Bin Hu","doi":"10.1016/j.displa.2025.103187","DOIUrl":null,"url":null,"abstract":"<div><div>Person re-identification (person re-ID) is one of the important contents of joint intelligent analysis based on surveillance video, which plays an important role in maintaining social public safety. The key challenge of person re-ID is to address the problem of large intra-class variations among the same person and small inter-class variations between different persons. To solve this problem, we propose a Person Re-identification Network Based on Multi-feature Fusion to Enhance Pedestrian Features (MFEFNet). This network, through global, attribute, and local branches, leverages the complementary information between different levels of pedestrian features, thereby enhancing the accuracy of person re-ID. Firstly, this network leverages the stability of attribute features to reduce intra-class variations and the sensitivity of local features to increase inter-class differences. Secondly, a self-attention fusion module is proposed to address the issue of small receptive fields caused by residual structures, thereby enhancing the ability to extract global features. Thirdly, an attribute area weight module is proposed to address the issue that different pedestrian attributes focus on different person regions. By localizing regions related to attributes, it reduces information redundancy. 
Finally, this method achieved 95.63% Rank-1 accuracy and 88.29% mAP on Market-1501 dataset, 90.13% Rank-1 accuracy and 79.85% mAP on DukeMTMC-reID dataset and 77.21% Rank-1 accuracy and 60.34% mAP on Occluded-Market dataset.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"91 ","pages":"Article 103187"},"PeriodicalIF":3.4000,"publicationDate":"2025-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Displays","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0141938225002240","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

Person re-identification (person re-ID) is a core task in intelligent analysis of surveillance video and plays an important role in maintaining public safety. The key challenge of person re-ID is to handle large intra-class variations among images of the same person and small inter-class variations between different persons. To address this, we propose MFEFNet, a person re-identification network based on multi-feature fusion to enhance pedestrian features. Through global, attribute, and local branches, the network exploits the complementary information among different levels of pedestrian features, thereby improving re-ID accuracy. First, the network leverages the stability of attribute features to reduce intra-class variations and the sensitivity of local features to enlarge inter-class differences. Second, a self-attention fusion module is proposed to counter the small receptive fields caused by residual structures, enhancing the network's ability to extract global features. Third, an attribute area weight module is proposed to handle the fact that different pedestrian attributes focus on different body regions; by localizing attribute-related regions, it reduces information redundancy. Finally, the method achieves 95.63% Rank-1 accuracy and 88.29% mAP on the Market-1501 dataset, 90.13% Rank-1 accuracy and 79.85% mAP on the DukeMTMC-reID dataset, and 77.21% Rank-1 accuracy and 60.34% mAP on the Occluded-Market dataset.
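The self-attention fusion module described above is said to widen the effective receptive field of residual CNN features by letting every spatial position attend to every other. The paper's exact design is not given here, so the following is only a minimal sketch of that general idea, assuming single-head scaled dot-product attention over a flattened feature map with a residual add; the function names, shapes, and projection matrices are illustrative, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_fusion(feat, w_q, w_k, w_v):
    """Single-head self-attention over the spatial positions of a flattened
    CNN feature map, fused with the input via a residual addition.

    feat:          (N, C) array: N spatial positions, C channels.
    w_q, w_k, w_v: (C, C) query/key/value projection matrices (hypothetical).
    """
    q, k, v = feat @ w_q, feat @ w_k, feat @ w_v
    # (N, N) affinities: every position attends to every other position,
    # giving a global receptive field regardless of the backbone's strides.
    attn = softmax(q @ k.T / np.sqrt(feat.shape[1]), axis=-1)
    # Residual fusion preserves the original local features.
    return feat + attn @ v

rng = np.random.default_rng(0)
N, C = 49, 64                       # e.g. a 7x7 feature map with 64 channels
feat = rng.standard_normal((N, C))
w_q, w_k, w_v = (rng.standard_normal((C, C)) * 0.1 for _ in range(3))
out = self_attention_fusion(feat, w_q, w_k, w_v)
print(out.shape)                    # (49, 64): same shape as the input
```

The residual add mirrors the motivation stated in the abstract: attention supplies global context while the skip connection keeps the original features intact.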
Source journal: Displays (Engineering & Technology – Electrical & Electronic Engineering)
CiteScore: 4.60
Self-citation rate: 25.60%
Annual publications: 138
Review time: 92 days
Journal description: Displays is the international journal covering the research and development of display technology, its effective presentation and perception of information, and applications and systems including the display-human interface. Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance effective presentation of information. Tutorial papers covering fundamentals, intended for display technology and human factors engineers new to the field, will also occasionally be featured.