Cross-modal person re-identification using fused local effective features and multi-scale features

Lihui Lu, Rifan Wang, Zhencong Chen, Jiaqi Chen
{"title":"Cross-modal person re-identification using fused local effective features and multi-scale features","authors":"Lihui Lu, Rifan Wang, Zhencong Chen, Jiaqi Chen","doi":"10.1177/01423312241266275","DOIUrl":null,"url":null,"abstract":"The main research objective of cross-modal person re-identification is to retrieve matching images of the same person from image repositories in both modalities, given visible light or infrared images of individuals. Due to the significant modality gap between pedestrian images, the task of person re-identification faces considerable challenges. To address this issue, a method is proposed that utilizes the fusion of local effective features and multi-scale features. First, images are transformed into pseudo-infrared images through data augmentation and then a dual-stream network is designed using ResNet50_IBN for feature extraction. Subsequently, pedestrian features extracted from different layers are fused at multiple scales to alleviate feature loss caused during the convolution process. Finally, the model is supervised using global features and local effective features to address issues related to cluttered backgrounds and varying pedestrian positions in images. 
The proposed method is experimentally validated on the current mainstream cross-modal person re-identification datasets SYSU-MM01 and RegDB, demonstrating improvements in Rank-1 and mAP metrics compared to current state-of-the-art algorithms.","PeriodicalId":507087,"journal":{"name":"Transactions of the Institute of Measurement and Control","volume":"31 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transactions of the Institute of Measurement and Control","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/01423312241266275","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The main objective of cross-modal person re-identification is to retrieve matching images of the same person from image repositories in both modalities, given visible-light or infrared images of individuals. Because of the significant modality gap between pedestrian images, the task faces considerable challenges. To address this, a method is proposed that fuses local effective features and multi-scale features. First, visible images are transformed into pseudo-infrared images through data augmentation, and a dual-stream network built on ResNet50_IBN is designed for feature extraction. Next, pedestrian features extracted from different layers are fused at multiple scales to alleviate the feature loss incurred during the convolution process. Finally, the model is supervised with both global features and local effective features to handle cluttered backgrounds and varying pedestrian positions in images. The proposed method is experimentally validated on the mainstream cross-modal person re-identification datasets SYSU-MM01 and RegDB, demonstrating improvements in Rank-1 and mAP metrics over current state-of-the-art algorithms.
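The abstract does not specify how the pseudo-infrared augmentation is implemented. A common approach in this line of work is to collapse the three colour channels of a visible-light image into a single intensity and replicate it; the sketch below assumes a standard luminance weighting, and the function name `to_pseudo_infrared` is hypothetical, not the authors' code.

```python
import numpy as np

def to_pseudo_infrared(img: np.ndarray) -> np.ndarray:
    """Turn an H x W x 3 visible-light image into a pseudo-infrared one
    by collapsing the colour channels to a single intensity and
    replicating it, so the image keeps its 3-channel layout."""
    # Standard BT.601 luminance weights; the paper may use another scheme.
    grey = img[..., 0] * 0.299 + img[..., 1] * 0.587 + img[..., 2] * 0.114
    return np.repeat(grey[..., None], 3, axis=-1)

rgb = np.random.rand(256, 128, 3)      # dummy pedestrian image
pseudo_ir = to_pseudo_infrared(rgb)
print(pseudo_ir.shape)                 # (256, 128, 3)
```

Because the output keeps three identical channels, the same dual-stream backbone can consume both visible and pseudo-infrared inputs without architectural changes.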
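The multi-scale fusion and local-feature steps are likewise described only at a high level. One plausible reading, sketched below with PyTorch, is to upsample feature maps from different backbone stages to a common size, concatenate them, and then split the fused map into horizontal stripes that are pooled into per-region local descriptors. The stage shapes, stripe count, and function names here are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def fuse_multi_scale(feats):
    """Upsample feature maps from different network stages to the
    largest spatial size and concatenate along the channel axis."""
    h, w = feats[0].shape[-2:]
    up = [F.interpolate(f, size=(h, w), mode="bilinear", align_corners=False)
          for f in feats]
    return torch.cat(up, dim=1)

def part_features(fmap, n_parts=6):
    """Split a fused map into horizontal stripes and average-pool each,
    yielding one local descriptor per body region."""
    stripes = torch.chunk(fmap, n_parts, dim=2)      # split along height
    return [s.mean(dim=(2, 3)) for s in stripes]     # (B, C) per stripe

# Dummy stage outputs with ResNet-like shapes (batch size 2).
f3 = torch.randn(2, 1024, 24, 8)   # stage-3 feature map
f4 = torch.randn(2, 2048, 12, 4)   # stage-4 feature map
fused = fuse_multi_scale([f3, f4])       # (2, 3072, 24, 8)
locals_ = part_features(fused)           # 6 tensors of shape (2, 3072)
print(fused.shape, locals_[0].shape)
```

Supervising both the global descriptor (pooled over the whole map) and these stripe descriptors is a common way to make the model robust to background clutter and shifted pedestrian positions.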