Improved Res2Net Model for Person re-Identification

Zongjing Cao, H. Lee
DOI: 10.1109/CogMI48466.2019.00041
Published in: 2019 IEEE First International Conference on Cognitive Machine Intelligence (CogMI)
Publication date: 2019-10-08
Citations: 2

Abstract

Person re-identification has become a very popular research topic in the computer vision community owing to its numerous applications and growing importance in visual surveillance. Person re-identification remains challenging due to occlusion, illumination and significant intra-class variations across different cameras. In this paper, we propose a multi-task network base on an improved Res2Net model that simultaneously computes the identification loss and verification loss of two pedestrian images. Given a pair of pedestrian images, the system predicts the identities of the two input images and whether they belong to the same identity. In order to obtain deeper feature information of pedestrians, we propose to use the latest Res2Net model for feature extraction of each input image. Experiments on several large-scale person re-identification benchmark datasets demonstrate the accuracy of our approach. For example, rank-1 accuracies are 83.18% (+1.38) and 93.14% (+0.84) for the DukeMTMC and Market-1501 datasets, respectively. The proposed method shows encouraging improvements compared with state-of-the-art methods.
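The multi-task design described in the abstract — a pair of images goes in, and the network jointly predicts each image's identity (identification loss) and whether the pair shares an identity (verification loss) — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tiny stand-in CNN backbone, the feature dimension, and the squared-difference verification head are all assumptions; the paper uses an improved Res2Net as the backbone.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskReID(nn.Module):
    """Sketch of a joint identification + verification ReID network.

    The backbone below is a placeholder CNN for illustration only; the
    paper's method uses an improved Res2Net for feature extraction.
    """
    def __init__(self, num_identities, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Identification branch: classify each image into one of the known IDs.
        self.id_head = nn.Linear(feat_dim, num_identities)
        # Verification branch: decide whether the two images share an identity,
        # here (an assumption) from the squared difference of their embeddings.
        self.verif_head = nn.Linear(feat_dim, 2)

    def forward(self, img_a, img_b):
        fa, fb = self.backbone(img_a), self.backbone(img_b)
        return (self.id_head(fa),
                self.id_head(fb),
                self.verif_head((fa - fb) ** 2))

def multitask_loss(outputs, labels_a, labels_b, same_id):
    """Identification loss for both images plus verification loss for the pair."""
    id_a, id_b, verif = outputs
    return (F.cross_entropy(id_a, labels_a)
            + F.cross_entropy(id_b, labels_b)
            + F.cross_entropy(verif, same_id))
```

Training then amounts to sampling image pairs with their identity labels and a binary same/different label, and minimizing the summed loss.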
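The abstract's claim that Res2Net yields "deeper feature information" refers to Res2Net's multi-scale residual unit (Gao et al.), in which the channels within a single block are split into groups connected hierarchically. A minimal sketch of that idea, under assumed sizes and without the paper's specific improvements:

```python
import torch
import torch.nn as nn

class Res2NetBlock(nn.Module):
    """Minimal sketch of a Res2Net-style multi-scale residual unit.

    Channels are split into `scales` groups; each group after the first is
    convolved together with the previous group's output, so one block mixes
    several effective receptive-field sizes.
    """
    def __init__(self, channels, scales=4):
        super().__init__()
        assert channels % scales == 0
        self.scales = scales
        width = channels // scales
        self.convs = nn.ModuleList(
            nn.Conv2d(width, width, 3, padding=1) for _ in range(scales - 1)
        )

    def forward(self, x):
        splits = torch.chunk(x, self.scales, dim=1)
        out = [splits[0]]  # the first split passes through unchanged
        prev = None
        for i, conv in enumerate(self.convs):
            # feed each split together with the previous group's output
            inp = splits[i + 1] if prev is None else splits[i + 1] + prev
            prev = torch.relu(conv(inp))
            out.append(prev)
        return torch.cat(out, dim=1) + x  # residual connection
```

Stacking such blocks in place of plain bottlenecks is what gives the backbone its richer multi-scale features; the "improved" variant in the paper is not reproduced here.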