Qingwei Tang, Pu Yan, Jie Chen, Hui Shao, Fuyu Wang, G. Wang
AI Communications, vol. 27, no. 1, pp. 207–223. Published 2022-08-24 (Journal Article). DOI: https://doi.org/10.3233/aic-210258
Person re-identification based on multi-scale global feature and weight-driven part feature
Person re-identification (ReID) is the task of identifying pedestrians of interest across multiple surveillance camera views. Recent ReID methods have shown that both global features and part features of a pedestrian are highly effective, but many models lack a dedicated design for exploiting the two jointly. We propose a new model that uses global features more rationally and extracts finer-grained part features. Specifically, it captures global features with a multi-scale attention global feature extraction module, and we design a new context-based adaptive part feature extraction module that accounts for the continuity between different body parts of a pedestrian. We also add enhancement modules that further improve performance. Experiments show that our model achieves competitive results on the Market-1501, DukeMTMC-reID, and MSMT17 datasets, and ablation studies demonstrate the effectiveness of each module. The code is available at: https://github.com/davidtqw/Person-Re-Identification.
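The abstract's two branches can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the pyramid of vertical pooling regions standing in for "multi-scale" extraction, the six-stripe split, and the activation-norm softmax standing in for the context-based adaptive weighting are all assumptions made for illustration only.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - x.max())
    return e / e.sum()

def multiscale_global_feature(fmap, scales=(1, 2, 4)):
    """Sketch of a multi-scale global branch: average-pool the C x H x W
    feature map over vertical regions at several scales and concatenate
    the pooled vectors (attention omitted for brevity)."""
    C, H, W = fmap.shape
    feats = []
    for s in scales:
        step = H // s
        for i in range(s):
            region = fmap[:, i * step:(i + 1) * step, :]
            feats.append(region.mean(axis=(1, 2)))
    return np.concatenate(feats)

def weight_driven_part_feature(fmap, n_parts=6):
    """Sketch of a weight-driven part branch: split the map into
    horizontal stripes, weight each stripe by a softmax over its
    activation norm (a hypothetical stand-in for the paper's
    context-based adaptive weighting), and sum."""
    C, H, W = fmap.shape
    step = H // n_parts
    stripes = np.stack([
        fmap[:, i * step:(i + 1) * step, :].mean(axis=(1, 2))
        for i in range(n_parts)
    ])                                           # (n_parts, C)
    weights = softmax(np.linalg.norm(stripes, axis=1))
    return (weights[:, None] * stripes).sum(axis=0)  # (C,)

# Toy C x H x W feature map, e.g. as produced by a CNN backbone.
fmap = np.random.rand(8, 24, 12)
g = multiscale_global_feature(fmap)   # 7 regions x 8 channels -> (56,)
p = weight_driven_part_feature(fmap)  # one weighted part vector -> (8,)
```

The two vectors would then be concatenated (or supervised separately) as the final pedestrian descriptor; the real model replaces the fixed pooling with learned attention.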
About the journal:
AI Communications is a journal on artificial intelligence (AI) with a close relationship to EurAI (European Association for Artificial Intelligence, formerly ECCAI). It covers the whole AI community: scientific institutions as well as commercial and industrial companies.
AI Communications aims to enhance contacts and information exchange between AI researchers and developers, and to provide supranational information to those concerned with AI and advanced information processing. It publishes refereed articles on scientific and technical AI methods, provided they are of sufficient interest to a broad readership of both scientific and practical background. In addition, it contains high-level background material, both at the technical level and at the level of opinions, policies and news.