{"title":"交叉擦除增强网络对闭塞者的再识别","authors":"Yunzuo Zhang, Yuehui Yang, Weili Kang, Jiawen Zhen","doi":"10.1016/j.patrec.2025.04.015","DOIUrl":null,"url":null,"abstract":"<div><div>Occluded person re-identification is one of the most challenging tasks in safety monitoring. Most existing methods for occluded person re-identification rely on external auxiliary models, which cannot handle non-target pedestrian occlusions and ignore the contextual information of pedestrian images. To address the above issues, we propose a cross-erasure enhanced network (CENet) for occluded person re-identification. To be specific, we propose a feature map cross-erasure module (FMCM) that can simulate obstacle occlusion and non-target pedestrian occlusion in real scenes by erasing feature maps. Meanwhile, we design an occluded-aware mixed attention module (OMAM), which empowers the network to efficiently capture features from non-occluded areas. Finally, we propose a full-view enhancement module (FEM) to extract discriminative features of pedestrian images by parsing the contextual information of the images. Comprehensive experimental outcomes on both occluded and holistic datasets affirm the effectiveness of our method.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"193 ","pages":"Pages 108-114"},"PeriodicalIF":3.9000,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cross-erasure enhanced network for occluded person re-identification\",\"authors\":\"Yunzuo Zhang, Yuehui Yang, Weili Kang, Jiawen Zhen\",\"doi\":\"10.1016/j.patrec.2025.04.015\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Occluded person re-identification is one of the most challenging tasks in safety monitoring. 
Most existing methods for occluded person re-identification rely on external auxiliary models, which cannot handle non-target pedestrian occlusions and ignore the contextual information of pedestrian images. To address the above issues, we propose a cross-erasure enhanced network (CENet) for occluded person re-identification. To be specific, we propose a feature map cross-erasure module (FMCM) that can simulate obstacle occlusion and non-target pedestrian occlusion in real scenes by erasing feature maps. Meanwhile, we design an occluded-aware mixed attention module (OMAM), which empowers the network to efficiently capture features from non-occluded areas. Finally, we propose a full-view enhancement module (FEM) to extract discriminative features of pedestrian images by parsing the contextual information of the images. Comprehensive experimental outcomes on both occluded and holistic datasets affirm the effectiveness of our method.</div></div>\",\"PeriodicalId\":54638,\"journal\":{\"name\":\"Pattern Recognition Letters\",\"volume\":\"193 \",\"pages\":\"Pages 108-114\"},\"PeriodicalIF\":3.9000,\"publicationDate\":\"2025-04-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Pattern Recognition Letters\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0167865525001485\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition 
Letters","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167865525001485","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Cross-erasure enhanced network for occluded person re-identification
Occluded person re-identification is one of the most challenging tasks in safety monitoring. Most existing methods for occluded person re-identification rely on external auxiliary models, which cannot handle non-target pedestrian occlusions and ignore the contextual information of pedestrian images. To address the above issues, we propose a cross-erasure enhanced network (CENet) for occluded person re-identification. To be specific, we propose a feature map cross-erasure module (FMCM) that can simulate obstacle occlusion and non-target pedestrian occlusion in real scenes by erasing feature maps. Meanwhile, we design an occluded-aware mixed attention module (OMAM), which empowers the network to efficiently capture features from non-occluded areas. Finally, we propose a full-view enhancement module (FEM) to extract discriminative features of pedestrian images by parsing the contextual information of the images. Comprehensive experimental outcomes on both occluded and holistic datasets affirm the effectiveness of our method.
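The abstract states that FMCM simulates obstacle and non-target pedestrian occlusion by erasing regions of the feature maps rather than the input images. The paper's exact erasure scheme is not given here, so the following is only a minimal illustrative sketch of the general idea, assuming a simple rectangular zero-erasure on a C×H×W feature map (the function name `erase_region` and all shapes are hypothetical, not from the paper):

```python
import numpy as np

def erase_region(feat, top, left, h, w, value=0.0):
    """Zero out a rectangular region of a C x H x W feature map,
    loosely simulating an obstacle occluding part of the pedestrian.

    Illustrative sketch only; the paper's FMCM performs a more
    elaborate cross-erasure between feature maps.
    """
    out = feat.copy()
    out[:, top:top + h, left:left + w] = value
    return out

# Example: a 4 x 8 x 4 feature map with its lower half erased,
# mimicking lower-body occlusion (e.g., by a barrier or vehicle).
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 4))
occluded = erase_region(feat, top=4, left=0, h=4, w=4)
```

Training the re-identification backbone on such erased feature maps forces it to rely on the surviving (non-occluded) regions, which is the property the OMAM attention module then exploits.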
About the journal:
Pattern Recognition Letters aims at rapid publication of concise articles of broad interest in pattern recognition.
Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition.