Adaptive Occlusion-Aware Network for Occluded Person Re-Identification
Xiangzeng Liu; Jianfeng Guo; Hao Chen; Qiguang Miao; Yue Xi; Ruyi Liu
IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 5, pp. 5067-5077. Published online Dec. 31, 2024. DOI: 10.1109/TCSVT.2024.3524555
Occluded person re-identification (ReID) is a challenging task because some of the essential features are obscured by obstacles or other pedestrians. Multi-granularity local feature extraction and recognition can effectively improve ReID accuracy under occlusion. However, manual segmentation of local features can cause feature misalignment, and feature alignment based on pose estimation often ignores non-body details (e.g., handbags and backpacks) while increasing model complexity. To address these challenges, we propose a novel Adaptive Occlusion-Aware Network (AOANet), which consists of two main modules: the Adaptive Position Extractor (APE) and the Occlusion Awareness Module (OAM). To adaptively extract the distinguishing features of body parts, APE optimizes the representation of multi-granularity features under the guidance of an attention mechanism and keypoint features. To further perceive occluded regions, OAM adaptively computes occlusion weights for body parts. These weights highlight the non-occluded parts and suppress the occluded ones, which in turn improves accuracy in occluded situations. Extensive experiments on the MSMT17, DukeMTMC-reID, Market-1501, Occluded-Duke, and Occluded-ReID datasets confirm the advantages of our method, and the comparative results show that it outperforms comparable methods. On the Occluded-Duke dataset in particular, our method achieves 70.6% mAP and 81.2% Rank-1 accuracy.
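The abstract does not include an implementation, so the following is only a minimal PyTorch-style sketch of the occlusion-weighting idea described for OAM: score each body-part feature, normalize the scores into weights, and pool the parts so visible regions dominate the final descriptor. The module name OcclusionWeighting, the scorer architecture, and all tensor shapes are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class OcclusionWeighting(nn.Module):
    """Illustrative sketch (hypothetical, not the paper's OAM): learn one
    visibility score per body part, softmax the scores into occlusion
    weights, and reweight the parts before pooling."""

    def __init__(self, feat_dim: int):
        super().__init__()
        # Hypothetical scorer: a small MLP producing one scalar per part.
        self.scorer = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 4),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim // 4, 1),
        )

    def forward(self, part_feats: torch.Tensor) -> torch.Tensor:
        # part_feats: (batch, num_parts, feat_dim)
        scores = self.scorer(part_feats).squeeze(-1)   # (batch, num_parts)
        weights = torch.softmax(scores, dim=1)         # occlusion weights
        # Weighted sum: low-weight (occluded) parts are suppressed,
        # high-weight (visible) parts dominate the pooled descriptor.
        return (weights.unsqueeze(-1) * part_feats).sum(dim=1)

# Usage with assumed sizes: 8 images, 6 body parts, 768-d part features.
feats = torch.randn(8, 6, 768)
pooled = OcclusionWeighting(768)(feats)  # -> (8, 768)
```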
Journal introduction:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.