Long Short-Term Knowledge Decomposition and Consolidation for Lifelong Person Re-Identification
Kunlun Xu, Zichen Liu, Xu Zou, Yuxin Peng, Jiahuan Zhou
IEEE Transactions on Pattern Analysis and Machine Intelligence (JCR Q1, Computer Science, Artificial Intelligence; Impact Factor 18.6)
DOI: 10.1109/tpami.2025.3572468 · Published 2025-05-22 · Journal Article
Lifelong person re-identification (LReID) aims to learn incrementally from streaming data sources, and therefore suffers from catastrophic forgetting. In this paper, we investigate the exemplar-free LReID setting, where no exemplars from previous steps are available when training on a new step. Existing exemplar-free LReID methods primarily adopt knowledge distillation to transfer knowledge from an old model to a new one without selection, inevitably introducing erroneous and detrimental information that hinders the learning of new knowledge. Furthermore, because the old data are absent, not all critical knowledge can be transferred, leading to the permanent loss of undistilled knowledge. To address these limitations, we propose a novel exemplar-free LReID method named Long Short-Term Knowledge Decomposition and Consolidation (LSTKC++). Specifically, an old-knowledge rectification mechanism is developed to rectify the old model's predictions based on new data annotations, ensuring correct knowledge transfer. In addition, a long-term knowledge consolidation strategy is designed: it first estimates the degree of old-knowledge forgetting from the output difference between the old and new models, and a knowledge-guided parameter fusion strategy then balances new and old knowledge, improving long-term knowledge retention. Building on these designs, and considering that LReID models tend to be biased toward the most recently seen domains, the fusion weights generated by this process often lead to sub-optimal knowledge balancing. To address this, we further decompose the single old model into two parts: a long-term old model containing multi-domain knowledge and a short-term model focusing on the latest short-term old knowledge. The incoming new data are then exploited as an unbiased reference to adjust the old models' fusion weights, achieving backward optimization. 
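The consolidation idea described above — estimating how much old knowledge has been forgotten from the output discrepancy between the old and new models, then fusing their parameters accordingly — can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: the function names, the mean-absolute-difference discrepancy measure, and the exponential weighting are all assumptions.

```python
import numpy as np

def estimate_forgetting(old_outputs: np.ndarray, new_outputs: np.ndarray) -> float:
    """Proxy for old-knowledge forgetting: mean absolute discrepancy between
    the old and new models' outputs on the same inputs."""
    return float(np.mean(np.abs(old_outputs - new_outputs)))

def fuse_parameters(old_params: dict, new_params: dict,
                    forgetting: float, scale: float = 1.0) -> dict:
    """Knowledge-guided fusion: heavier measured forgetting shifts the
    interpolation weight toward the old model's parameters, so long-term
    knowledge is retained; light forgetting favors the new model."""
    alpha = 1.0 - np.exp(-scale * forgetting)  # alpha in [0, 1)
    return {name: alpha * old_params[name] + (1.0 - alpha) * new_params[name]
            for name in old_params}
```

With zero measured forgetting, alpha is 0 and the fused model equals the new model; as the output discrepancy grows, the fused parameters move toward the old model.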
Furthermore, an extended complementary knowledge rectification mechanism is developed to mine and retain the correct knowledge in the decomposed models. Extensive experimental results demonstrate that LSTKC++ outperforms state-of-the-art methods by large margins. The code is available at https://github.com/zhoujiahuan1991/LSTKC.
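The rectification idea — using the new data's identity annotations to filter out erroneous old-model knowledge before distilling it into the new model — could look roughly like the sketch below. The 0.5 similarity threshold and all names are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def rectified_distillation_mask(old_similarity: np.ndarray,
                                labels: np.ndarray) -> np.ndarray:
    """Keep a sample pair for distillation only when the old model's similarity
    agrees with the new data's identity labels: pairs sharing an identity
    should score high, pairs with different identities should score low."""
    same_id = labels[:, None] == labels[None, :]      # ground-truth pair matrix
    return np.where(same_id, old_similarity > 0.5, old_similarity < 0.5)
```

Entries where the old model contradicts the ground-truth identities are masked out, so only verified old knowledge is transferred to the new model.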
Journal introduction:
The IEEE Transactions on Pattern Analysis and Machine Intelligence publishes articles on all traditional areas of computer vision and image understanding, all traditional areas of pattern analysis and recognition, and selected areas of machine intelligence, with a particular emphasis on machine learning for pattern analysis. Areas such as techniques for visual search, document and handwriting analysis, medical image analysis, video and image sequence analysis, content-based retrieval of image and video, face and gesture recognition and relevant specialized hardware and/or software architectures are also covered.