Long Short-Term Knowledge Decomposition and Consolidation for Lifelong Person Re-Identification

IF 18.6 | CAS Tier 1 (Computer Science) | JCR Q1 (Computer Science, Artificial Intelligence)
Kunlun Xu,Zichen Liu,Xu Zou,Yuxin Peng,Jiahuan Zhou
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
DOI: 10.1109/tpami.2025.3572468
Published: 2025-05-22 (Journal Article)
Citations: 0

Abstract

Long Short-Term Knowledge Decomposition and Consolidation for Lifelong Person Re-Identification.
Lifelong person re-identification (LReID) aims to learn step by step from streaming data sources, a process that suffers from catastrophic forgetting. In this paper, we investigate the exemplar-free LReID setting, where no exemplars from previous steps are available when training on a new step. Existing exemplar-free LReID methods primarily adopt knowledge distillation to transfer knowledge from an old model to a new one without selection, inevitably introducing erroneous and detrimental information that hinders the learning of new knowledge. Furthermore, because the old data are absent, not all critical knowledge can be transferred, leading to the permanent loss of undistilled knowledge. To address these limitations, we propose a novel exemplar-free LReID method named Long Short-Term Knowledge Decomposition and Consolidation (LSTKC++). Specifically, an old-knowledge rectification mechanism is developed that corrects the old model's predictions based on new data annotations, ensuring correct knowledge transfer. In addition, a long-term knowledge consolidation strategy is designed, which first estimates the degree of old-knowledge forgetting by leveraging the output difference between the old and new models. A knowledge-guided parameter fusion strategy then balances new and old knowledge, improving long-term knowledge retention. Building on these designs, and considering that LReID models tend to be biased toward the most recently seen domains, the fusion weights generated by this process often lead to sub-optimal knowledge balancing. To address this, we further propose decomposing a single old model into two parts: a long-term old model containing multi-domain knowledge and a short-term model focusing on the latest short-term old knowledge. The incoming new data are then used as an unbiased reference to adjust the old models' fusion weights, achieving backward optimization. 
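The knowledge-guided parameter fusion described above can be illustrated with a minimal sketch: estimate forgetting from the output difference between the old and new models, then interpolate their parameters accordingly. The function names and the exponential mapping from forgetting to fusion weight are hypothetical simplifications for illustration, not the paper's exact formulation.

```python
import numpy as np

def estimate_forgetting(old_logits, new_logits):
    """Proxy for how much old knowledge the new model has forgotten:
    mean squared difference between old- and new-model outputs on the
    same inputs (a hypothetical measure, for illustration only)."""
    return float(np.mean((old_logits - new_logits) ** 2))

def fuse_parameters(old_params, new_params, forgetting, scale=1.0):
    """Knowledge-guided parameter fusion sketch: map the forgetting
    estimate to a weight alpha in [0, 1); more forgetting retains a
    larger share of the old parameters."""
    alpha = 1.0 - np.exp(-scale * forgetting)
    return {name: alpha * old_params[name] + (1.0 - alpha) * new_params[name]
            for name in old_params}
```

With zero measured forgetting, the fused model reduces to the new model; as the output divergence grows, the fused parameters shift toward the old model, which is the retention behavior the consolidation strategy aims for.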
Furthermore, an extended complementary knowledge rectification mechanism is developed to mine and retain the correct knowledge in the decomposed models. Extensive experimental results demonstrate that LSTKC++ outperforms state-of-the-art methods by large margins. The code is available at https://github.com/zhoujiahuan1991/LSTKC++.
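The old-knowledge rectification idea can be sketched in a classification-style toy form: distill the old model's soft predictions only where they agree with the new annotations, and fall back to the ground-truth label elsewhere. This rule and the function name are illustrative assumptions; the paper's mechanism operates on re-identification models and is more involved.

```python
import numpy as np

def rectified_targets(old_logits, new_labels):
    """Build distillation targets from the old model, rectified by the
    new annotations: where the old model's argmax contradicts the new
    label, replace its soft output with the one-hot ground truth."""
    targets = np.zeros_like(old_logits, dtype=float)
    preds = old_logits.argmax(axis=1)
    for i, (pred, label) in enumerate(zip(preds, new_labels)):
        if pred == label:
            # Old knowledge agrees with the new annotation: distill its softmax.
            exp = np.exp(old_logits[i] - old_logits[i].max())
            targets[i] = exp / exp.sum()
        else:
            # Old knowledge is wrong here: use the one-hot label instead.
            targets[i, label] = 1.0
    return targets
```

Filtering the distillation targets this way is one concrete reading of "transfer with selection": erroneous old predictions never reach the new model as supervision.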
Source journal metrics:
CiteScore: 28.40
Self-citation rate: 3.00%
Articles per year: 885
Review time: 8.5 months
About the journal: The IEEE Transactions on Pattern Analysis and Machine Intelligence publishes articles on all traditional areas of computer vision and image understanding, all traditional areas of pattern analysis and recognition, and selected areas of machine intelligence, with a particular emphasis on machine learning for pattern analysis. Areas such as techniques for visual search, document and handwriting analysis, medical image analysis, video and image sequence analysis, content-based retrieval of image and video, face and gesture recognition, and relevant specialized hardware and/or software architectures are also covered.