Bingzhi Chen, Haoming Zhou, Yishu Liu, Biqing Zeng, Jiahui Pan, Guangming Lu
arXiv - CS - Multimedia · 2024-09-17 · https://doi.org/arxiv-2409.11286
Enhancing Few-Shot Classification without Forgetting through Multi-Level Contrastive Constraints
Most recent few-shot learning approaches are based on meta-learning with
episodic training. However, prior studies face two crucial problems: (1)
\textit{the presence of inductive bias}, and (2) \textit{the occurrence of
catastrophic forgetting}. In this paper, we propose a novel Multi-Level
Contrastive Constraints (MLCC) framework that jointly integrates
within-episode learning and across-episode learning into a unified interactive
learning paradigm to address both issues. Specifically, we employ a space-aware
interaction modeling scheme that exploits within-episode similarity and
dissimilarity distributions to learn a correct inductive paradigm for each class.
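The paper does not give the exact form of the within-episode constraint; a supervised contrastive loss is one plausible reading of "similarity/dissimilarity distributions" between episode samples. The sketch below is a minimal, hypothetical illustration (all names are ours, not the authors'): same-class embeddings are pulled together and different-class embeddings pushed apart within a single episode.

```python
import numpy as np

def episode_contrastive_loss(embeddings, labels, temperature=0.1):
    """Toy supervised contrastive loss over one episode.

    Pulls same-class embeddings together and pushes different-class
    embeddings apart -- one plausible instantiation of a within-episode
    similarity/dissimilarity constraint (not the paper's exact loss).
    """
    # L2-normalise so dot products become cosine similarities.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature  # pairwise scaled similarities
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        # Positives: other episode samples sharing sample i's class.
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not pos:
            continue
        others = [j for j in range(n) if j != i]
        logits = sim[i, others]
        # Numerically stable log of the softmax denominator.
        log_den = np.log(np.exp(logits - logits.max()).sum()) + logits.max()
        for j in pos:
            loss += -(sim[i, j] - log_den)  # -log p(positive | anchor i)
            count += 1
    return loss / count
```

With well-separated class clusters this loss is near zero; it grows when embeddings of different classes overlap.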
Additionally, to better exploit prior knowledge, a cross-stage distribution
adaptation strategy is designed to align across-episode distributions from
different time stages, thus reducing the semantic gap between current and past
prediction distributions. Extensive experiments on multiple few-shot datasets
demonstrate the consistent superiority of the MLCC approach over existing
state-of-the-art baselines.
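The abstract does not specify how the cross-stage alignment is computed; a KL-divergence penalty between an earlier stage's predictions and the current ones, in the spirit of knowledge distillation, is one common way to realise such a constraint. The sketch below is a hypothetical illustration under that assumption (function name and signature are ours).

```python
import numpy as np

def cross_stage_kl(current_probs, past_probs, eps=1e-8):
    """Mean KL(past || current) over a batch of prediction distributions.

    Penalises the current model's predictions for drifting away from a
    past stage's predictions -- one plausible form of a cross-stage
    distribution adaptation term (an assumption, not the paper's exact loss).
    """
    p = past_probs + eps      # past-stage distribution (reference)
    q = current_probs + eps   # current-stage distribution
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())
```

The penalty is zero when current and past predictions agree and grows as they diverge, so minimising it alongside the episodic loss discourages forgetting.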