Causality-Based Contrastive Incremental Learning Framework for Domain Generalization

IF 6.6 · CAS Tier 1 (Computer Science) · JCR Q1 (Multidisciplinary)
Xin Wang;Qingjie Zhao;Lei Wang;Wangwang Liu
{"title":"基于因果关系的领域泛化对比增量学习框架","authors":"Xin Wang;Qingjie Zhao;Lei Wang;Wangwang Liu","doi":"10.26599/TST.2024.9010072","DOIUrl":null,"url":null,"abstract":"Learning domain-invariant feature representations is critical to alleviate the distribution differences between training and testing domains. The existing mainstream domain generalization approaches primarily pursue to align the across-domain distributions to extract the transferable feature representations. However, these representations may be insufficient and unstable. Moreover, these networks may also undergo catastrophic forgetting because the previous learned knowledge is replaced by the new learned knowledge. To cope with these issues, we propose a novel causality-based contrastive incremental learning model for domain generalization, which mainly includes three components: (1) intra-domain causal factorization, (2) inter-domain Mahalanobis similarity metric, and (3) contrastive knowledge distillation. The model extracts intra and inter domain-invariant knowledge to improve model generalization. Specifically, we first introduce a causal factori-zation to extract intra-domain invariant knowledge. Then, we design a Mahalanobis similarity metric to extract common inter-domain invariant knowledge. Finally, we propose a contrastive knowledge distillation with exponential moving average to distill model parameters in a smooth way to preserve the previous learned knowledge and mitigate model forgetting. Extensive experiments on several domain generalization benchmarks prove that our model achieves the state-of-the-art results, which sufficiently show the effectiveness of our model.","PeriodicalId":48690,"journal":{"name":"Tsinghua Science and Technology","volume":"30 4","pages":"1636-1647"},"PeriodicalIF":6.6000,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10908663","citationCount":"0","resultStr":"{\"title\":\"Causality-Based Contrastive Incremental Learning Framework for Domain Generalization\",\"authors\":\"Xin Wang;Qingjie Zhao;Lei Wang;Wangwang Liu\",\"doi\":\"10.26599/TST.2024.9010072\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Learning domain-invariant feature representations is critical to alleviate the distribution differences between training and testing domains. The existing mainstream domain generalization approaches primarily pursue to align the across-domain distributions to extract the transferable feature representations. However, these representations may be insufficient and unstable. Moreover, these networks may also undergo catastrophic forgetting because the previous learned knowledge is replaced by the new learned knowledge. To cope with these issues, we propose a novel causality-based contrastive incremental learning model for domain generalization, which mainly includes three components: (1) intra-domain causal factorization, (2) inter-domain Mahalanobis similarity metric, and (3) contrastive knowledge distillation. The model extracts intra and inter domain-invariant knowledge to improve model generalization. Specifically, we first introduce a causal factori-zation to extract intra-domain invariant knowledge. Then, we design a Mahalanobis similarity metric to extract common inter-domain invariant knowledge. 
Finally, we propose a contrastive knowledge distillation with exponential moving average to distill model parameters in a smooth way to preserve the previous learned knowledge and mitigate model forgetting. Extensive experiments on several domain generalization benchmarks prove that our model achieves the state-of-the-art results, which sufficiently show the effectiveness of our model.\",\"PeriodicalId\":48690,\"journal\":{\"name\":\"Tsinghua Science and Technology\",\"volume\":\"30 4\",\"pages\":\"1636-1647\"},\"PeriodicalIF\":6.6000,\"publicationDate\":\"2025-03-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10908663\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Tsinghua Science and Technology\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10908663/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Multidisciplinary\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Tsinghua Science and Technology","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10908663/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Multidisciplinary","Score":null,"Total":0}
Citations: 0

Abstract

Learning domain-invariant feature representations is critical for alleviating the distribution differences between training and testing domains. Existing mainstream domain generalization approaches primarily seek to align cross-domain distributions to extract transferable feature representations. However, these representations may be insufficient and unstable. Moreover, such networks may also undergo catastrophic forgetting, because previously learned knowledge is replaced by newly learned knowledge. To cope with these issues, we propose a novel causality-based contrastive incremental learning model for domain generalization, which mainly includes three components: (1) intra-domain causal factorization, (2) an inter-domain Mahalanobis similarity metric, and (3) contrastive knowledge distillation. The model extracts intra- and inter-domain invariant knowledge to improve generalization. Specifically, we first introduce a causal factorization to extract intra-domain invariant knowledge. Then, we design a Mahalanobis similarity metric to extract common inter-domain invariant knowledge. Finally, we propose a contrastive knowledge distillation with an exponential moving average that distills model parameters smoothly, preserving previously learned knowledge and mitigating forgetting. Extensive experiments on several domain generalization benchmarks show that our model achieves state-of-the-art results, demonstrating its effectiveness.
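The abstract names three building blocks, two of which are standard enough to illustrate. Below is a minimal, self-contained PyTorch sketch — not the authors' code — of (a) a Mahalanobis distance between the mean features of two domains, (b) an exponential-moving-average (EMA) teacher update for smooth parameter distillation, and (c) an InfoNCE-style contrastive distillation loss. All function names, tensor shapes, the momentum value, and the temperature are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def mahalanobis_similarity(feats_a, feats_b, eps=1e-5):
    """Mahalanobis distance between the mean features of two domains.

    feats_a, feats_b: (n, d) feature batches from two source domains.
    Returns a scalar; smaller values mean the domains are closer under
    the pooled feature covariance. eps*I regularizes the covariance,
    which is otherwise singular when n <= d.
    """
    pooled = torch.cat([feats_a, feats_b], dim=0)
    centered = pooled - pooled.mean(dim=0, keepdim=True)
    cov = centered.T @ centered / (pooled.shape[0] - 1)
    cov_inv = torch.linalg.inv(cov + eps * torch.eye(cov.shape[1], device=cov.device))
    diff = feats_a.mean(dim=0) - feats_b.mean(dim=0)
    return torch.sqrt(diff @ cov_inv @ diff)


@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """EMA distillation of model parameters:
    teacher <- momentum * teacher + (1 - momentum) * student.
    The slow-moving teacher smooths parameter drift, which is the
    mechanism the abstract credits with retaining earlier knowledge."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1 - momentum)


def contrastive_distill_loss(student_z, teacher_z, tau=0.07):
    """InfoNCE-style loss: each student embedding is pulled toward the
    teacher embedding of the same sample (diagonal positives) and pushed
    away from the teacher embeddings of other samples in the batch."""
    s = F.normalize(student_z, dim=1)
    t = F.normalize(teacher_z, dim=1)
    logits = s @ t.T / tau                          # (n, n) similarity matrix
    labels = torch.arange(s.shape[0], device=s.device)
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    # Toy usage with random "features" from two shifted domains.
    fa, fb = torch.randn(64, 128), torch.randn(64, 128) + 0.5
    print("inter-domain distance:", mahalanobis_similarity(fa, fb).item())
    z_s, z_t = torch.randn(32, 64), torch.randn(32, 64)
    print("distillation loss:", contrastive_distill_loss(z_s, z_t).item())
```

In a full training loop, `ema_update` would be called after each student optimization step, and `contrastive_distill_loss` would be computed between student and EMA-teacher embeddings of the same batch; how the paper combines these with the causal factorization is not specified in the abstract.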
Source journal
Tsinghua Science and Technology
Categories: COMPUTER SCIENCE, INFORMATION SYSTEMS; COMPUTER SCIENCE, SOFTWARE ENGINEERING
CiteScore: 10.20
Self-citation rate: 10.60%
Articles per year: 2340
Journal profile: Tsinghua Science and Technology (Tsinghua Sci Technol) started publication in 1996. It is an international academic journal sponsored by Tsinghua University and published bimonthly. The journal presents up-to-date scientific achievements in computer science, electronic engineering, and other IT fields. Contributions from all over the world are welcome.