Exploring Invariance Matters for Domain Generalization

Impact Factor: 13.7
Shanshan Wang;Houmeng He;Xun Yang;Zhipu Liu;Yuanhong Zhong;Xingyi Zhang;Meng Wang
DOI: 10.1109/TIP.2025.3568747
Journal: IEEE Transactions on Image Processing (a publication of the IEEE Signal Processing Society)
Volume: 34, Pages: 3336-3351
Publication date: 2025-03-15 (Journal Article)
Source: https://ieeexplore.ieee.org/document/11005696/
Citations: 0

Abstract

Domain generalization (DG) aims to address the significant performance degradation that occurs when target-domain data are drawn out-of-distribution (O.O.D.). Previous efforts have tried to exploit invariant features in the source domains through CNNs. However, inspired by causal mechanisms, we find that complex spurious-invariant information is still hidden in these view-invariant features, and that the impact of domain and class discrepancies on invariance extraction has not been effectively mitigated. To alleviate these issues, we propose a self-weighted multi-view invariance-mining domain generalization framework (SMIDG). On the one hand, to make up for the insufficiency of traditional single-view convolutional feature extraction networks, we mine features from a complementary frequency view and use self-adaptive adversarial masks to eliminate spurious correlations, ensuring causal invariance in coarse-grained generalization. However, because discriminative information is inconsistent between inter-domain and intra-domain samples, as well as between inter-class and intra-class samples, coarse-grained elimination of spurious associations does not fully resolve the issue. On the other hand, we also consider fine-grained generalization from two aspects. First, to better tackle domain discrepancies, we propose a novel progressive contrastive learning strategy that learns the underlying specific features of samples while gradually mitigating domain discrepancies, thereby ensuring domain invariance in fine-grained generalization. Second, to address feature inconsistency, we adopt a self-adaptive hard-sample mining method with information gain so that the model pays more attention to hard disentangled samples, thus maintaining feature invariance. Extensive experiments on five benchmark datasets demonstrate that our method outperforms state-of-the-art approaches. Our code is available at https://github.com/bihhm/SMIDG
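The "frequency view" and masking idea mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the 2-D FFT amplitude spectrum is one common, illustrative choice of frequency view, and the fixed 0/1 mask below only demonstrates the mechanism of suppressing frequency components, whereas SMIDG learns its masks adversarially (see the linked repository for the actual method).

```python
import numpy as np

def frequency_view(image: np.ndarray) -> np.ndarray:
    """Amplitude spectrum of an image, used as a complementary
    'frequency view' alongside the spatial (convolutional) view.
    The DC component is shifted to the center of the array."""
    spectrum = np.fft.fft2(image)
    return np.abs(np.fft.fftshift(spectrum))

def apply_mask(freq_features: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Suppress selected frequency components with a mask.
    Here the mask is a fixed 0/1 array purely for illustration;
    in SMIDG the mask would be learned adversarially."""
    return freq_features * mask

# Toy example: an 8x8 "image", with one frequency band zeroed out.
img = np.arange(64, dtype=float).reshape(8, 8)
freq = frequency_view(img)
mask = np.ones_like(freq)
mask[0, :] = 0.0  # drop one band as an illustration
masked = apply_mask(freq, mask)
assert masked.shape == (8, 8) and masked[0].sum() == 0.0
```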