Stable Learning via Triplex Learning

Shuai Yang, Tingting Jiang, Qianlong Dang, Lichuan Gu, Xindong Wu
DOI: 10.1109/TAI.2024.3404411
Journal: IEEE Transactions on Artificial Intelligence
Published: 2024-03-22 (Journal Article)
Citations: 0
URL: https://ieeexplore.ieee.org/document/10536730/

Abstract

Stable learning aims to learn a model that generalizes well to arbitrary unseen target domains by leveraging a single source domain. Recent advances in stable learning have focused on balancing the distribution of confounders for each feature to eliminate spurious correlations. However, previous studies treat all features equally, without considering that the difficulty of confounder balancing differs across features, and they regard irrelevant features as confounders, which deteriorates generalization performance. To tackle these issues, this article proposes a novel triplex learning (TriL)-based stable learning algorithm, which performs sample reweighting, causal feature selection, and representation learning to remove spurious correlations. Specifically, TriL first adaptively assigns a weight to the confounder-balancing term of each feature according to the difficulty of balancing that feature's confounders, and aligns the confounder distribution of each feature by learning a group of sample weights. Second, TriL integrates the sample weights into a weighted cross-entropy model to compute the causal effect of each feature, excluding irrelevant features from the confounder set. Finally, TriL relearns a set of sample weights and uses them to guide a new supervised dual autoencoder containing two classifiers to learn feature representations; a cross-classifier consistency regularization forces the outputs of the two classifiers to remain consistent, removing spurious correlations. Extensive experiments on synthetic and two real-world datasets show the superiority of TriL over seven competing methods.
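The confounder-balancing step the abstract describes can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the authors' formulation: binary features, a simple first-moment matching objective (for each "treatment" feature, the weighted mean of the remaining features should agree between its two groups), softmax-parameterized sample weights, and finite-difference gradient descent in place of the paper's adaptive, per-feature-weighted optimizer.

```python
# Sketch of confounder balancing via sample reweighting (illustrative only).
# Names (balance_loss, theta) and the moment-matching objective are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, p = 120, 4
X = (rng.random((n, p)) < 0.5).astype(float)
# Inject a spurious correlation between features 0 and 1.
X[:, 1] = np.where(rng.random(n) < 0.85, X[:, 0], X[:, 1])

def balance_loss(theta, X):
    """Total first-moment imbalance of the other features across each
    feature's treated (x_j = 1) and control (x_j = 0) groups."""
    w = np.exp(theta - theta.max())
    w /= w.sum()                              # sample weights on the simplex
    loss = 0.0
    for j in range(X.shape[1]):
        t = X[:, j]
        others = np.delete(X, j, axis=1)
        w1, w0 = w * t, w * (1.0 - t)
        m1 = others.T @ w1 / max(w1.sum(), 1e-12)   # weighted mean, treated
        m0 = others.T @ w0 / max(w0.sum(), 1e-12)   # weighted mean, control
        loss += float(np.sum((m1 - m0) ** 2))
    return loss

theta = np.zeros(n)                           # uniform weights to start
before = balance_loss(theta, X)
lr, eps = 5.0, 1e-5
for _ in range(40):                           # finite-difference descent
    base = balance_loss(theta, X)
    grad = np.empty(n)
    for i in range(n):
        theta[i] += eps
        grad[i] = (balance_loss(theta, X) - base) / eps
        theta[i] -= eps
    theta -= lr * grad
after = balance_loss(theta, X)
print(f"imbalance before: {before:.4f}  after: {after:.4f}")
```

The learned weights down-weight samples that make the feature groups look different, so the weighted data behaves as if each feature were independent of the rest; in TriL these weights then feed the weighted cross-entropy model used for causal feature selection.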