Bilateral co-transfer for unsupervised domain adaptation

Fuxiang Huang, Jingru Fu, Lei Zhang
Journal of Automation and Intelligence, Volume 2, Issue 4 (November 2023), Pages 204-217
DOI: 10.1016/j.jai.2023.11.003
URL: https://www.sciencedirect.com/science/article/pii/S2949855423000485

Abstract

Scarcity of labeled data in a domain of interest is a common and serious problem in machine learning. It is widely accepted that labeled data from a semantically related but covariate-shifted source domain can be leveraged to facilitate learning in the domain of interest. To bridge the shift between domains and reduce learning ambiguity, unsupervised domain adaptation (UDA) greatly improves the transferability of model parameters. However, the dilemma of over-fitting (negative transfer) versus under-fitting (under-adaptation) remains an overlooked challenge and a potential risk. In this paper, we revisit the shallow learning paradigm and this intractable over/under-fitting problem, and propose a safer UDA model, coined Bilateral Co-Transfer (BCT), which essentially goes beyond the well-known unilateral transfer of prior work. With bilateral co-transfer between domains, the risk of over/under-fitting is largely reduced. Technically, the proposed BCT is a symmetric structure in which a joint distribution discrepancy (JDD) is modeled for domain alignment and category discrimination. Specifically, a symmetric bilateral transfer (SBT) loss between the source and target domains is proposed under the philosophy of mutual checks and balances. First, each target sample is represented by source samples under a low-rankness constraint in a common subspace, so that the most informative and transferable source data can be used to alleviate negative transfer. Second, each source sample is symmetrically and sparsely represented by target samples, so that the most reliable target samples can be exploited to tackle under-adaptation. Experiments on various benchmarks show that BCT outperforms many previous outstanding methods.
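The two reconstruction directions described above can be summarized as a single objective. The following is a minimal illustrative sketch, not the paper's actual formulation: the function name `sbt_loss`, the variable names, and the exact weighting of the terms are assumptions for illustration. It pairs a low-rank (nuclear-norm penalized) reconstruction of target samples by source samples with a sparse (l1-penalized) reconstruction of source samples by target samples, mirroring the "mutual checks and balances" idea.

```python
import numpy as np

def sbt_loss(Xs, Xt, Z1, Z2, lam=1.0, mu=1.0):
    """Illustrative symmetric bilateral transfer objective (a sketch,
    not the authors' exact loss).

    Xs: (d, ns) source features; Xt: (d, nt) target features.
    Z1: (ns, nt) coefficients reconstructing each target sample from
        source samples; penalized by the nuclear norm (low-rankness),
        so only the most transferable source structure is used.
    Z2: (nt, ns) coefficients reconstructing each source sample from
        target samples; penalized by the l1 norm (sparsity), so only
        the most reliable target samples are used.
    """
    rec_t = np.linalg.norm(Xt - Xs @ Z1, "fro") ** 2  # target rebuilt from source
    rec_s = np.linalg.norm(Xs - Xt @ Z2, "fro") ** 2  # source rebuilt from target
    low_rank = np.linalg.norm(Z1, "nuc")              # nuclear norm of Z1
    sparse = np.abs(Z2).sum()                         # l1 norm of Z2
    return rec_t + rec_s + lam * low_rank + mu * sparse

# Toy example with random features (d=5, ns=8 source, nt=6 target samples).
rng = np.random.default_rng(0)
Xs = rng.standard_normal((5, 8))
Xt = rng.standard_normal((5, 6))
loss = sbt_loss(Xs, Xt, Z1=np.zeros((8, 6)), Z2=np.zeros((6, 8)))
```

In a full method the coefficient matrices would be optimized (e.g. by alternating minimization with singular-value and soft thresholding for the nuclear- and l1-norm terms) inside a learned common subspace; the sketch only evaluates the objective for fixed coefficients.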
