Class and Domain Low-rank Tensor Learning for Multi-source Domain Adaptation

Impact Factor: 7.5 · CAS Tier 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Yuwu Lu, Huiling Fu, Zhihui Lai, Xuelong Li
DOI: 10.1016/j.patcog.2025.111675
Journal: Pattern Recognition, Vol. 167, Article 111675
Published: 2025-04-16 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0031320325003358
Citations: 0

Abstract

Multi-source unsupervised domain adaptation (MUDA) aims to transfer knowledge from multiple labeled source domains to an unlabeled target domain. A key challenge in MUDA is to minimize the distributional discrepancy between the source and target domains. While traditional methods typically merge source domains to reduce this discrepancy, they often overlook higher-order correlations and class-discriminative relationships across domains, which weakens the generalization and classification abilities of the model. To address these challenges, we propose a novel method called Class and Domain Low-rank Tensor Learning (CDLTL), which integrates domain-level alignment and class-level alignment into a unified framework. Specifically, CDLTL leverages a projection matrix to map data from both source and target domains into a shared subspace, enabling the reconstruction of target domain samples from the source data and thereby reducing domain discrepancies. By combining tensor learning with joint sparse and weighted low-rank constraints, CDLTL achieves domain-level alignment, allowing the model to capture complex higher-order correlations across multiple domains while preserving global structures within the data. CDLTL also takes into account the geometric structure of multiple source domains and preserves local structures through manifold learning. Additionally, CDLTL achieves class-level alignment through class-based low-rank constraints, which improve intra-class compactness and inter-class separability, thus boosting the discriminative ability and robustness of the model. Extensive experiments conducted across various visual domain adaptation tasks demonstrate that the proposed method outperforms some of the existing approaches.
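The domain-level alignment idea described above — reconstructing target-domain samples as low-rank combinations of source-domain samples in a shared space — can be illustrated with a minimal proximal-gradient sketch. This is a generic nuclear-norm-regularized reconstruction, not the authors' full CDLTL objective (which additionally involves tensor learning, joint sparsity, manifold regularization, and class-level low-rank constraints); all function names below are hypothetical.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the
    nuclear norm, shrinking each singular value of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def low_rank_reconstruction(Xs, Xt, tau=0.1, n_iter=300):
    """Find a low-rank coefficient matrix Z so that Xt ≈ Xs @ Z,
    minimizing 0.5 * ||Xt - Xs Z||_F^2 + tau * ||Z||_* by
    proximal gradient descent.

    Xs: (d, n_s) source samples as columns.
    Xt: (d, n_t) target samples as columns.
    Returns Z of shape (n_s, n_t).
    """
    # Step size 1 / L, where L is the Lipschitz constant of the
    # gradient of the quadratic fit term (largest squared singular
    # value of Xs).
    lr = 1.0 / (np.linalg.norm(Xs, 2) ** 2)
    Z = np.zeros((Xs.shape[1], Xt.shape[1]))
    for _ in range(n_iter):
        grad = Xs.T @ (Xs @ Z - Xt)       # gradient of the fit term
        Z = svt(Z - lr * grad, lr * tau)  # proximal step on the nuclear norm
    return Z
```

If the target samples truly lie near a low-dimensional span of the source samples, the recovered `Z` both reconstructs `Xt` and exposes the shared structure; in CDLTL this role is played by richer weighted low-rank and tensor constraints applied across all domains jointly.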
Source journal: Pattern Recognition (Engineering Technology: Electrical & Electronic Engineering)
CiteScore: 14.40
Self-citation rate: 16.20%
Articles per year: 683
Review time: 5.6 months
Journal description: The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.