Tensorial multiview low-rank high-order graph learning for context-enhanced domain adaptation.

Impact Factor 6.0 · CAS Tier 1 (Computer Science) · JCR Q1, Computer Science, Artificial Intelligence
Chenyang Zhu, Lanlan Zhang, Weibin Luo, Guangqi Jiang, Qian Wang
DOI: 10.1016/j.neunet.2024.106859
Journal: Neural Networks
Published: 2024-11-02 (Journal Article)
Citations: 0

Abstract

Unsupervised Domain Adaptation (UDA) is a machine learning technique that facilitates knowledge transfer from a labeled source domain to an unlabeled target domain, addressing distributional discrepancies between these domains. Existing UDA methods often fail to effectively capture and utilize contextual relationships within the target domain. This research introduces a novel framework called Tensorial Multiview Low-Rank High-Order Graph Learning (MLRGL), which addresses these challenges by learning high-order graphs constrained by low-rank tensors to uncover contextual relations. The proposed framework ensures prediction consistency between randomly masked target images and their pseudo-labels by leveraging spatial context to generate multiview domain-invariant features through various augmented masking techniques. A high-order graph is constructed by combining Laplacian graphs to propagate these multiview features. Low-rank constraints are applied along both horizontal and vertical dimensions to better uncover inter-view and inter-class correlations among multiview features. This high-order graph is used to create an affinity matrix, mapping multiview features into a unified subspace. Prototype vectors and unsupervised clustering are then employed to calculate conditional probabilities for UDA tasks. We evaluated our approach using three different backbones across three benchmark datasets. The results demonstrate that the MLRGL framework outperforms current state-of-the-art methods in various UDA tasks. Additionally, our framework exhibits robustness to hyperparameter variations and demonstrates that multiview approaches outperform single-view solutions.
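The abstract's central step, combining per-view Laplacian graphs into one graph and using it to propagate multiview features, can be illustrated with a minimal sketch. Everything below is an assumption for illustration only (RBF affinities, simple averaging of Laplacians, a closed-form smoothing solve); the paper's actual construction, masking scheme, and low-rank tensor constraints are not reproduced here.

```python
# Hypothetical sketch of Laplacian-graph combination and feature
# propagation, loosely following the abstract. Function names, the RBF
# affinity, and the smoothing formulation are illustrative assumptions,
# not the authors' MLRGL implementation.
import numpy as np

def normalized_laplacian(X, sigma=1.0):
    """Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}
    of an RBF affinity graph built from feature rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
    W = np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                         # no self-loops
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return np.eye(X.shape[0]) - D_inv_sqrt @ W @ D_inv_sqrt

def combined_propagation(views, alpha=0.9):
    """Average the per-view Laplacians into one graph, then smooth each
    view's features by solving (I + alpha * L) F = X, a standard
    graph-regularized smoothing choice (one assumption among many)."""
    n = views[0].shape[0]
    L = sum(normalized_laplacian(V) for V in views) / len(views)
    A = np.eye(n) + alpha * L
    return [np.linalg.solve(A, V) for V in views]
```

In the paper, this combined graph additionally feeds an affinity matrix under low-rank tensor constraints; the sketch only shows the simpler averaging-and-smoothing skeleton.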

Source journal

Neural Networks (Engineering & Technology — Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles per year: 425
Review time: 67 days
Journal description: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. The journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, it aims to encourage the development of biologically inspired artificial intelligence.