Adaptive Hardness-Driven Augmentation and Alignment Strategies for Multisource Domain Adaptations

Impact Factor 8.9 · CAS Zone 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Yuxiang Yang;Xinyi Zeng;Pinxian Zeng;Chen Zu;Binyu Yan;Jiliu Zhou;Yan Wang
DOI: 10.1109/TNNLS.2025.3565728
Journal: IEEE Transactions on Neural Networks and Learning Systems, vol. 36, no. 9, pp. 15963-15977
Published: 2025-03-15 (Journal Article)
IEEE Xplore: https://ieeexplore.ieee.org/document/11005486/
Citations: 0

Abstract

Multisource domain adaptation (MDA) aims to transfer knowledge from multiple labeled source domains to an unlabeled target domain. Nevertheless, traditional methods primarily focus on achieving interdomain alignment through sample-level constraints, such as maximum mean discrepancy (MMD), neglecting three pivotal aspects: 1) the potential of data augmentation; 2) the significance of intradomain alignment; and 3) the design of cluster-level constraints. In this article, we introduce a novel hardness-driven strategy for MDA tasks, named $\mathrm {A}^{3}\mathrm {MDA}$ , which collectively considers these three aspects through adaptive hardness quantification and utilization in both data augmentation and domain alignment. To achieve this, $\mathrm {A}^{3}\mathrm {MDA}$ progressively proposes three adaptive hardness measurements (AHMs), i.e., basic, smooth, and comparative AHMs, each incorporating distinct mechanisms for diverse scenarios. Specifically, basic AHM aims to gauge the instantaneous hardness for each source/target sample. Then, hardness values measured by smooth AHM will adaptively adjust the intensity level of strong data augmentation to maintain compatibility with the model’s generalization capacity. In contrast, comparative AHM is designed to facilitate cluster-level constraints. By leveraging hardness values as sample-specific weights, the traditional MMD is enhanced into a weighted-clustered variant, strengthening the robustness and precision of interdomain alignment. As for the often-neglected intradomain alignment, we adaptively construct a pseudo-contrastive matrix (PCM) by selecting harder samples based on the hardness rankings, enhancing the quality of pseudo-labels, and shaping a well-clustered target feature space. Experiments on multiple MDA benchmarks show that $\mathrm {A}^{3}\mathrm {MDA}$ outperforms other methods.
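The abstract's core alignment idea — upgrading the sample-level MMD constraint into a variant where per-sample hardness values act as weights inside the estimator — can be sketched generically. The exact A³MDA formulation (its three AHMs and the cluster-level grouping) is given in the paper; the entropy-based hardness score, the RBF bandwidth, and all function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Pairwise RBF kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

def weighted_mmd2(Xs, Xt, ws, wt, sigma=1.0):
    # Squared MMD between source and target features where each sample
    # contributes in proportion to its (hardness-derived) weight.
    # Weights are normalized so each domain's weights sum to 1; with
    # uniform weights this reduces to the standard biased MMD^2 estimate.
    ws = ws / ws.sum()
    wt = wt / wt.sum()
    Kss = rbf_kernel(Xs, Xs, sigma)
    Ktt = rbf_kernel(Xt, Xt, sigma)
    Kst = rbf_kernel(Xs, Xt, sigma)
    return ws @ Kss @ ws + wt @ Ktt @ wt - 2 * ws @ Kst @ wt

def entropy_hardness(probs, eps=1e-12):
    # Hypothetical hardness score: predictive entropy of the classifier's
    # softmax output (more uncertain = harder).
    return -np.sum(probs * np.log(probs + eps), axis=1)

# Toy demo: a mean-shifted target distribution yields a positive weighted MMD.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(100, 8))          # source features
Xt = rng.normal(0.5, 1.0, size=(120, 8))          # shifted target features
ws = entropy_hardness(rng.dirichlet(np.ones(5), 100))  # mock softmax outputs
wt = entropy_hardness(rng.dirichlet(np.ones(5), 120))
gap = weighted_mmd2(Xs, Xt, ws, wt)               # positive; shrinks as domains align
```

In training, `gap` would be one term of the overall loss, so that minimizing it pulls the weighted source and target feature distributions together, with harder samples exerting more influence on the alignment.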
Source Journal
IEEE Transactions on Neural Networks and Learning Systems
Categories: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
CiteScore: 23.80
Self-citation rate: 9.60%
Articles per year: 2102
Review time: 3-8 weeks
Journal description: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.