Yuxiang Yang;Xinyi Zeng;Pinxian Zeng;Chen Zu;Binyu Yan;Jiliu Zhou;Yan Wang
{"title":"多源域自适应的自适应硬度驱动增强和对齐策略","authors":"Yuxiang Yang;Xinyi Zeng;Pinxian Zeng;Chen Zu;Binyu Yan;Jiliu Zhou;Yan Wang","doi":"10.1109/TNNLS.2025.3565728","DOIUrl":null,"url":null,"abstract":"Multisource domain adaptation (MDA) aims to transfer knowledge from multiple labeled source domains to an unlabeled target domain. Nevertheless, traditional methods primarily focus on achieving interdomain alignment through sample-level constraints, such as maximum mean discrepancy (MMD), neglecting three pivotal aspects: 1) the potential of data augmentation; 2) the significance of intradomain alignment; and 3) the design of cluster-level constraints. In this article, we introduce a novel hardness-driven strategy for MDA tasks, named <inline-formula> <tex-math>$\\mathrm {A}^{3}\\mathrm {MDA}$ </tex-math></inline-formula>, which collectively considers these three aspects through adaptive hardness quantification and utilization in both data augmentation and domain alignment. To achieve this, <inline-formula> <tex-math>$\\mathrm {A}^{3}\\mathrm {MDA}$ </tex-math></inline-formula> progressively proposes three adaptive hardness measurements (AHMs), i.e., basic, smooth, and comparative AHMs, each incorporating distinct mechanisms for diverse scenarios. Specifically, basic AHM aims to gauge the instantaneous hardness for each source/target sample. Then, hardness values measured by smooth AHM will adaptively adjust the intensity level of strong data augmentation to maintain compatibility with the model’s generalization capacity. In contrast, comparative AHM is designed to facilitate cluster-level constraints. By leveraging hardness values as sample-specific weights, the traditional MMD is enhanced into a weighted-clustered variant, strengthening the robustness and precision of interdomain alignment. As for the often-neglected intradomain alignment, we adaptively construct a pseudo-contrastive matrix (PCM) by selecting harder samples based on the hardness rankings, enhancing the quality of pseudo-labels, and shaping a well-clustered target feature space. Experiments on multiple MDA benchmarks show that <inline-formula> <tex-math>$\\mathrm {A}^{3}\\mathrm {MDA}$ </tex-math></inline-formula> outperforms other methods.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"36 9","pages":"15963-15977"},"PeriodicalIF":8.9000,"publicationDate":"2025-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Adaptive Hardness-Driven Augmentation and Alignment Strategies for Multisource Domain Adaptations\",\"authors\":\"Yuxiang Yang;Xinyi Zeng;Pinxian Zeng;Chen Zu;Binyu Yan;Jiliu Zhou;Yan Wang\",\"doi\":\"10.1109/TNNLS.2025.3565728\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Multisource domain adaptation (MDA) aims to transfer knowledge from multiple labeled source domains to an unlabeled target domain. Nevertheless, traditional methods primarily focus on achieving interdomain alignment through sample-level constraints, such as maximum mean discrepancy (MMD), neglecting three pivotal aspects: 1) the potential of data augmentation; 2) the significance of intradomain alignment; and 3) the design of cluster-level constraints. 
In this article, we introduce a novel hardness-driven strategy for MDA tasks, named <inline-formula> <tex-math>$\\\\mathrm {A}^{3}\\\\mathrm {MDA}$ </tex-math></inline-formula>, which collectively considers these three aspects through adaptive hardness quantification and utilization in both data augmentation and domain alignment. To achieve this, <inline-formula> <tex-math>$\\\\mathrm {A}^{3}\\\\mathrm {MDA}$ </tex-math></inline-formula> progressively proposes three adaptive hardness measurements (AHMs), i.e., basic, smooth, and comparative AHMs, each incorporating distinct mechanisms for diverse scenarios. Specifically, basic AHM aims to gauge the instantaneous hardness for each source/target sample. Then, hardness values measured by smooth AHM will adaptively adjust the intensity level of strong data augmentation to maintain compatibility with the model’s generalization capacity. In contrast, comparative AHM is designed to facilitate cluster-level constraints. By leveraging hardness values as sample-specific weights, the traditional MMD is enhanced into a weighted-clustered variant, strengthening the robustness and precision of interdomain alignment. As for the often-neglected intradomain alignment, we adaptively construct a pseudo-contrastive matrix (PCM) by selecting harder samples based on the hardness rankings, enhancing the quality of pseudo-labels, and shaping a well-clustered target feature space. Experiments on multiple MDA benchmarks show that <inline-formula> <tex-math>$\\\\mathrm {A}^{3}\\\\mathrm {MDA}$ </tex-math></inline-formula> outperforms other methods.\",\"PeriodicalId\":13303,\"journal\":{\"name\":\"IEEE transactions on neural networks and learning systems\",\"volume\":\"36 9\",\"pages\":\"15963-15977\"},\"PeriodicalIF\":8.9000,\"publicationDate\":\"2025-03-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on neural networks and learning systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11005486/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11005486/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Adaptive Hardness-Driven Augmentation and Alignment Strategies for Multisource Domain Adaptations
Multisource domain adaptation (MDA) aims to transfer knowledge from multiple labeled source domains to an unlabeled target domain. Nevertheless, traditional methods primarily focus on achieving interdomain alignment through sample-level constraints, such as maximum mean discrepancy (MMD), neglecting three pivotal aspects: 1) the potential of data augmentation; 2) the significance of intradomain alignment; and 3) the design of cluster-level constraints. In this article, we introduce a novel hardness-driven strategy for MDA tasks, named $\mathrm{A}^{3}\mathrm{MDA}$, which collectively considers these three aspects through adaptive hardness quantification and utilization in both data augmentation and domain alignment. To achieve this, $\mathrm{A}^{3}\mathrm{MDA}$ progressively proposes three adaptive hardness measurements (AHMs), i.e., basic, smooth, and comparative AHMs, each incorporating distinct mechanisms for diverse scenarios. Specifically, basic AHM aims to gauge the instantaneous hardness for each source/target sample. Then, hardness values measured by smooth AHM will adaptively adjust the intensity level of strong data augmentation to maintain compatibility with the model’s generalization capacity. In contrast, comparative AHM is designed to facilitate cluster-level constraints. By leveraging hardness values as sample-specific weights, the traditional MMD is enhanced into a weighted-clustered variant, strengthening the robustness and precision of interdomain alignment. As for the often-neglected intradomain alignment, we adaptively construct a pseudo-contrastive matrix (PCM) by selecting harder samples based on the hardness rankings, enhancing the quality of pseudo-labels, and shaping a well-clustered target feature space. Experiments on multiple MDA benchmarks show that $\mathrm{A}^{3}\mathrm{MDA}$ outperforms other methods.
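The abstract describes the hardness-weighted MMD mechanism but not its exact formulation, so the following is a minimal illustrative sketch rather than the authors' implementation. It assumes normalized prediction entropy as the basic-AHM hardness proxy and an RBF kernel; with per-sample weights $w_i$ (source) and $v_j$ (target) normalized to sum to one, one standard weighted form of the squared MMD is $\sum_{i,i'} w_i w_{i'} k(s_i, s_{i'}) + \sum_{j,j'} v_j v_{j'} k(t_j, t_{j'}) - 2 \sum_{i,j} w_i v_j k(s_i, t_j)$. The function names and the entropy-based proxy below are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F


def basic_hardness(logits: torch.Tensor) -> torch.Tensor:
    """Per-sample hardness proxy: prediction entropy normalized to [0, 1].

    Assumption: the abstract does not specify the basic-AHM formula;
    entropy is a common surrogate for instantaneous sample hardness.
    """
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    return entropy / torch.log(torch.tensor(float(logits.size(1))))


def rbf_kernel(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """RBF (Gaussian) kernel matrix between two feature batches."""
    dist2 = torch.cdist(x, y, p=2).pow(2)
    return torch.exp(-dist2 / (2.0 * sigma ** 2))


def weighted_mmd2(src_feat, tgt_feat, src_hardness, tgt_hardness, sigma=1.0):
    """Squared MMD with hardness values acting as sample-specific weights,
    replacing the uniform 1/n averaging of the standard estimator."""
    w = src_hardness / src_hardness.sum()  # normalize weights to sum to 1
    v = tgt_hardness / tgt_hardness.sum()
    k_ss = rbf_kernel(src_feat, src_feat, sigma)
    k_tt = rbf_kernel(tgt_feat, tgt_feat, sigma)
    k_st = rbf_kernel(src_feat, tgt_feat, sigma)
    return w @ k_ss @ w + v @ k_tt @ v - 2.0 * (w @ k_st @ v)


# Illustrative usage with random tensors standing in for backbone features/logits.
src_feat, tgt_feat = torch.randn(32, 256), torch.randn(32, 256)
src_logits, tgt_logits = torch.randn(32, 10), torch.randn(32, 10)
alignment_loss = weighted_mmd2(src_feat, tgt_feat,
                               basic_hardness(src_logits),
                               basic_hardness(tgt_logits))
```

Under this reading, harder samples (higher entropy) contribute more to the alignment objective. The paper's comparative AHM presumably refines how these weights are derived, and the smooth-AHM-driven augmentation schedule is omitted here because the abstract does not state its mapping from hardness to augmentation intensity.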
Journal Introduction:
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.