{"title":"一致性指导的多源自由领域适应性","authors":"","doi":"10.1016/j.engappai.2024.109497","DOIUrl":null,"url":null,"abstract":"<div><div>Deep neural networks suffer from severe performance degradation when facing a distribution shift between the labeled source domain and unlabeled target domain. Domain adaptation addresses this issue by aligning the feature distributions of both domains. Conventional methods assume that the labeled source samples are drawn from a single data distribution (domain) and can be fully accessed during training. However, in real applications, multiple source domains with different distributions often exist, and source samples may be unavailable due to privacy and storage constraints. To address multi-source and data-free challenges, Multi-Source-Free Domain Adaptation (MSFDA) uses only diverse pre-trained source models without requiring any source data. Most existing MSFDA methods adapt each source model to the target domain individually, making them ineffective in leveraging the complementary transferable knowledge from different source models. In this paper, we propose a novel COnsistency-guided multi-source-free Domain Adaptation (CODA) method, which leverages the label consistency criterion as a bridge to facilitate the cooperation among source models. CODA applies consistency regularization on the soft labels of weakly- and strongly-augmented target samples from each pair of source models, allowing them to supervise each other. To achieve high-quality pseudo-labels, CODA also performs a consistency-based denoising to unify the pseudo-labels from different source models. Finally, CODA optimally combines different source models by maximizing the mutual information of the predictions of the resulting target model. Extensive experiments on four benchmark datasets demonstrate the effectiveness of CODA compared to the state-of-the-art methods.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5000,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Consistency-guided Multi-Source-Free Domain Adaptation\",\"authors\":\"\",\"doi\":\"10.1016/j.engappai.2024.109497\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Deep neural networks suffer from severe performance degradation when facing a distribution shift between the labeled source domain and unlabeled target domain. Domain adaptation addresses this issue by aligning the feature distributions of both domains. Conventional methods assume that the labeled source samples are drawn from a single data distribution (domain) and can be fully accessed during training. However, in real applications, multiple source domains with different distributions often exist, and source samples may be unavailable due to privacy and storage constraints. To address multi-source and data-free challenges, Multi-Source-Free Domain Adaptation (MSFDA) uses only diverse pre-trained source models without requiring any source data. Most existing MSFDA methods adapt each source model to the target domain individually, making them ineffective in leveraging the complementary transferable knowledge from different source models. 
In this paper, we propose a novel COnsistency-guided multi-source-free Domain Adaptation (CODA) method, which leverages the label consistency criterion as a bridge to facilitate the cooperation among source models. CODA applies consistency regularization on the soft labels of weakly- and strongly-augmented target samples from each pair of source models, allowing them to supervise each other. To achieve high-quality pseudo-labels, CODA also performs a consistency-based denoising to unify the pseudo-labels from different source models. Finally, CODA optimally combines different source models by maximizing the mutual information of the predictions of the resulting target model. Extensive experiments on four benchmark datasets demonstrate the effectiveness of CODA compared to the state-of-the-art methods.</div></div>\",\"PeriodicalId\":50523,\"journal\":{\"name\":\"Engineering Applications of Artificial Intelligence\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":7.5000,\"publicationDate\":\"2024-10-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Engineering Applications of Artificial Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0952197624016555\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0952197624016555","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Deep neural networks suffer from severe performance degradation when facing a distribution shift between the labeled source domain and unlabeled target domain. Domain adaptation addresses this issue by aligning the feature distributions of both domains. Conventional methods assume that the labeled source samples are drawn from a single data distribution (domain) and can be fully accessed during training. However, in real applications, multiple source domains with different distributions often exist, and source samples may be unavailable due to privacy and storage constraints. To address multi-source and data-free challenges, Multi-Source-Free Domain Adaptation (MSFDA) uses only diverse pre-trained source models without requiring any source data. Most existing MSFDA methods adapt each source model to the target domain individually, making them ineffective in leveraging the complementary transferable knowledge from different source models. In this paper, we propose a novel COnsistency-guided multi-source-free Domain Adaptation (CODA) method, which leverages the label consistency criterion as a bridge to facilitate the cooperation among source models. CODA applies consistency regularization on the soft labels of weakly- and strongly-augmented target samples from each pair of source models, allowing them to supervise each other. To achieve high-quality pseudo-labels, CODA also performs a consistency-based denoising to unify the pseudo-labels from different source models. Finally, CODA optimally combines different source models by maximizing the mutual information of the predictions of the resulting target model. Extensive experiments on four benchmark datasets demonstrate the effectiveness of CODA compared to the state-of-the-art methods.
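The abstract describes CODA only at a high level. As a rough illustration, the following PyTorch sketch shows how two of the objectives it names are commonly formulated: pairwise consistency regularization between source models on weakly and strongly augmented target views, and mutual-information maximization over the target model's predictions. The function names, the soft cross-entropy formulation, and the toy models are assumptions made for illustration rather than the authors' implementation; the consistency-based pseudo-label denoising and the optimal combination of source models are not reproduced here.

```python
# Minimal sketch (assumed formulation, not the authors' code) of two objectives
# mentioned in the abstract: pairwise consistency between source models on
# weak/strong augmentations, and mutual-information maximization on predictions.
import torch
import torch.nn.functional as F


def pairwise_consistency_loss(source_models, weak_x, strong_x):
    """Cross-supervision between each pair of source models: the soft label that
    model i produces on the weakly augmented view guides model j's prediction on
    the strongly augmented view (and vice versa)."""
    probs_weak = [F.softmax(m(weak_x), dim=1).detach() for m in source_models]
    logits_strong = [m(strong_x) for m in source_models]
    loss, n_pairs = 0.0, 0
    for i in range(len(source_models)):
        for j in range(len(source_models)):
            if i == j:
                continue
            # Soft cross-entropy between model i's soft label (weak view)
            # and model j's prediction (strong view).
            log_probs_j = F.log_softmax(logits_strong[j], dim=1)
            loss = loss - (probs_weak[i] * log_probs_j).sum(dim=1).mean()
            n_pairs += 1
    return loss / max(n_pairs, 1)


def mutual_information_loss(logits):
    """Negated mutual information I(x; y_hat) = H(E[p]) - E[H(p)], returned as a
    loss so that minimizing it maximizes the mutual information of predictions."""
    probs = F.softmax(logits, dim=1)
    mean_probs = probs.mean(dim=0)
    marginal_entropy = -(mean_probs * torch.log(mean_probs + 1e-8)).sum()
    conditional_entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    return conditional_entropy - marginal_entropy


if __name__ == "__main__":
    torch.manual_seed(0)
    # Stand-ins for pre-trained source models and two augmented views of a target batch.
    models = [torch.nn.Linear(16, 5) for _ in range(3)]
    weak, strong = torch.randn(8, 16), torch.randn(8, 16)
    print(pairwise_consistency_loss(models, weak, strong).item())
    print(mutual_information_loss(models[0](weak)).item())
```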
Journal introduction:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, with remarkable advancements emerging across a wide range of machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI applied to real-world engineering problems, validated on publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.