Zhongyi Wen, Qiang Li, Yatong Wang, Huaizong Shao, Guoming Sun
{"title":"FGPLFA:无源无监督域自适应的细粒度伪标记和特征对齐。","authors":"Zhongyi Wen,Qiang Li,Yatong Wang,Huaizong Shao,Guoming Sun","doi":"10.1109/tnnls.2025.3616236","DOIUrl":null,"url":null,"abstract":"Source-free unsupervised domain adaptation (SFUDA) aims to improve performance in unlabeled target domain data without accessing source domain data. This is crucial in scenarios with data-sharing restrictions due to privacy or compliance constraints. Existing SFUDA approaches often rely on pseudo-labeling techniques based on entropy or confidence metrics. These often overlook fine-grained data features, resulting in noisy pseudo-labels that degrade model performance. To overcome this limitation, we develop a new method called fine-grained pseudo-labeling and feature alignment (FGPLFA) to enhance SFUDA's performance. FGPLFA starts with a gradient-based metric that integrates insights from both model knowledge and data features, creating a more reliable sample metric. To enhance fine granularity, the fine-grained pseudo-labeling (FGPL) module was introduced. This module clusters data based on the magnitude and direction of gradients, allowing for dataset partitioning into subsets at the sample level. The subsets are pseudo-labeled with category-specificity and domain specificity, establishing a multilevel granularity structure that reduces noisy pseudo-labels. Subsequently, the mean-covariance adjustment feature alignment (MCAFA) method was introduced. Features from the subsets are aligned in a specified sequence, enhancing model adaptability in the target domain. Extensive experiments conducted across multiple datasets validate the superiority of FGPLFA.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"53 1","pages":""},"PeriodicalIF":8.9000,"publicationDate":"2025-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FGPLFA: Fine-Grained Pseudo-Labeling and Feature Alignment for Source-Free Unsupervised Domain Adaptation.\",\"authors\":\"Zhongyi Wen,Qiang Li,Yatong Wang,Huaizong Shao,Guoming Sun\",\"doi\":\"10.1109/tnnls.2025.3616236\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Source-free unsupervised domain adaptation (SFUDA) aims to improve performance in unlabeled target domain data without accessing source domain data. This is crucial in scenarios with data-sharing restrictions due to privacy or compliance constraints. Existing SFUDA approaches often rely on pseudo-labeling techniques based on entropy or confidence metrics. These often overlook fine-grained data features, resulting in noisy pseudo-labels that degrade model performance. To overcome this limitation, we develop a new method called fine-grained pseudo-labeling and feature alignment (FGPLFA) to enhance SFUDA's performance. FGPLFA starts with a gradient-based metric that integrates insights from both model knowledge and data features, creating a more reliable sample metric. To enhance fine granularity, the fine-grained pseudo-labeling (FGPL) module was introduced. This module clusters data based on the magnitude and direction of gradients, allowing for dataset partitioning into subsets at the sample level. The subsets are pseudo-labeled with category-specificity and domain specificity, establishing a multilevel granularity structure that reduces noisy pseudo-labels. Subsequently, the mean-covariance adjustment feature alignment (MCAFA) method was introduced. 
Features from the subsets are aligned in a specified sequence, enhancing model adaptability in the target domain. Extensive experiments conducted across multiple datasets validate the superiority of FGPLFA.\",\"PeriodicalId\":13303,\"journal\":{\"name\":\"IEEE transactions on neural networks and learning systems\",\"volume\":\"53 1\",\"pages\":\"\"},\"PeriodicalIF\":8.9000,\"publicationDate\":\"2025-10-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on neural networks and learning systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1109/tnnls.2025.3616236\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/tnnls.2025.3616236","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
FGPLFA: Fine-Grained Pseudo-Labeling and Feature Alignment for Source-Free Unsupervised Domain Adaptation.
Source-free unsupervised domain adaptation (SFUDA) aims to improve performance on unlabeled target-domain data without accessing source-domain data. This is crucial in scenarios where data sharing is restricted by privacy or compliance constraints. Existing SFUDA approaches often rely on pseudo-labeling techniques based on entropy or confidence metrics. These often overlook fine-grained data features, resulting in noisy pseudo-labels that degrade model performance. To overcome this limitation, we develop a new method called fine-grained pseudo-labeling and feature alignment (FGPLFA) to enhance SFUDA performance. FGPLFA starts with a gradient-based metric that integrates insights from both model knowledge and data features, yielding a more reliable sample metric. To achieve finer granularity, a fine-grained pseudo-labeling (FGPL) module is introduced. This module clusters data based on the magnitude and direction of gradients, allowing the dataset to be partitioned into subsets at the sample level. The subsets are pseudo-labeled with category and domain specificity, establishing a multilevel granularity structure that reduces noisy pseudo-labels. Subsequently, a mean-covariance adjustment feature alignment (MCAFA) method is introduced. Features from the subsets are aligned in a specified sequence, enhancing model adaptability in the target domain. Extensive experiments conducted across multiple datasets validate the superiority of FGPLFA.
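The abstract does not include an implementation, so the PyTorch snippet below is only a rough, hypothetical sketch of the kind of pipeline it describes: a per-sample gradient metric computed from an entropy objective (an assumption, since the paper's exact objective is not stated), partitioning of samples by gradient magnitude and dominant direction, and a CORAL-style penalty on mean and covariance mismatch between subsets. All function names, thresholds, and loss choices are illustrative stand-ins, not the authors' FGPL or MCAFA modules.

```python
# Hypothetical sketch loosely following the abstract; not the authors' code.
import torch
import torch.nn.functional as F


def gradient_metric(model, x):
    """Per-sample gradient of an entropy objective w.r.t. the logits;
    returns its magnitude and normalized direction (assumed objective)."""
    logits = model(x)                                     # (B, C)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)
    grads = torch.autograd.grad(entropy.sum(), logits)[0]  # (B, C)
    return grads.norm(dim=1), F.normalize(grads, dim=1)


def partition_by_gradient(magnitude, direction, mag_threshold=0.5):
    """Split samples into low-gradient ('reliable') and high-gradient
    ('uncertain') subsets, and group them by their dominant gradient
    direction as a crude category-specific clustering."""
    reliable = magnitude < mag_threshold                  # (B,) bool mask
    cluster_id = direction.abs().argmax(dim=1)            # (B,) class index
    return reliable, cluster_id


def mean_cov_alignment(feat_a, feat_b):
    """CORAL-style penalty on differences in feature means and covariances
    between two subsets (a stand-in for the MCAFA step)."""
    def moments(f):
        mu = f.mean(dim=0, keepdim=True)
        fc = f - mu
        return mu, fc.t() @ fc / max(f.size(0) - 1, 1)

    mu_a, cov_a = moments(feat_a)
    mu_b, cov_b = moments(feat_b)
    return (mu_a - mu_b).pow(2).sum() + (cov_a - cov_b).pow(2).sum()
```

A training loop built on this sketch would pseudo-label the reliable subset cluster by cluster and add the alignment term between subsets to the adaptation loss; the actual FGPLFA procedure reported in the paper is more elaborate than this surrogate.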
About the journal:
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.