Deep Label Propagation With Nuclear Norm Maximization for Visual Domain Adaptation
Wei Wang; Hanyang Li; Cong Wang; Chao Huang; Zhengming Ding; Feiping Nie; Xiaochun Cao
IEEE Transactions on Image Processing, vol. 34, pp. 1246-1258. Published 2025-01-29. DOI: 10.1109/TIP.2025.3533199
Abstract
Domain adaptation aims to transfer abundant label information from a labeled source domain to an unlabeled target domain whose data distributions differ. Existing methods usually rely on a classifier to generate high-quality pseudo-labels for the target domain, which facilitates the learning of discriminative features. Label propagation (LP), an effective classifier, propagates labels from the source domain to the target domain by optimizing a smooth labeling function over a similarity graph that encodes the structural relationships among data points in feature space. However, LP has not been thoroughly explored in deep neural network-based domain adaptation approaches. Moreover, the probabilistic labels generated by LP have low confidence, and LP is sensitive to the class imbalance problem. To address these problems, we propose a novel domain adaptation approach named deep label propagation with nuclear norm maximization (DLP-NNM). Specifically, we employ a nuclear norm maximization constraint to enhance both label confidence and class diversity in LP, and we propose an efficient algorithm to solve the corresponding optimization problem. We then use the proposed LP to guide the classifier layer of a deep discriminative adaptation network through the cross-entropy loss. As a result, the network produces more reliable predictions for the target domain, which in turn enables more effective discriminative feature learning. Extensive experiments on three cross-domain benchmark datasets demonstrate that the proposed DLP-NNM surpasses existing state-of-the-art domain adaptation approaches.
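To make the two ingredients of the abstract concrete, the sketch below shows the classic graph-based label propagation formulation (Zhou et al.'s closed-form solution of the smoothness objective) together with the nuclear norm of the resulting prediction matrix, which DLP-NNM proposes to maximize. The Gaussian-kernel graph, the parameters alpha and sigma, and the function names are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch: classic label propagation plus the nuclear norm of its output.
# Assumed parameters (alpha, sigma) and graph construction are illustrative only.
import numpy as np

def label_propagation(X, Y, alpha=0.99, sigma=1.0):
    """X: (n, d) features from source and target samples stacked together.
    Y: (n, c) one-hot labels, with all-zero rows for unlabeled target samples.
    Returns F: (n, c) soft label predictions."""
    # Gaussian-kernel affinity graph over all samples
    d2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1) + 1e-12)
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # Closed-form minimizer of the graph-smoothness objective:
    # F = (1 - alpha) * (I - alpha * S)^{-1} * Y
    n = X.shape[0]
    F = (1.0 - alpha) * np.linalg.solve(np.eye(n) - alpha * S, Y)
    return F

def nuclear_norm(F):
    """Nuclear norm ||F||_* (sum of singular values). Larger values correspond to
    predictions that are both more confident and more evenly spread over classes,
    which is the property the NNM constraint encourages."""
    return np.linalg.norm(F, ord="nuc")
```

In DLP-NNM this nuclear norm term is imposed as a constraint inside the LP objective rather than computed after the fact; the sketch only illustrates the quantities the abstract refers to.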