DATR: Unsupervised Domain Adaptive Detection Transformer With Dataset-Level Adaptation and Prototypical Alignment

Authors: Liang Chen; Jianhong Han; Yupei Wang
DOI: 10.1109/TIP.2025.3527370
Journal: IEEE Transactions on Image Processing, vol. 34, pp. 982-994
Publication date: 2025-01-14
Article page: https://ieeexplore.ieee.org/document/10841964/
Code: https://github.com/h751410234/DATR
Citations: 0
Abstract
With the success of the DEtection TRansformer (DETR), numerous researchers have explored its effectiveness in addressing unsupervised domain adaptation tasks. Existing methods leverage carefully designed feature alignment techniques to align the backbone or encoder, yielding promising results. However, effectively aligning instance-level features within the unique decoder structure of the detector has largely been neglected. Related techniques primarily align instance-level features in a class-agnostic manner, overlooking distinctions between features from different categories, which results in only limited improvements. Furthermore, the scope of current alignment modules in the decoder is often restricted to a limited batch of images, failing to capture the dataset-level cues, thereby severely constraining the detector’s generalization ability to the target domain. To this end, we introduce a strong DETR-based detector named Domain Adaptive detection TRansformer (DATR) for unsupervised domain adaptation of object detection. First, we propose the Class-wise Prototypes Alignment (CPA) module, which effectively aligns cross-domain features in a class-aware manner by bridging the gap between the object detection task and the domain adaptation task. Then, the designed Dataset-level Alignment Scheme (DAS) explicitly guides the detector to achieve global representation and enhance inter-class distinguishability of instance-level features across the entire dataset, which spans both domains, by leveraging contrastive learning. Moreover, DATR incorporates a mean-teacher-based self-training framework, utilizing pseudo-labels generated by the teacher model to further mitigate domain bias. Extensive experimental results demonstrate superior performance and generalization capabilities of our proposed DATR in multiple domain adaptation scenarios. Code is released at https://github.com/h751410234/DATR.
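To make the idea of class-aware (rather than class-agnostic) instance alignment concrete, here is a minimal, self-contained sketch. It is not the paper's CPA implementation; every name and parameter below is hypothetical. The sketch maintains one exponential-moving-average prototype per class per domain from instance-level features, then penalizes the dissimilarity between same-class prototypes across the source and target domains, so features of the same category are pulled together while categories remain distinct.

```python
import math

def ema_update(proto, feat, momentum=0.9):
    """EMA update of a per-class prototype from one instance feature.

    A None prototype is initialized with the first observed feature.
    """
    if proto is None:
        return list(feat)
    return [momentum * p + (1 - momentum) * f for p, f in zip(proto, feat)]

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def class_alignment_loss(src_protos, tgt_protos):
    """Average (1 - cosine similarity) over classes seen in both domains."""
    losses = []
    for c, src_p in src_protos.items():
        tgt_p = tgt_protos.get(c)
        if src_p is not None and tgt_p is not None:
            losses.append(1.0 - cosine(src_p, tgt_p))
    return sum(losses) / len(losses) if losses else 0.0

# Toy usage: class 0 observed in both domains with 3-D instance features;
# class 1 has no target-domain instances yet and is skipped.
src = {0: None, 1: None}
tgt = {0: None, 1: None}
for feat in [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0]]:
    src[0] = ema_update(src[0], feat)
tgt[0] = ema_update(tgt[0], [0.8, 0.2, 0.0])
loss = class_alignment_loss(src, tgt)  # small but nonzero: prototypes nearly aligned
```

In a real detector the per-class features would come from decoder queries matched to boxes, and the loss would be combined with a contrastive term so that prototypes of *different* classes are pushed apart, in the spirit of the dataset-level alignment the abstract describes.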