CDTNET: Cross-Domain Transformer Based on Attributes for Person Re-Identification
Mengyuan Guan, Suncheng Xiang, Ting Liu, Yuzhuo Fu
2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), 2022-07-18
DOI: 10.1109/ICMEW56448.2022.9859330
Citations: 1
Abstract
Unsupervised Domain Adaptation (UDA) person re-identification (ReID) aims to adapt a model trained on a labelled source-domain dataset to an unlabelled target-domain dataset, and has advanced by leaps and bounds with the development of deep convolutional neural networks (CNNs). However, traditional CNN-based methods mainly focus on learning small discriminative features in local pedestrian regions, which fails to exploit the potential of rich structural patterns and suffers from loss of detail caused by convolution operators. To tackle this challenge, this work exploits valuable fine-grained attributes with a transformer. Specifically, we propose a Cross-Domain Transformer network, CDTnet, to enhance robust feature learning in connection with pedestrian attributes. To the best of our knowledge, this is among the first attempts to adopt a pure transformer for cross-domain ReID research. Comprehensive experiments on several ReID benchmarks demonstrate that our method achieves results comparable to the state of the art.
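To make the idea of attribute-aware transformer feature learning concrete, the sketch below shows a minimal ViT-style ReID backbone whose token sequence is a class token, a set of learnable pedestrian-attribute tokens, and image patch tokens. This is not the authors' CDTnet implementation; the layer sizes, the number of attributes, the identity-classification head, and the fusion strategy are illustrative assumptions.

```python
# Hypothetical sketch: transformer encoder over [CLS] + attribute tokens + patch tokens.
# All hyperparameters (dim, depth, num_attrs, num_ids) are assumptions for illustration.
import torch
import torch.nn as nn


class AttributeTransformerReID(nn.Module):
    def __init__(self, img_h=256, img_w=128, patch=16, dim=384, depth=6,
                 heads=6, num_attrs=10, num_ids=751):
        super().__init__()
        # Non-overlapping patch embedding via a strided convolution.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        num_patches = (img_h // patch) * (img_w // patch)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        # One learnable token per binary pedestrian attribute (e.g. backpack, hat, ...).
        self.attr_tokens = nn.Parameter(torch.zeros(1, num_attrs, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, 1 + num_attrs + num_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.id_head = nn.Linear(dim, num_ids)   # identity logits (labelled source domain)
        self.attr_head = nn.Linear(dim, 1)       # per-attribute binary logit

    def forward(self, x):
        b = x.size(0)
        patches = self.patch_embed(x).flatten(2).transpose(1, 2)        # (B, N, dim)
        tokens = torch.cat([self.cls_token.expand(b, -1, -1),
                            self.attr_tokens.expand(b, -1, -1),
                            patches], dim=1) + self.pos_embed
        out = self.encoder(tokens)
        feat = out[:, 0]                                                # global ReID feature
        num_attrs = self.attr_tokens.size(1)
        attr_logits = self.attr_head(out[:, 1:1 + num_attrs]).squeeze(-1)
        return feat, self.id_head(feat), attr_logits


# Usage: extract a global feature plus attribute predictions for a batch of pedestrian crops.
model = AttributeTransformerReID()
imgs = torch.randn(4, 3, 256, 128)
feat, id_logits, attr_logits = model(imgs)
print(feat.shape, id_logits.shape, attr_logits.shape)  # (4, 384) (4, 751) (4, 10)
```

In a cross-domain setting, a backbone like this would typically be trained on the source domain with identity and attribute losses, then the global feature would be used for retrieval or pseudo-label-based fine-tuning on the target domain; the exact training recipe used by CDTnet is described in the paper itself.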