CDTNET: Cross-Domain Transformer Based on Attributes for Person Re-Identification

Mengyuan Guan, Suncheng Xiang, Ting Liu, Yuzhuo Fu
{"title":"CDTNET:基于属性的跨域人员再识别转换器","authors":"Mengyuan Guan, Suncheng Xiang, Ting Liu, Yuzhuo Fu","doi":"10.1109/ICMEW56448.2022.9859330","DOIUrl":null,"url":null,"abstract":"Unsupervised Domain Adaptation (UDA) Person reidentification (ReID) strives towards fine-tuning the model trained on a labelled source-domain dataset to a target-domain dataset, which has grown by leaps and bounds due to the advancement of deep convolution neural network (CNN). However, traditional CNN-based methods mainly focus on learning small discriminative features in local pedestrian region, which fails to exploit the potential of rich structural patterns and suffers from information loss on details caused by convolution operators. To tackle the challenge, this work attempts to exploit the valuable fine-grained attributes based on Transformers. Inspired by this, we propose a Cross-Domain Transformer network CDTnet to enhance the robust feature learning in connection with pedestrian attributes. As far as we are aware, we are among the first attempt to adopt a pure transformer for cross-domain ReID research. All-inclusive experiments conducted on several ReID benchmarks demonstrate that our method can reach a comparable yield with reference to the state-of-the-arts.","PeriodicalId":106759,"journal":{"name":"2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"CDTNET: Cross-Domain Transformer Based on Attributes for Person Re-Identification\",\"authors\":\"Mengyuan Guan, Suncheng Xiang, Ting Liu, Yuzhuo Fu\",\"doi\":\"10.1109/ICMEW56448.2022.9859330\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Unsupervised Domain Adaptation (UDA) Person reidentification (ReID) strives towards fine-tuning the model trained on a labelled source-domain dataset to a target-domain dataset, which has grown by leaps and bounds due to the advancement of deep convolution neural network (CNN). However, traditional CNN-based methods mainly focus on learning small discriminative features in local pedestrian region, which fails to exploit the potential of rich structural patterns and suffers from information loss on details caused by convolution operators. To tackle the challenge, this work attempts to exploit the valuable fine-grained attributes based on Transformers. Inspired by this, we propose a Cross-Domain Transformer network CDTnet to enhance the robust feature learning in connection with pedestrian attributes. As far as we are aware, we are among the first attempt to adopt a pure transformer for cross-domain ReID research. 
All-inclusive experiments conducted on several ReID benchmarks demonstrate that our method can reach a comparable yield with reference to the state-of-the-arts.\",\"PeriodicalId\":106759,\"journal\":{\"name\":\"2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)\",\"volume\":\"53 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-07-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICMEW56448.2022.9859330\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMEW56448.2022.9859330","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Unsupervised domain adaptation (UDA) for person re-identification (ReID) aims to fine-tune a model trained on a labelled source-domain dataset to a target-domain dataset, a task that has advanced by leaps and bounds with the progress of deep convolutional neural networks (CNNs). However, traditional CNN-based methods mainly learn small discriminative features in local pedestrian regions, which fails to exploit the potential of rich structural patterns and suffers from loss of detail information caused by convolution operators. To tackle this challenge, this work exploits valuable fine-grained attributes with a Transformer. We propose CDTnet, a Cross-Domain Transformer network that enhances robust feature learning in connection with pedestrian attributes. To the best of our knowledge, this is among the first attempts to adopt a pure transformer for cross-domain ReID research. Comprehensive experiments on several ReID benchmarks demonstrate that our method achieves results comparable to the state of the art.
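The abstract does not include code, so the sketch below is only a rough illustration of the idea it describes: a ViT-style pure-transformer encoder whose global feature feeds both an identity classifier (supervised on the labelled source domain) and a pedestrian-attribute classifier. All layer sizes, the identity and attribute counts, and the training recipe in the comments are assumptions for illustration, not the authors' reported CDTnet configuration.

```python
import torch
import torch.nn as nn

class AttributeReIDTransformer(nn.Module):
    """Illustrative sketch of an attribute-aware transformer for cross-domain
    ReID. Hyperparameters (dim, depth, 751 identities, 27 attributes) are
    assumptions, not CDTnet's published configuration."""

    def __init__(self, img_h=256, img_w=128, patch=16, dim=384, depth=6,
                 heads=6, num_ids=751, num_attrs=27):
        super().__init__()
        n_patches = (img_h // patch) * (img_w // patch)
        # Patch embedding via a strided convolution, as in a standard ViT.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.id_head = nn.Linear(dim, num_ids)      # source-domain identity logits
        self.attr_head = nn.Linear(dim, num_attrs)  # multi-label attribute logits

    def forward(self, x):
        b = x.size(0)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)   # B x N x dim
        tokens = torch.cat([self.cls_token.expand(b, -1, -1), tokens], dim=1)
        feats = self.encoder(tokens + self.pos_embed)
        cls = feats[:, 0]                      # global pedestrian descriptor
        return cls, self.id_head(cls), self.attr_head(cls)

if __name__ == "__main__":
    model = AttributeReIDTransformer()
    imgs = torch.randn(4, 3, 256, 128)         # a small batch of person crops
    feat, id_logits, attr_logits = model(imgs)
    # A typical recipe under these assumptions: cross-entropy on id_logits for
    # labelled source images, BCEWithLogitsLoss on attr_logits, and the `feat`
    # descriptor for retrieval on the unlabelled target domain.
    print(feat.shape, id_logits.shape, attr_logits.shape)
```

The intuition, as described in the abstract, is that pedestrian attributes are fine-grained cues shared across datasets, so supervising the transformer feature with an attribute head can make it more robust when transferring from source to target domain.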