{"title":"用于无监督领域适应性人员再识别的深度相互提炼技术","authors":"Xingyu Gao;Zhenyu Chen;Jianze Wei;Rubo Wang;Zhijun Zhao","doi":"10.1109/TMM.2024.3459637","DOIUrl":null,"url":null,"abstract":"Unsupervised domain adaptation person re-identification (UDA person re-ID) aims at transferring the knowledge on the source domain with expensive manual annotation to the unlabeled target domain. Most of the recent papers leverage pseudo-labels for the target images to accomplish this task. However, the noise in the generated labels hinders the identification system from learning discriminative features. To address this problem, we propose a deep mutual distillation (DMD) to generate reliable pseudo-labels for UDA person re-ID. The proposed DMD applies two parallel branches for feature extraction, and each branch serves as the teacher of the other to generate pseudo-labels for its training. This mutually reinforcing optimization framework enhances the reliability of pseudo-labels, improving the identification performance. In addition, we present a bilateral graph representation (BGR) to describe the pedestrian images. BGR mimics the person re-identification of the human to aggregate the identity features according to the visual similarity and attribute consistency. 
Experimental results on Market-1501 and Duke demonstrate the effectiveness and generalization of the proposed method.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"1059-1071"},"PeriodicalIF":8.4000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep Mutual Distillation for Unsupervised Domain Adaptation Person Re-Identification\",\"authors\":\"Xingyu Gao;Zhenyu Chen;Jianze Wei;Rubo Wang;Zhijun Zhao\",\"doi\":\"10.1109/TMM.2024.3459637\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Unsupervised domain adaptation person re-identification (UDA person re-ID) aims at transferring the knowledge on the source domain with expensive manual annotation to the unlabeled target domain. Most of the recent papers leverage pseudo-labels for the target images to accomplish this task. However, the noise in the generated labels hinders the identification system from learning discriminative features. To address this problem, we propose a deep mutual distillation (DMD) to generate reliable pseudo-labels for UDA person re-ID. The proposed DMD applies two parallel branches for feature extraction, and each branch serves as the teacher of the other to generate pseudo-labels for its training. This mutually reinforcing optimization framework enhances the reliability of pseudo-labels, improving the identification performance. In addition, we present a bilateral graph representation (BGR) to describe the pedestrian images. BGR mimics the person re-identification of the human to aggregate the identity features according to the visual similarity and attribute consistency. 
Experimental results on Market-1501 and Duke demonstrate the effectiveness and generalization of the proposed method.\",\"PeriodicalId\":13273,\"journal\":{\"name\":\"IEEE Transactions on Multimedia\",\"volume\":\"27 \",\"pages\":\"1059-1071\"},\"PeriodicalIF\":8.4000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Multimedia\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10678811/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Multimedia","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10678811/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Deep Mutual Distillation for Unsupervised Domain Adaptation Person Re-Identification
Unsupervised domain adaptation person re-identification (UDA person re-ID) aims to transfer knowledge from a source domain with expensive manual annotation to an unlabeled target domain. Most recent methods leverage pseudo-labels for the target images to accomplish this task. However, noise in the generated labels hinders the identification system from learning discriminative features. To address this problem, we propose deep mutual distillation (DMD) to generate reliable pseudo-labels for UDA person re-ID. The proposed DMD applies two parallel branches for feature extraction, and each branch serves as the teacher of the other, generating pseudo-labels for the other's training. This mutually reinforcing optimization framework enhances the reliability of the pseudo-labels, improving identification performance. In addition, we present a bilateral graph representation (BGR) to describe pedestrian images. BGR mimics how humans perform person re-identification, aggregating identity features according to visual similarity and attribute consistency. Experimental results on Market-1501 and Duke demonstrate the effectiveness and generalization of the proposed method.
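The core mutual-distillation idea described above (two parallel branches, each producing pseudo-labels that supervise the other) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the linear "branches", the k-means clustering step, and all function names are assumptions introduced for illustration.

```python
import numpy as np

def extract_features(images, weights):
    # Hypothetical linear branch: projects inputs into an embedding space
    # and L2-normalizes, standing in for a deep feature extractor.
    feats = images @ weights
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)

def cluster_pseudo_labels(feats, n_clusters, n_iters=10, seed=0):
    # Simple k-means: assigns pseudo-identity labels to unlabeled
    # target-domain features (a common pseudo-labeling strategy).
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), n_clusters, replace=False)]
    for _ in range(n_iters):
        dists = ((feats[:, None] - centers[None]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        for k in range(n_clusters):
            if (labels == k).any():
                centers[k] = feats[labels == k].mean(0)
    return labels

# Toy unlabeled target-domain "images" (flattened vectors).
rng = np.random.default_rng(42)
images = rng.normal(size=(60, 16))

# Two parallel branches with independently initialized parameters.
w_a = rng.normal(size=(16, 8))
w_b = rng.normal(size=(16, 8))

feats_a = extract_features(images, w_a)
feats_b = extract_features(images, w_b)

# Mutual distillation: each branch's cluster assignments supervise
# the *other* branch, so the two act as each other's teacher.
labels_for_b = cluster_pseudo_labels(feats_a, n_clusters=4)  # A teaches B
labels_for_a = cluster_pseudo_labels(feats_b, n_clusters=4)  # B teaches A
```

In a full system each branch would then be trained with a re-ID loss on the pseudo-labels provided by its peer, and the extract/cluster/train cycle repeated; the claimed benefit is that disagreement between the two branches filters label noise.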
Journal introduction:
The IEEE Transactions on Multimedia delves into diverse aspects of multimedia technology and applications, covering circuits, networking, signal processing, systems, software, and systems integration. The scope aligns with the Fields of Interest of the sponsors, ensuring a comprehensive exploration of research in multimedia.