{"title":"保留身份风格迁移的跨域人物再认同","authors":"Shixing Chen, Caojin Zhang, Mingtao Dong, Chengcui Zhang","doi":"10.1109/MIPR51284.2021.00008","DOIUrl":null,"url":null,"abstract":"Although great successes have been achieved recently in person re-identification (re-ID), there are still two major obstacles restricting its real-world performance: large variety of camera styles and a limited number of samples for each identity. In this paper, we propose an efficient and scalable framework for cross-domain re-ID tasks. Single-model style transfer and pairwise comparison are seamlessly integrated in our framework through adversarial training. Moreover, we propose a novel identity-preserving loss to replace the content loss in style transfer and mathematically show that its minimization guarantees that the generated images have identical conditional distributions (conditioned on identity) as the real ones, which is critical for cross-domain person re-ID. Our model achieved state-of-the-art results in challenging cross-domain re-ID tasks.","PeriodicalId":139543,"journal":{"name":"2021 IEEE 4th International Conference on Multimedia Information Processing and Retrieval (MIPR)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cross-domain Person Re-Identification with Identity-preserving Style Transfer\",\"authors\":\"Shixing Chen, Caojin Zhang, Mingtao Dong, Chengcui Zhang\",\"doi\":\"10.1109/MIPR51284.2021.00008\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Although great successes have been achieved recently in person re-identification (re-ID), there are still two major obstacles restricting its real-world performance: large variety of camera styles and a limited number of samples for each identity. In this paper, we propose an efficient and scalable framework for cross-domain re-ID tasks. 
Single-model style transfer and pairwise comparison are seamlessly integrated in our framework through adversarial training. Moreover, we propose a novel identity-preserving loss to replace the content loss in style transfer and mathematically show that its minimization guarantees that the generated images have identical conditional distributions (conditioned on identity) as the real ones, which is critical for cross-domain person re-ID. Our model achieved state-of-the-art results in challenging cross-domain re-ID tasks.\",\"PeriodicalId\":139543,\"journal\":{\"name\":\"2021 IEEE 4th International Conference on Multimedia Information Processing and Retrieval (MIPR)\",\"volume\":\"16 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE 4th International Conference on Multimedia Information Processing and Retrieval (MIPR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MIPR51284.2021.00008\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 4th International Conference on Multimedia Information Processing and Retrieval (MIPR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MIPR51284.2021.00008","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cross-domain Person Re-Identification with Identity-preserving Style Transfer
Although great successes have been achieved recently in person re-identification (re-ID), two major obstacles still restrict its real-world performance: the large variety of camera styles and the limited number of samples per identity. In this paper, we propose an efficient and scalable framework for cross-domain re-ID tasks. Single-model style transfer and pairwise comparison are seamlessly integrated into our framework through adversarial training. Moreover, we propose a novel identity-preserving loss to replace the content loss in style transfer, and we show mathematically that minimizing it guarantees that the generated images have the same conditional distribution (given identity) as the real images, which is critical for cross-domain person re-ID. Our model achieved state-of-the-art results on challenging cross-domain re-ID tasks.
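The abstract does not give the exact formulation of the identity-preserving loss. A common form for such a loss penalizes the distance between identity features of a real image and its style-transferred counterpart; the sketch below illustrates that idea with a stand-in linear feature extractor (in practice this would be a re-ID CNN backbone). All names here are illustrative, not the paper's.

```python
import numpy as np

def identity_embedding(image, proj):
    # Stand-in for an identity feature extractor (e.g. a re-ID backbone);
    # here just a fixed linear projection of the flattened image.
    return proj @ image.ravel()

def identity_preserving_loss(real, generated, proj):
    # Penalize the squared distance between identity features of the real
    # image and its style-transferred version, so identity is preserved
    # even as camera style changes.
    f_real = identity_embedding(real, proj)
    f_gen = identity_embedding(generated, proj)
    return float(np.mean((f_real - f_gen) ** 2))

rng = np.random.default_rng(0)
proj = rng.standard_normal((64, 8 * 8))  # toy 64-dim embedding
img = rng.standard_normal((8, 8))        # toy "person image"

# An identical image incurs zero loss; a perturbed one incurs positive loss.
assert identity_preserving_loss(img, img, proj) == 0.0
assert identity_preserving_loss(img, img + 0.1, proj) > 0.0
```

Minimizing this kind of loss pushes the generator toward images whose identity features match those of the source, which is the property the paper argues is critical for cross-domain re-ID.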