Person Re-identification Method Based on Cross-domain Image Style Conversion
Chenkui Wang, Yuelin Chen, Xiaodong Cai, Wenjing Tian
Proceedings of the 2020 4th International Conference on Digital Signal Processing, 2020-06-19
DOI: 10.1145/3408127.3408169
Abstract
Existing cross-domain person re-identification suffers from the large gap between source and target domains: a model trained on a single domain performs significantly worse when applied directly to another domain. In this paper, a person image style conversion method is used to increase sample diversity and to narrow the gap between source-domain and target-domain samples. Using Cycle Generative Adversarial Networks, source-domain images are rendered in the style of another domain so that they more closely resemble the target-domain data. Compared with the unconverted samples, this enhances sample diversity, reduces the inter-domain difference, and gives the model better generalization ability. Experimental results show that after translating the styles of the Market-1501, DukeMTMC-reID and MSMT17 datasets and then extracting global features with ResNet-50, the accuracy of cross-domain re-identification is significantly improved. Moreover, the model achieves better re-identification results even without incorporating the style of the target-domain dataset, or when target-domain data are scarce, outperforming other currently well-performing cross-domain person re-identification methods.
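The style conversion described above rests on CycleGAN's cycle-consistency constraint: a generator G maps source-style images toward the target style, a second generator F maps back, and an L1 penalty forces each round trip to reconstruct its input. The abstract gives no implementation details, so the sketch below is purely illustrative: G and F are toy invertible linear maps standing in for the paper's networks, used only to show how the cycle-consistency loss is computed.

```python
import numpy as np

# Hypothetical stand-in generators. In CycleGAN, G translates source-domain
# images to the target style and F translates back; here they are simple
# linear maps that happen to be exact inverses of each other.
def G(x):
    return 0.5 * x + 1.0      # "source -> target style" (toy)

def F(y):
    return 2.0 * (y - 1.0)    # "target -> source style" (toy inverse of G)

def cycle_consistency_loss(x, y):
    """L1 cycle loss: F(G(x)) should reconstruct x, and G(F(y)) should
    reconstruct y. In training, minimizing this keeps the translated image
    faithful to the original person's identity while changing only style."""
    forward = np.mean(np.abs(F(G(x)) - x))    # x -> G -> F -> x
    backward = np.mean(np.abs(G(F(y)) - y))   # y -> F -> G -> y
    return forward + backward

x = np.random.rand(4, 8)   # batch of "source-domain" samples (toy)
y = np.random.rand(4, 8)   # batch of "target-domain" samples (toy)
print(cycle_consistency_loss(x, y))  # 0.0 here, since F exactly inverts G
```

In the real method, this loss is combined with adversarial losses on both domains, and the translated images are then used to train the ResNet-50 feature extractor, so the re-ID model sees identity-preserving samples in the target domain's style.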