{"title":"Domain transfer for person re-identification","authors":"Ryan Layne, Timothy M. Hospedales, S. Gong","doi":"10.1145/2510650.2510658","DOIUrl":null,"url":null,"abstract":"Automatic person re-identification in is a crucial capability underpinning many applications in public space video surveillance. It is challenging due to intra-class variation in person appearance when observed in different views, together with limited inter-class variability. Various recent approaches have made great progress in re-identification performance using discriminative learning techniques. However, these approaches are fundamentally limited by the requirement of extensive annotated training data for every pair of views. For practical re-identification, this is an unreasonable assumption, as annotating extensive volumes of data for every pair of cameras to be re-identified may be impossible or prohibitively expensive.\n In this paper we move toward relaxing this strong assumption by investigating flexible multi-source transfer of re-identification models across camera pairs. Specifically, we show how to leverage prior re-identification models learned for a set of source view pairs (domains), and flexibly combine these to obtain good re-identification performance in a target view pair (domain) with greatly reduced training data requirements in the target domain.","PeriodicalId":360789,"journal":{"name":"ACM/IEEE international workshop on Analysis and Retrieval of Tracked Events and Motion in Imagery Stream","volume":"49 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"30","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM/IEEE international workshop on Analysis and Retrieval of Tracked Events and Motion in Imagery Stream","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2510650.2510658","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 30
Abstract
Automatic person re-identification is a crucial capability underpinning many applications in public space video surveillance. It is challenging due to large intra-class variation in person appearance across different views, together with limited inter-class variability. Recent approaches have made great progress in re-identification performance using discriminative learning techniques. However, these approaches are fundamentally limited by their requirement for extensive annotated training data for every pair of views. For practical re-identification this is an unreasonable assumption, as annotating extensive volumes of data for every pair of cameras to be re-identified may be impossible or prohibitively expensive.
In this paper we move toward relaxing this strong assumption by investigating flexible multi-source transfer of re-identification models across camera pairs. Specifically, we show how to leverage prior re-identification models learned for a set of source view pairs (domains), and flexibly combine these to obtain good re-identification performance in a target view pair (domain) with greatly reduced training data requirements in the target domain.
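For illustration, the following is a minimal sketch (not the authors' implementation) of the kind of multi-source combination the abstract describes: each source camera pair (domain) contributes an already-learned similarity scorer, and combination weights are fitted on a small amount of annotated target-domain data. All names here (source_scorers, learn_weights, and so on) are hypothetical.

```python
import numpy as np

# Minimal sketch of multi-source model combination for re-identification.
# Assumption: each source camera pair (domain) has already yielded a learned
# scoring function score(a, b) -> similarity between two appearance feature
# vectors. Names and structure are illustrative, not the paper's API.

def combined_score(a, b, source_scorers, weights):
    """Score a probe/gallery pair as a weighted sum of per-source scores."""
    scores = np.array([s(a, b) for s in source_scorers])
    return float(weights @ scores)

def learn_weights(pairs, labels, source_scorers, lr=0.1, epochs=200):
    """Fit combination weights on a small annotated target-domain set by
    logistic regression over the per-source scores (illustrative only)."""
    X = np.array([[s(a, b) for s in source_scorers] for a, b in pairs])
    y = np.asarray(labels, dtype=float)  # 1 = same identity, 0 = different
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted match probability
        w += lr * X.T @ (y - p) / len(y)   # gradient ascent on log-likelihood
    return w
```

The point of such a scheme is that only the low-dimensional weight vector is estimated from target-domain annotations, so far fewer labelled pairs are needed in the target view pair than would be required to train a full re-identification model from scratch.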