Q. Hong, N. N. Tuan, T. T. Quang, Dung Nguyen Tien, C. Le
{"title":"Deep spatio-temporal network for accurate person re-identification","authors":"Q. Hong, N. N. Tuan, T. T. Quang, Dung Nguyen Tien, C. Le","doi":"10.1109/INFOC.2017.8001673","DOIUrl":null,"url":null,"abstract":"Feature extraction is one of two core tasks of a person re-identification besides metric learning. Building an effective feature extractor is the common goal of any research in the field. In this work, we propose a deep spatio-temporal network model which consists of a VGG-16 as a spatial feature extractor and a GRU network as an image sequence descriptor. Two temporal pooling techniques are investigated to produce compact yet discriminative sequence-level representation from a sequence of arbitrary length. To highlight the effectiveness of the final sequence-level feature set, we use a cosine distance metric learning to find an accurate probe-gallery pair. Experimental results on the ilIDS-VID and PRID 2011 dataset show that our method is slightly better on one dataset and significantly better on the other than state-of-the-art ones.","PeriodicalId":109602,"journal":{"name":"2017 International Conference on Information and Communications (ICIC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 International Conference on Information and Communications (ICIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFOC.2017.8001673","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2
Abstract
Feature extraction is one of the two core tasks of person re-identification, the other being metric learning. Building an effective feature extractor is the common goal of research in the field. In this work, we propose a deep spatio-temporal network model consisting of a VGG-16 as a spatial feature extractor and a GRU network as an image-sequence descriptor. Two temporal pooling techniques are investigated to produce a compact yet discriminative sequence-level representation from a sequence of arbitrary length. To highlight the effectiveness of the final sequence-level feature set, we use a cosine distance metric to find accurate probe-gallery pairs. Experimental results on the iLIDS-VID and PRID 2011 datasets show that our method is slightly better than state-of-the-art methods on one dataset and significantly better on the other.
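The pipeline the abstract describes (per-frame features from a CNN, a recurrent descriptor over the sequence, temporal pooling to a fixed-length vector, then cosine-distance matching of probe against gallery) can be sketched roughly as below. This is a minimal illustration with NumPy, not the authors' implementation: the feature dimension, the random stand-in for the VGG-16/GRU per-frame features, and the function names are all assumptions for demonstration.

```python
import numpy as np

def temporal_pool(frame_feats, mode="mean"):
    """Collapse a (T, D) array of per-frame features into one D-vector.

    The paper investigates two pooling variants; mean and max pooling
    are the standard choices and are used here as stand-ins.
    Works for any sequence length T, as the abstract requires.
    """
    if mode == "mean":
        return frame_feats.mean(axis=0)
    if mode == "max":
        return frame_feats.max(axis=0)
    raise ValueError(f"unknown pooling mode: {mode}")

def cosine_distance(a, b):
    # 1 - cosine similarity: 0 for identical directions, up to 2 for opposite.
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical per-frame features standing in for CNN+GRU outputs.
rng = np.random.default_rng(0)
D = 128
probe_seq = rng.normal(size=(16, D))          # 16-frame probe sequence
gallery_seqs = [rng.normal(size=(t, D)) for t in (8, 24, 12)]  # variable lengths

probe = temporal_pool(probe_seq)
gallery = [temporal_pool(s) for s in gallery_seqs]

# Rank gallery identities by cosine distance to the probe.
dists = [cosine_distance(probe, g) for g in gallery]
best_match = int(np.argmin(dists))
```

Note that because pooling is over the time axis, sequences of different lengths all map to the same D-dimensional representation, which is what makes a single distance metric applicable across the whole gallery.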