Feature extraction using gaze of participants for classifying gender of pedestrians in images
Riku Matsumoto, Hiroki Yoshimura, Masashi Nishiyama, Y. Iwai
2017 IEEE International Conference on Image Processing (ICIP), published 2017-09-18
DOI: 10.1109/ICIP.2017.8296942
Citations: 4
Abstract
Human participants look at informative regions when attempting to identify the gender of a pedestrian in images. In our preliminary experiment, participants mainly looked at the head and chest regions when classifying gender in these images. Thus, we hypothesized that the regions in which participants' gaze locations were clustered would contain discriminative features for a gender classifier. In this paper, we discuss how to reveal and use gaze locations for the gender classification of pedestrian images. Our method acquires the distribution of gaze locations from various participants while they manually classify gender. We term this distribution a gaze map. To extract discriminative features, we assign large weights to regions of the gaze map where gaze locations cluster. Our experiments show that this gaze-based feature extraction method significantly improved the performance of gender classification when combined with either a deep learning or a metric learning classifier.
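The abstract does not give implementation details, but the core idea (pool fixations into a density map, then use that map to weight image regions before feature extraction) can be sketched as follows. This is a minimal illustration, not the authors' code: the function names, the Gaussian smoothing bandwidth, and the per-pixel multiplicative weighting are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_gaze_map(fixations, image_shape, sigma=10.0):
    """Accumulate fixation points pooled over participants into a
    smoothed density map (a "gaze map").

    fixations: iterable of (row, col) gaze locations in image coordinates.
    image_shape: (height, width) of the pedestrian image.
    sigma: smoothing bandwidth in pixels (an assumed value).
    """
    gaze_map = np.zeros(image_shape, dtype=np.float64)
    for r, c in fixations:
        if 0 <= r < image_shape[0] and 0 <= c < image_shape[1]:
            gaze_map[int(r), int(c)] += 1.0
    # Smooth the discrete fixation counts into a continuous density.
    gaze_map = gaussian_filter(gaze_map, sigma=sigma)
    # Normalize to [0, 1] so the map can serve as per-pixel weights.
    if gaze_map.max() > 0:
        gaze_map /= gaze_map.max()
    return gaze_map

def weight_image_by_gaze(image, gaze_map):
    """Assign large weights to regions where gaze locations cluster
    (e.g., head and chest) before feeding the image to a feature
    extractor / classifier. Multiplicative weighting is one plausible
    choice; the paper's exact scheme may differ."""
    if image.ndim == 3:  # color image: broadcast over channels
        return image * gaze_map[..., None]
    return image * gaze_map
```

Under this sketch, the weighted image would then be passed to whichever gender classifier is used (e.g., a CNN or a metric learning model), so that features from highly fixated regions dominate the representation.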