MAED '12 | Pub Date: 2012-11-02 | DOI: 10.1145/2390832.2390838
A. Loos
{"title":"Identification of great apes using gabor features and locality preserving projections","authors":"A. Loos","doi":"10.1145/2390832.2390838","DOIUrl":"https://doi.org/10.1145/2390832.2390838","url":null,"abstract":"In the ongoing biodiversity crisis many species, particularly primates like chimpanzees for instance are threatened and need to be protected. Often, autonomous monitoring techniques using remote camera devices are used to estimate the remaining population sizes. Unfortunately, the manual analysis of the resulting video material is very tedious and time consuming. To reduce the burden of time consuming routine work, researches have recently started to use computer vision algorithms to identify individuals. In this paper we present an approach for automatic face identification for primates, especially chimpanzees. We successfully combine Gabor features with Locality Preserving Projections (LPP). As classifier we use a new method called Sparse Representation Classification (SRC). In two experiments we show that our approach outperforms a recently published algorithm for face recognition of Great Apes. We also compare our algorithm to other state-of-the-art face recognition algorithms using three methods for feature-space transformation and two different classification approaches, namely SRC and an enhanced version called Robust Sparse Coding (RSC). Our approach not only outperforms the other algorithms for full-frontal faces but is also more invariant to pose changes. For our experiments we use two publicly available, real-world databases of captive and free-living chimpanzees from the zoo of Leipzig, Germany and the Tai National Park, Africa, respectively. Even though both datasets are very challenging due to difficult lighting conditions, non-cooperative subjects, various pose changes and even partial occlusion, the achieved recognition rates are very promising and therefore our approach has the potential to open up new ways in effective biodiversity conservation management.","PeriodicalId":173175,"journal":{"name":"MAED '12","volume":"296 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114846720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MAED '12 | Pub Date: 2012-11-02 | DOI: 10.1145/2390832.2390835
Nathan Graves, S. Newsam
{"title":"Visibility cameras: where and how to look","authors":"Nathan Graves, S. Newsam","doi":"10.1145/2390832.2390835","DOIUrl":"https://doi.org/10.1145/2390832.2390835","url":null,"abstract":"This paper investigates image processing and pattern recognition techniques to estimate light extinction based on the visual content of images from static cameras. We propose two predictive models that incorporate multiple scene regions into the estimation: regression trees and multivariate linear regression. Incorporating multiple regions is important since regions at different distances are effective for estimating light extinction under different visibility regimes. We evaluate our models using a sizable dataset of images and ground truth light extinction values from a visibility camera system in Phoenix, Arizona.","PeriodicalId":173175,"journal":{"name":"MAED '12","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130166877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}