Adaptive eigen-backgrounds for object detection
Jonathan D. Rymel, John-Paul Renno, D. Greenhill, J. Orwell, Graeme A. Jones
2004 International Conference on Image Processing (ICIP '04), published 2004-10-24
DOI: 10.1109/ICIP.2004.1421436
Citations: 54
Abstract
Most tracking algorithms detect moving objects by comparing incoming images against a reference frame. Crucially, this reference image must adapt continuously to the current lighting conditions if objects are to be accurately differentiated. In this work, a novel appearance-modelling method based on the eigen-background approach is presented. The image can be efficiently represented by a set of appearance models with few significant dimensions. Rather than accumulating the necessarily enormous training set required to generate the eigen-model offline, the described technique builds and adapts the eigen-model online, evolving both the parameters and the number of significant dimensions. For each incoming image, a reference frame may be efficiently hypothesized from a subsample of the incoming pixels. A comparative evaluation that measures segmentation accuracy using large amounts of manually derived ground truth is presented.
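The abstract does not give the authors' exact online update rule, but the general eigen-background idea it describes can be illustrated with a minimal sketch: maintain a mean image and a small set of eigen-images over recent background frames, estimate the eigen-coefficients of each new frame from a random subsample of its pixels, reconstruct a full reference frame, and flag pixels that deviate from it. The class name `EigenBackground` and the parameters (`n_components`, `buffer_size`, `sample_frac`, `threshold`) are illustrative assumptions, not values from the paper, and grayscale frames are assumed.

```python
import numpy as np


class EigenBackground:
    """Sketch of an eigen-background model (not the paper's exact update rule).

    A buffer of recent frames is used to (re)fit a mean image and the top-k
    eigen-images. For each new frame, the eigen-coefficients are estimated by
    least squares from a random subsample of pixels, a full reference frame is
    reconstructed, and foreground pixels are those deviating beyond a threshold.
    """

    def __init__(self, n_components=5, buffer_size=50, sample_frac=0.05, threshold=30.0):
        self.k = n_components
        self.buffer_size = buffer_size
        self.sample_frac = sample_frac
        self.threshold = threshold
        self.frames = []   # recent frames, flattened to vectors
        self.mean = None   # mean image, shape (n_pixels,)
        self.basis = None  # eigen-images, shape (k, n_pixels)

    def _refit(self):
        X = np.stack(self.frames)                        # (n_frames, n_pixels)
        self.mean = X.mean(axis=0)
        # SVD of the centred data yields the eigen-images of the covariance.
        _, _, vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.basis = vt[: self.k]

    def update(self, frame):
        """Add a 2-D frame to the buffer and refit the eigen-model."""
        self.frames.append(frame.astype(np.float64).ravel())
        if len(self.frames) > self.buffer_size:
            self.frames.pop(0)
        if len(self.frames) >= self.k:
            self._refit()

    def reference_frame(self, frame, rng=None):
        """Hypothesize a reference frame from a random subsample of pixels."""
        rng = rng or np.random.default_rng()
        x = frame.astype(np.float64).ravel()
        n = x.size
        idx = rng.choice(n, size=max(self.k, int(self.sample_frac * n)), replace=False)
        # Least-squares fit of the eigen-coefficients on the sampled pixels only.
        A = self.basis[:, idx].T                         # (n_samples, k)
        b = x[idx] - self.mean[idx]
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        return (self.mean + coeffs @ self.basis).reshape(frame.shape)

    def foreground_mask(self, frame):
        """Binary mask of pixels that differ from the hypothesized reference."""
        ref = self.reference_frame(frame)
        return np.abs(frame.astype(np.float64) - ref) > self.threshold
```

In this sketch the model is refit from a sliding buffer for simplicity; the paper instead adapts the eigen-model incrementally, evolving both the parameters and the number of retained dimensions, which avoids storing a large training set.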