{"title":"Neuromorphic Bayesian Surprise for Far-Range Event Detection","authors":"Randolph Voorhies, Lior Elazary, L. Itti","doi":"10.1109/AVSS.2012.49","DOIUrl":null,"url":null,"abstract":"In this paper we address the problem of detecting small, rare events in very high resolution, far-field video streams. Rather than learning color distributions for individual pixels, our method utilizes a uniquely structured network of Bayesian learning units which compute a combined measure of \"surprise\" across multiple spatial and temporal scales on various visual features. The features used, as well as the learning rules for these units are derived from recent work in computational neuroscience. We test the system extensively on both real and virtual data, and show that it out-performs a standard foreground/background segmentation approach as well as a standard visual saliency algorithm.","PeriodicalId":275325,"journal":{"name":"2012 IEEE Ninth International Conference on Advanced Video and Signal-Based Surveillance","volume":"376 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 IEEE Ninth International Conference on Advanced Video and Signal-Based Surveillance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AVSS.2012.49","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
In this paper we address the problem of detecting small, rare events in very high resolution, far-field video streams. Rather than learning color distributions for individual pixels, our method utilizes a uniquely structured network of Bayesian learning units that compute a combined measure of "surprise" across multiple spatial and temporal scales on various visual features. The features used, as well as the learning rules for these units, are derived from recent work in computational neuroscience. We test the system extensively on both real and virtual data, and show that it outperforms both a standard foreground/background segmentation approach and a standard visual saliency algorithm.
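The "surprise" measure referenced above follows the Bayesian surprise framework of Itti & Baldi (the computational-neuroscience work the abstract cites), in which each learning unit maintains a Gamma prior over a Poisson-distributed feature response and scores each new observation by the KL divergence between the updated posterior and the prior. The sketch below illustrates one such unit under those assumptions; the names (`SurpriseUnit`, `kl_gamma`) and hyperparameters (forgetting factor `zeta`, initial shape/rate `a0`, `b0`) are illustrative choices, not values taken from this paper.

```python
import numpy as np
from scipy.special import gammaln, digamma

def kl_gamma(a_post, b_post, a_prior, b_prior):
    """KL( Gamma(a_post, b_post) || Gamma(a_prior, b_prior) ),
    shape/rate parameterization."""
    return ((a_post - a_prior) * digamma(a_post)
            - gammaln(a_post) + gammaln(a_prior)
            + a_prior * (np.log(b_post) - np.log(b_prior))
            + a_post * (b_prior - b_post) / b_post)

class SurpriseUnit:
    """One Gamma-Poisson learning unit with a forgetting factor zeta,
    in the style of the Itti & Baldi surprise model. Hyperparameter
    values here are illustrative assumptions."""

    def __init__(self, a0=1.0, b0=1.0, zeta=0.7):
        self.a, self.b, self.zeta = a0, b0, zeta

    def update(self, sample):
        """Absorb one (non-negative) feature sample and return the
        surprise elicited by it."""
        # Decay the prior toward uninformativeness, then absorb the sample
        # as a Poisson count observation (conjugate Gamma update).
        a_new = self.zeta * self.a + sample
        b_new = self.zeta * self.b + 1.0
        # Surprise = divergence between posterior and prior beliefs.
        s = kl_gamma(a_new, b_new, self.a, self.b)
        self.a, self.b = a_new, b_new
        return s

if __name__ == "__main__":
    unit = SurpriseUnit()
    # A stationary feature drives surprise toward zero as the prior
    # sharpens; an abrupt change produces a spike.
    for x in [2.0, 2.0, 2.0, 2.0, 10.0, 2.0]:
        print(f"sample={x:5.1f}  surprise={unit.update(x):.4f}")
```

In the paper's setting, many such units would be tiled over image locations, visual features, and spatial and temporal scales, with their outputs combined into the final surprise map; this sketch shows only the per-unit computation.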