{"title":"Video Modeling by Spatio-Temporal Resampling and Bayesian Fusion","authors":"Yunfei Zheng, Xin Li","doi":"10.1109/ICIP.2007.4379607","DOIUrl":null,"url":null,"abstract":"In this paper, we propose an empirical Bayesian approach toward video modeling and demonstrate its application in multiframe image restoration. Based on our previous work on spatio-temporall adaptive localized learning (STALL), we introduce a new concept of spatio-temporal resampling to facilitate the task of video modeling. Resampling produces a redundant representation of video signals with distributed spatio-temporal characteristics. When combined with STALL model, we show how to probabilistically combine the linear regression results of resampled video signals under a Bayesian framework. Such empirical Bayesian approach opens the door to develop a whole new class of video processing algorithms without explicit motion estimation or segmentation. The potential of our distributed video model is justified by considering its application into two multiframe image restoration tasks: repair damaged blocks and remove impulse noise.","PeriodicalId":131177,"journal":{"name":"2007 IEEE International Conference on Image Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2007 IEEE International Conference on Image Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIP.2007.4379607","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
In this paper, we propose an empirical Bayesian approach toward video modeling and demonstrate its application in multiframe image restoration. Based on our previous work on spatio-temporal adaptive localized learning (STALL), we introduce a new concept of spatio-temporal resampling to facilitate the task of video modeling. Resampling produces a redundant representation of video signals with distributed spatio-temporal characteristics. When combined with the STALL model, we show how to probabilistically combine the linear regression results of resampled video signals under a Bayesian framework. Such an empirical Bayesian approach opens the door to developing a whole new class of video processing algorithms without explicit motion estimation or segmentation. The potential of our distributed video model is justified by considering its application to two multiframe image restoration tasks: repairing damaged blocks and removing impulse noise.
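To make the fusion idea concrete, the sketch below illustrates one plausible reading of the abstract: several spatio-temporal resamplings each yield a locally fitted linear regressor, and their predictions are combined with weights derived from each fit's evidence. This is a minimal illustration under stated assumptions; the window construction, the least-squares fit, and the Gaussian-evidence weighting (`1/sigma2`) are illustrative choices, not the paper's exact STALL formulation.

```python
import numpy as np

# Hypothetical sketch: fuse predictions from several locally fitted linear
# regressors (one per spatio-temporal resampling) by weighting each with the
# evidence of its own fit, loosely in the spirit of empirical Bayesian fusion.
# The window shapes and weighting scheme are assumptions for illustration only.

def fit_local_regressor(neighbors, targets):
    """Least-squares fit targets ~ neighbors @ w for one resampled window."""
    w, _, _, _ = np.linalg.lstsq(neighbors, targets, rcond=None)
    # Estimate the residual noise variance of this local fit (empirical-Bayes style).
    resid = targets - neighbors @ w
    sigma2 = np.mean(resid ** 2) + 1e-12
    return w, sigma2

def bayesian_fusion(resampled_windows, query_features):
    """Combine per-window predictions, weighted by the quality of each local fit."""
    preds, weights = [], []
    for neighbors, targets in resampled_windows:
        w, sigma2 = fit_local_regressor(neighbors, targets)
        preds.append(float(query_features @ w))
        # Regressors that explain their own window with less error get more weight.
        weights.append(1.0 / sigma2)
    weights = np.asarray(weights)
    weights /= weights.sum()
    return float(np.dot(weights, preds))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three synthetic "resamplings": (neighbor features, observed pixel values).
    true_w = np.array([0.4, 0.3, 0.2, 0.1])
    windows = []
    for _ in range(3):
        X = rng.normal(size=(32, 4))
        y = X @ true_w + 0.01 * rng.normal(size=32)
        windows.append((X, y))
    query = rng.normal(size=4)
    print("fused estimate:", bayesian_fusion(windows, query))
    print("ground truth :", float(query @ true_w))
```

In this toy setup each "window" stands in for one spatio-temporal resampling of the video neighborhood around a damaged or noisy pixel; in a real restoration pipeline the features would come from shifted and temporally offset copies of the local video data rather than random draws.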