{"title":"Distributed compressed video sensing based on convolutional sparse coding","authors":"Tomohito Mizokami, Y. Kuroki","doi":"10.1109/ISPACS48206.2019.8986358","DOIUrl":null,"url":null,"abstract":"This paper discusses a Distributed Compressed Video Sensing (DCVS) framework using Convolutional Sparse Coding (CSC). CSC is a technique to represent a signal as convolutions of filters and corresponding coefficients. Conventional block based DCVS methods divide a given video sequence into key and non-key frames. The key frames are decoded independently like still images, and the non-key frames use Side Information (SI) generated with previously decoded key frames. The sparse dictionaries of the non-key frames are designed with the SIs. However, in CSC based methods, a non-key frame can use the dictionary of the nearest key frame in the temporal domain since the dictionary filters, namely features, are robuster against motions than those of block based methods.","PeriodicalId":6765,"journal":{"name":"2019 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)","volume":"20 1","pages":"1-2"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISPACS48206.2019.8986358","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
This paper discusses a Distributed Compressed Video Sensing (DCVS) framework using Convolutional Sparse Coding (CSC). CSC is a technique that represents a signal as a sum of convolutions of filters with corresponding coefficient maps. Conventional block-based DCVS methods divide a given video sequence into key and non-key frames. The key frames are decoded independently, like still images, while the non-key frames use Side Information (SI) generated from previously decoded key frames, and their sparse dictionaries are designed from these SIs. In CSC-based methods, however, a non-key frame can reuse the dictionary of the temporally nearest key frame, since the dictionary filters, i.e., the learned features, are more robust to motion than those of block-based methods.
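The CSC synthesis model referred to in the abstract can be sketched as follows: a frame is approximated as the sum of convolutions of small dictionary filters with sparse coefficient maps. This is only a minimal illustration of the representation; the filter count, filter size, and frame size below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def csc_synthesize(filters, coef_maps):
    """Reconstruct a frame as sum_k (d_k * z_k), the CSC synthesis model."""
    frame = np.zeros_like(coef_maps[0])
    for d_k, z_k in zip(filters, coef_maps):
        # Convolve each sparse coefficient map with its dictionary filter
        frame += convolve2d(z_k, d_k, mode="same", boundary="symm")
    return frame

# Illustrative example with random data (not the authors' settings):
# 8 filters of size 7x7 and sparse coefficient maps for a 64x64 frame.
rng = np.random.default_rng(0)
filters = [rng.standard_normal((7, 7)) for _ in range(8)]
coef_maps = [rng.standard_normal((64, 64)) * (rng.random((64, 64)) < 0.05)
             for _ in range(8)]
frame = csc_synthesize(filters, coef_maps)
print(frame.shape)  # (64, 64)
```

Because the filters act as translation-invariant features shared across the whole frame, a coefficient map can shift with object motion while the dictionary itself stays valid, which is the property the abstract invokes when letting non-key frames reuse the nearest key frame's dictionary.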