{"title":"Multi-view video codec using compressive sensing for wireless video sensor networks","authors":"V. Angayarkanni, S. Radha, V. Akshaya","doi":"10.1504/IJMC.2019.10016171","DOIUrl":null,"url":null,"abstract":"In monitoring applications, different views are needed to be captured by multi-view video sensor nodes for understanding the scene clearly. These multi-view sequences have large volume of redundant data which affects the storage, transmission, bandwidth and lifetime of wireless video sensor nodes. A low complex coding technique is required for addressing these issues and for processing multi-view sensor data. Hence, in this paper, a framework on CS-based multi-view video codec using frame approximation technique (CMVC-FAT) is proposed. Quantisation with entropy coding based on frame skipping is adopted for achieving efficient video compression. For better prediction of skipped frame at receiver, a frame approximation technique (FAT) algorithm is proposed. Simulation results reveal that CMVC-FAT framework outperforms the existing method with achievement of 86.5% reduction in time and bits. Also, it shows 83.75% reduction in transmission energy compared with raw frame.","PeriodicalId":433337,"journal":{"name":"Int. J. Mob. Commun.","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Int. J. Mob. Commun.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1504/IJMC.2019.10016171","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
In monitoring applications, multiple views must be captured by multi-view video sensor nodes to understand the scene clearly. These multi-view sequences contain a large volume of redundant data, which affects the storage, transmission, bandwidth and lifetime of wireless video sensor nodes. A low-complexity coding technique is required to address these issues and to process multi-view sensor data. Hence, in this paper, a compressive sensing (CS)-based multi-view video codec framework using a frame approximation technique (CMVC-FAT) is proposed. Quantisation with entropy coding based on frame skipping is adopted to achieve efficient video compression. To better predict skipped frames at the receiver, a frame approximation technique (FAT) algorithm is proposed. Simulation results reveal that the CMVC-FAT framework outperforms the existing method, achieving an 86.5% reduction in time and bits. It also achieves an 83.75% reduction in transmission energy compared with transmitting raw frames.
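The abstract describes a pipeline of per-frame CS measurements, frame skipping with quantisation and entropy coding, and receiver-side approximation of skipped frames. The sketch below is a minimal, hypothetical illustration of that pipeline only: it assumes a random Gaussian sensing matrix, uniform quantisation, and simple linear interpolation in the measurement domain as a stand-in for the paper's FAT algorithm; entropy coding and CS reconstruction are omitted. All function and variable names here are illustrative, not taken from the paper.

```python
# Hypothetical sketch of a CS-based codec with frame skipping, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def measurement_matrix(m, n):
    # Random Gaussian sensing matrix, a common choice in compressive sensing.
    return rng.standard_normal((m, n)) / np.sqrt(m)

def encode_frame(frame, phi, step=0.05):
    # Project the vectorised frame onto the sensing matrix, then apply
    # uniform quantisation (entropy coding of the indices is omitted).
    y = phi @ frame.ravel()
    return np.round(y / step).astype(np.int32)

def approximate_skipped(prev_meas, next_meas):
    # Receiver-side stand-in for frame approximation: linear interpolation
    # between the neighbouring transmitted measurement vectors.
    return (prev_meas.astype(np.float64) + next_meas.astype(np.float64)) / 2.0

# Toy sequence from one view: 4 frames of 32x32 pixels.
frames = [rng.random((32, 32)) for _ in range(4)]
n = frames[0].size
phi = measurement_matrix(m=n // 4, n=n)   # 4:1 measurement reduction

# Transmit measurements for even-indexed frames only; skip the odd ones.
sent = {i: encode_frame(f, phi) for i, f in enumerate(frames) if i % 2 == 0}

# Approximate the skipped frame 1 from its transmitted neighbours.
approx_frame1 = approximate_skipped(sent[0], sent[2])
print(approx_frame1.shape)  # same length as a transmitted measurement vector
```

In this toy setup the skipped frames cost no transmission at all, which is where the bit and energy savings reported in the abstract would come from; the quality of the codec then hinges on how well the receiver-side approximation recovers the skipped frames.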