Low complexity encoder for feedback-channel-free distributed video coding using deep convolutional neural networks at the decoder

Pudi Raj Bhagath, J. Mukherjee, Sudipta Mukopadhayay

Proceedings of the Indian Conference on Computer Vision, Graphics & Image Processing (ICVGIP), December 2016, pp. 44:1-44:7. DOI: 10.1145/3009977.3009986
We propose a very low complexity encoder for feedback-channel-free distributed video coding (DVC) that relies on a deep convolutional neural network (CNN) at the decoder. Deep CNNs for super resolution upscale a low-resolution (LR) image, which carries 25% of the pixel information of the high-resolution (HR) image, by a factor of 2. Here, instead, we train the network on frames containing 50% noisy Wyner-Ziv (WZ) pixels so that it reconstructs the full original WZ frame; these noisy samples are produced by an iterative algorithm called DLRTex. At the encoder, we compute the local rank transform (LRT) of the WZ frames on alternate pixels rather than on all pixels, which reduces both bit rate and complexity. The resulting LRT values are merged, and their rank positions in the WZ frame are entropy coded with an MQ-coder. In addition, the average intensity of each block of the WZ frame is transmitted to assist motion estimation. At the decoder, side information (SI) is generated by motion estimation and compensation in the LRT domain, and DLRTex is run on the SI using the LRT to obtain the 50% noisy WZ pixels from which the full WZ frame is reconstructed. We compare our results with pixel-domain DVC approaches and show that the coding efficiency of our codec is better than that of pixel-domain distributed video coders based on low-density parity check and accumulate (LDPCA) or turbo codes. We also derive the complexity of our encoder in terms of the number of operations and show that it is much lower than that of LDPCA-based methods.
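To make the encoder-side quantities concrete, the following is a minimal Python sketch, not the authors' implementation, of a local rank transform evaluated on alternate pixels of a WZ frame together with per-block average intensities. The window size, block size, and the checkerboard choice of "alternate pixels" are assumptions made for illustration; the abstract does not fix them.

```python
# Illustrative sketch of the encoder-side quantities described in the abstract:
# a local rank transform (LRT) evaluated only on alternate pixels, plus per-block
# average intensities. Window size, block size, and the checkerboard sampling
# pattern are assumptions for this example.
import numpy as np

def local_rank_transform(frame: np.ndarray, win: int = 3) -> np.ndarray:
    """Rank of each pixel among its win x win neighbourhood (0 .. win*win - 1)."""
    pad = win // 2
    padded = np.pad(frame, pad, mode="edge")
    h, w = frame.shape
    lrt = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            window = padded[y:y + win, x:x + win]
            # Rank = number of neighbours strictly smaller than the centre pixel.
            lrt[y, x] = np.count_nonzero(window < frame[y, x])
    return lrt

def lrt_on_alternate_pixels(frame: np.ndarray, win: int = 3) -> np.ndarray:
    """Evaluate the LRT only on a checkerboard of alternate pixels (others set to -1)."""
    lrt = local_rank_transform(frame, win)
    mask = (np.indices(frame.shape).sum(axis=0) % 2) == 0  # checkerboard pattern
    return np.where(mask, lrt, -1)

def block_averages(frame: np.ndarray, block: int = 8) -> np.ndarray:
    """Average intensity of each non-overlapping block x block region."""
    h, w = frame.shape
    h_c, w_c = h - h % block, w - w % block
    cropped = frame[:h_c, :w_c].astype(np.float64)
    return cropped.reshape(h_c // block, block, w_c // block, block).mean(axis=(1, 3))

if __name__ == "__main__":
    wz = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
    ranks = lrt_on_alternate_pixels(wz)  # rank values to be entropy coded (e.g. by an MQ-coder)
    means = block_averages(wz)           # block averages transmitted to assist motion estimation
    print(ranks.shape, means.shape)
```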
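On the decoder side, the abstract describes a super-resolution-style deep CNN that maps a frame containing 50% noisy WZ pixels to the full WZ frame. The sketch below shows the kind of three-layer SRCNN-like network this could be; the layer count, kernel sizes, and channel widths are assumptions borrowed from standard super-resolution CNNs, not the network specified in the paper.

```python
# Illustrative SRCNN-style reconstruction network at the decoder: it takes a frame
# in which roughly 50% of the pixels are noisy WZ estimates (from DLRTex) and
# regresses the full WZ frame. The architecture and hyper-parameters are
# assumptions for this sketch, not the exact network used in the paper.
import torch
import torch.nn as nn

class WZReconstructionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),            # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, noisy_half_wz: torch.Tensor) -> torch.Tensor:
        return self.net(noisy_half_wz)

if __name__ == "__main__":
    model = WZReconstructionCNN()
    # Batch of single-channel frames holding the 50% noisy WZ pixels.
    x = torch.rand(4, 1, 64, 64)
    y = model(x)  # reconstructed full WZ frames
    # Training would minimise a reconstruction loss such as MSE against the true WZ frame.
    loss = nn.functional.mse_loss(y, torch.rand_like(y))
    print(y.shape, loss.item())
```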