2019 Data Compression Conference (DCC): Latest Publications

M to 1 Joint Source-Channel Coding of Gaussian Sources via Dichotomy of the Input Space Based on Deep Learning
2019 Data Compression Conference (DCC) Pub Date: 2019-03-26 DOI: 10.1109/DCC.2019.00057
Yashas Malur Saidutta, A. Abdi, F. Fekri
Abstract: In this paper, we propose a deep neural network framework for joint source-channel coding of an m-dimensional i.i.d. Gaussian source transmitted over a single additive white Gaussian noise channel with no delay. The framework employs two neural encoder-decoder pairs that learn to split the input signal space into two disjoint support sets. The encoder and decoder are jointly trained to minimize the mean square error subject to a power constraint on the signal transmitted across the channel. The proposed method matches the state of the art for m = 3, 4 and is easily extendable to higher dimensions. The trained model, we discovered, assigns almost equal probability to the two disjoint support sets. The results show that the scheme performs within 1.9 dB of the Shannon-optimal limit over a wide range of channel signal-to-noise ratios (CSNR), from 0 dB to 30 dB, for various values of m. The method is also robust: employing a model trained at a CSNR offset by +/-5 dB is only 0.6 dB worse than a model trained specifically for that CSNR.
Citations: 7
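A minimal PyTorch sketch of the autoencoder-style joint source-channel coding idea described above. It uses a single encoder-decoder pair, omitting the paper's two-branch dichotomy of the input space; layer sizes, learning rate, and noise level are hypothetical.

```python
# Sketch of a deep JSCC autoencoder for an m-dimensional Gaussian source sent
# over a 1-D AWGN channel. Single encoder-decoder pair only; the paper's
# two-branch dichotomy is omitted. Hyperparameters are illustrative.
import torch
import torch.nn as nn

class JSCCAutoencoder(nn.Module):
    def __init__(self, m: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(m, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.decoder = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, m))

    def forward(self, x: torch.Tensor, noise_std: float) -> torch.Tensor:
        z = self.encoder(x)
        # Enforce the average power constraint batch-wise before transmission.
        z = z / z.pow(2).mean().sqrt().clamp(min=1e-8)
        y = z + noise_std * torch.randn_like(z)  # AWGN channel
        return self.decoder(y)

model = JSCCAutoencoder(m=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(1000):
    x = torch.randn(256, 3)  # i.i.d. Gaussian source, m = 3
    loss = nn.functional.mse_loss(model(x, noise_std=0.1), x)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The power constraint is handled by normalizing each batch of channel symbols to unit average power before the noise is added, which mirrors the constrained MSE objective in the abstract.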
Super-Ray Based Low Rank Approximation for Light Field Compression
2019 Data Compression Conference (DCC) Pub Date: 2019-03-01 DOI: 10.1109/DCC.2019.00045
E. Dib, Mikael Le Pendu, Xiaoran Jiang, C. Guillemot
Abstract: We describe a local low rank approximation method based on super-rays for light field compression. Super-rays can be seen as sets of super-pixels that are coherent across all light field views. A super-ray based disparity estimation method using a low rank prior is proposed in order to align all the super-pixels forming each super-ray. A dedicated super-ray construction method is described that constrains the super-pixels forming a given super-ray to all be of the same shape and size while dealing with occlusions. This constraint is needed so that the super-rays can serve as a support for angular dimensionality reduction based on low rank matrix approximation. A low rank matrix approximation is then computed on the disparity-compensated super-rays using a singular value decomposition (SVD). A coding algorithm is then described for the different components of the resulting low rank approximation. Experimental results show performance gains compared with two reference light field coding schemes (an HEVC-based scheme and JPEG Pleno VM 1.1).
Citations: 9
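The low rank step itself reduces to truncating an SVD. A minimal numpy sketch, assuming each super-ray is stacked as a matrix with one column per disparity-compensated view; the dimensions below are illustrative.

```python
# Rank-k truncation of an SVD: the angular dimensionality-reduction step.
# Assumed matrix layout: one column per disparity-compensated view of a
# super-ray (pixel count and view count below are illustrative).
import numpy as np

def low_rank_approx(M: np.ndarray, k: int) -> np.ndarray:
    """Best rank-k approximation of M in the Frobenius norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

M = np.random.rand(1024, 25)      # 1024 pixels, 5x5 views
M4 = low_rank_approx(M, k=4)      # keep 4 singular components
print(np.linalg.norm(M - M4) / np.linalg.norm(M))  # relative residual
```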
Lossless Compression of Light Fields Using Multi-reference Minimum Rate Predictors
2019 Data Compression Conference (DCC) Pub Date: 2019-03-01 DOI: 10.1109/DCC.2019.00049
João M. Santos, P. Assunção, L. Cruz, Luis M. N. Tavora, R. Fonseca-Pinto, S. Faria
Abstract: This paper presents a method to improve the lossless compression efficiency of light field encoding based on Minimum Rate Predictors (MRP). The proposed method relies on the use of multiple references, either micro-images or sub-aperture images, to provide a richer set of correlated pixels for prediction. The results show better compression ratios than conventional versions of MRP for both representation formats (micro-image and sub-aperture image arrays), achieving gains ranging from 16.9% to 21.2%. Furthermore, the proposed method consistently outperforms the state-of-the-art lossless encoders HEVC and JPEG-LS.
Citations: 10
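To illustrate the multi-reference idea on synthetic data (not the authors' design: MRP optimizes its predictors for minimum code length rather than squared error), the sketch below predicts each pixel as a linear combination of reference samples drawn from several sources, with weights fit by least squares.

```python
# Synthetic illustration of multi-reference prediction: each pixel is predicted
# from p reference samples (causal neighbours plus co-located pixels in another
# micro-image or sub-aperture view), with least-squares weights. Only the
# multi-reference idea is shown; MRP proper is rate-optimized.
import numpy as np

def fit_predictor(contexts: np.ndarray, targets: np.ndarray) -> np.ndarray:
    w, *_ = np.linalg.lstsq(contexts, targets, rcond=None)
    return w

rng = np.random.default_rng(0)
ctx = rng.integers(0, 256, size=(5000, 6)).astype(float)  # 6 reference pixels
true_w = np.array([0.3, 0.2, 0.1, 0.2, 0.1, 0.1])
tgt = ctx @ true_w + rng.normal(0, 2.0, size=5000)

w = fit_predictor(ctx, tgt)
residual = tgt - ctx @ w  # the residual is what gets entropy-coded
print(residual.std())
```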
Vectorizing Fast Compression
2019 Data Compression Conference (DCC) Pub Date: 2019-03-01 DOI: 10.1109/DCC.2019.00110
Roy Oursler, Gregory Tucker
Abstract: Vectorization is a useful tool for CPU performance improvement, but it is difficult to apply due to the data access patterns that vector instructions require. General data compression is one application where vectorization has been difficult to apply. We demonstrate how vectorization can be employed in a high speed Deflate implementation, in a way that could be extended to other common compression standards.
Citations: 0
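As a toy illustration of the data-parallel access pattern involved (a real vectorized Deflate, like the paper's, would use SIMD intrinsics; numpy arrays merely stand in for vector registers), the sketch below scores several LZ77 match candidates at once instead of comparing bytes one at a time.

```python
# Toy data-parallel match search: score several LZ77 match candidates at once
# with array operations instead of a scalar byte loop.
import numpy as np

def match_lengths(data, pos, candidates, max_len=32):
    idx = np.arange(max_len)
    cand = data[np.asarray(candidates)[:, None] + idx]  # candidate windows
    target = data[pos + idx]                            # window to match
    neq = cand != target
    # Common-prefix length per candidate: first mismatch, or the full window.
    return np.where(neq.any(axis=1), neq.argmax(axis=1), max_len)

data = np.frombuffer(b"abracadabra abracadabra abracadabr" * 4, dtype=np.uint8)
print(match_lengths(data, pos=34, candidates=[0, 1, 2]))  # best match: offset 0
```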
Dv2v: A Dynamic Variable-to-Variable Compressor
2019 Data Compression Conference (DCC) Pub Date: 2019-03-01 DOI: 10.1109/DCC.2019.00016
N. Brisaboa, A. Fariña, Adrián Gómez-Brandón, G. Navarro, Tirso V. Rodeiro
Abstract: We present D-v2v, a new dynamic (one-pass) variable-to-variable compressor. Variable-to-variable compression aims at using a modeler that gathers variable-length input symbols and a variable-length statistical coder that assigns shorter codewords to the more frequent symbols. In D-v2v, we process the input text word-wise to gather variable-length symbols that can be either terminals (new words) or non-terminals (subsequences of words seen before in the input text). These input symbols are kept in a vocabulary sorted by frequency, so they can be easily encoded with dense codes. D-v2v permits real-time transmission of data, i.e. compression/transmission can begin as soon as data become available. Our experiments show that D-v2v is able to surpass the compression ratios of v2vDC, the state-of-the-art semi-static variable-to-variable compressor, and to almost reach those of p7zip. It also achieves competitive performance at both compression and decompression.
Citations: 0
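Dense codes map a symbol's frequency rank to a variable-length byte codeword, shorter for more frequent symbols, which is what makes the frequency-sorted vocabulary convenient. A sketch in the End-Tagged Dense Code style follows; this is a textbook reconstruction of dense coding, not the authors' exact coder.

```python
# End-Tagged Dense Code style assignment: frequency rank -> byte codeword,
# shorter codewords for more frequent symbols. Textbook reconstruction only.
def etdc_encode(rank: int) -> bytes:
    """Map a 0-based frequency rank to a variable-length byte codeword."""
    out = [0x80 | (rank % 128)]   # last byte carries the end tag (top bit set)
    rank = rank // 128 - 1
    while rank >= 0:
        out.append(rank % 128)    # continuation bytes: top bit clear
        rank = rank // 128 - 1
    return bytes(reversed(out))

assert len(etdc_encode(0)) == 1 and len(etdc_encode(127)) == 1
assert len(etdc_encode(128)) == 2 and len(etdc_encode(16511)) == 2
assert len(etdc_encode(16512)) == 3
```

Because codewords depend only on rank, a dynamic compressor like D-v2v only has to keep the vocabulary sorted by frequency as it updates; no code table needs to be transmitted.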
Spike Coding: Towards Lossy Compression for Dynamic Vision Sensor
2019 Data Compression Conference (DCC) Pub Date: 2019-03-01 DOI: 10.1109/DCC.2019.00084
Yihua Fu, Jianing Li, Siwei Dong, Yonghong Tian, Tiejun Huang
Abstract: The dynamic vision sensor (DVS), a bio-inspired camera, has shown great advantages in high-dynamic-range (HDR) and high-temporal-resolution (microsecond) vision tasks. However, how to lossily compress asynchronous spikes to meet the demands of large-scale transmission and storage, while maintaining analysis performance, remains an open problem. Towards this end, this paper proposes a lossy spike coding framework for DVS.
Citations: 3
A CU Split Early Termination Algorithm Based KNN for 360-Degree Video
2019 Data Compression Conference (DCC) Pub Date: 2019-03-01 DOI: 10.1109/DCC.2019.00106
Zhi Liu, Peiran Song, Mengmeng Zhang
Abstract: 360-degree video has developed rapidly and become widely used in recent years. 360Lib is the JVET test platform (software) for panoramic video, hosted in a Subversion code repository; in this work, 360Lib is integrated into the HM reference software. At present, coding 360-degree video requires converting the video to a projection format such as ERP, EAP, or CMP, encoding it with HEVC, and then inverse-transforming it back to 360-degree video. Since 360Lib only transforms the projection format, coding still relies on HEVC and does not exploit the characteristics of 360-degree video. Based on these characteristics, this paper proposes a CU split early termination algorithm. Experimental results show that the proposed fast algorithm provides an average time reduction of 32% compared to the reference HM-16.16 + 360Lib-4.0, with only a 1.3% BD-rate increase.
Citations: 4
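The title indicates the early termination decision is made by a KNN classifier. A hedged sketch of how such a classifier could gate the CU split search, using scikit-learn; the features and labels below are hypothetical, since the abstract does not spell out the feature set.

```python
# KNN gate for the CU split search. Features and toy labels are hypothetical;
# a real encoder would train on statistics from previously coded CUs.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Per-CU features, e.g. variance, gradient energy, depth, frame latitude.
X_train = rng.random((500, 4))
y_train = (X_train[:, 0] < 0.3).astype(int)  # 1 = "will not split" (toy label)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

def try_split(cu_features):
    if knn.predict([cu_features])[0] == 1:
        return "terminate: encode CU at the current depth"
    return "continue: run the RD search on child CUs"

print(try_split([0.1, 0.5, 0.2, 0.8]))
```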
FastIntra360: A Fast Intra-Prediction Technique for 360-Degrees Video Coding
2019 Data Compression Conference (DCC) Pub Date: 2019-03-01 DOI: 10.1109/DCC.2019.00117
Iago Storch, B. Zatt, L. Agostini, L. Cruz, D. Palomino
Abstract: 360-degree videos represent a whole sphere and enable users to feel as if they were inside the scene. These videos require more data than conventional videos to be represented, so they too must be compressed to be handled properly. However, current video coding standards only process rectangular videos, so 360 videos must be mapped to a flat representation before encoding. Several projections exist for this purpose; the most used is currently the equirectangular projection (ERP), which maps each parallel of the sphere to a row of the rectangle, yielding a faithful representation of the equatorial area and a stretched representation of the polar regions. This stretching in the polar regions tends to affect the behavior of intra-frame prediction, which is used to exploit the spatial redundancies within each frame. This paper therefore proposes FastIntra360 to accelerate the encoding of 360 videos. FastIntra360 is implemented in the HEVC video coding standard [1], a recently established standard with high computational demand. During the development of FastIntra360, a set of videos was encoded and the behavior of intra prediction across the frame was extracted. A statistical analysis of these data showed that when encoding the polar regions of the frame, prediction modes that exploit horizontal directions are selected more frequently than the remaining modes, whereas in the center of the frame all prediction modes occur at similar rates. FastIntra360 exploits this behavior by reducing the number of prediction modes evaluated in different regions of the frame. FastIntra360 is developed in two variants, one with three bands and one with five, where each band is a horizontal stripe of the frame and the statistical analysis is performed over each stripe individually. Both variants were evaluated against the HEVC Test Model version 16.16 (HM-16.16) in terms of time reduction and coding efficiency (BD-BR, the bitrate increase of the proposed technique). Experimental results show that both variants perform well, reaching up to 16.5% complexity reduction with negligible BD-BR; that is, they deliver considerable complexity reduction with no harm to video quality.
Citations: 8
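A sketch of the banded mode-restriction idea: near the ERP poles, evaluate only planar, DC, and near-horizontal angular modes; near the equator, keep the full set. Mode numbering follows HEVC (0 planar, 1 DC, 2-34 angular, 10 purely horizontal); the band boundaries and the retained mode range are illustrative, not the paper's tuned values.

```python
# Band-dependent intra mode candidate lists for an ERP frame. HEVC numbering:
# 0 planar, 1 DC, 2-34 angular, 10 purely horizontal. Band boundaries and the
# near-horizontal range kept in polar bands are illustrative choices.
def candidate_modes(row: int, frame_height: int, bands: int = 3) -> list[int]:
    band = min(bands - 1, row * bands // frame_height)
    polar = band in (0, bands - 1)          # top and bottom stripes
    if polar:
        return [0, 1] + list(range(6, 15))  # planar, DC, horizontal-ish angles
    return [0, 1] + list(range(2, 35))      # full mode set near the equator

print(len(candidate_modes(row=10, frame_height=960)))   # polar: 11 modes
print(len(candidate_modes(row=480, frame_height=960)))  # equator: 35 modes
```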
Compact Representations of Dynamic Video Background Using Motion Sprites
2019 Data Compression Conference (DCC) Pub Date: 2019-03-01 DOI: 10.1109/DCC.2019.00052
Solomon Garber, Aaditya (Adi) Prakash, Ryan Marcus, Antonella DiLillo, J. Storer
Abstract: We present a method that extends the idea of sprite coding to videos containing a wide variety of naturally occurring background motion, which could potentially be incorporated into existing and future video standards. The existing MPEG-4 Part 2 standard, now almost 20 years old, provides the ability to store objects in separate layers and includes a sprite mode where the background layer is generated by cropping a still image based on frame-wide global motion parameters; videos containing more general background motion, however, cannot be effectively encoded with sprite mode. We propose a perceptually motivated lossy compression algorithm in which oscillatory background motion can be compactly encoded. Our model achieves a low bit rate by referencing a time-invariant representation of the optical flow with only a few added parameters per frame. At very low bit rates, our technique can provide dynamic backgrounds at a visual quality that may not be achievable by traditional methods, which are known to produce unacceptable blocking and ringing artifacts.
Citations: 1
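A toy reconstruction of the decoding side of this idea: one sprite image and one time-invariant flow field, replayed with a couple of scalar parameters per frame. The sinusoidal oscillation model below is our assumption for illustration; the paper defines its own compact representation.

```python
# Decoder-side toy: regenerate an oscillating background from one sprite image,
# one fixed flow field F = (Fy, Fx), and two scalars per frame. The sinusoidal
# motion model is an assumption made for illustration.
import numpy as np

def render_frame(sprite, F, t, amp, freq):
    h, w = sprite.shape
    ys, xs = np.mgrid[0:h, 0:w]
    phase = amp * np.sin(2 * np.pi * freq * t)  # per-frame scalar motion
    sy = np.clip(np.rint(ys + phase * F[0]).astype(int), 0, h - 1)
    sx = np.clip(np.rint(xs + phase * F[1]).astype(int), 0, w - 1)
    return sprite[sy, sx]  # warp the sprite along the fixed flow field

sprite = np.random.rand(64, 64)
F = np.stack([np.ones((64, 64)), np.zeros((64, 64))])  # unit vertical flow
frame3 = render_frame(sprite, F, t=3, amp=2.0, freq=0.1)
```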
The Bit Allocation Method Based on Inter-View Dependency for Multi-View Texture Video Coding
2019 Data Compression Conference (DCC) Pub Date: 2019-03-01 DOI: 10.1109/DCC.2019.00051
Tiansong Li, Li Yu, Shengju Yu, Yamei Chen
Abstract: Multi-view texture video coding is very important. We propose a bit allocation method based on the view layer and a bitrate decision method for P-frames of the dependent view (DV). First, considering that distortion in the base view (BV) is directly transmitted to the DV through the inter-view skip mode, the RD model of the DV is improved based on inter-view dependency. A precise power model is derived from our joint RD model to represent the target-bitrate relationship between the BV and the DV. Then, since a P frame in the DV (P-DV) is mainly predicted from the corresponding I frame in the BV (I-BV) by inter-view prediction, a constant proportional relationship between the P-DV and the I-BV is identified. Based on this finding, a novel linear model is built to assign the target bitrates of the P-DV. Extensive experimental results show that the proposed scheme provides better RD performance than state-of-the-art algorithms.
Citations: 2
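The abstract states that the BV and DV target bitrates are related by a power model. A minimal sketch of fitting and applying such a model, R_DV = a * R_BV^b, by least squares in log space; the sample rate points below are placeholders, not the paper's data.

```python
# Fit the power model R_DV = a * R_BV**b by least squares in log space. The
# sample rate points are placeholders, not the paper's fitted data.
import numpy as np

def fit_power_model(r_bv, r_dv):
    b, log_a = np.polyfit(np.log(r_bv), np.log(r_dv), 1)
    return np.exp(log_a), b

r_bv = np.array([500.0, 1000.0, 2000.0, 4000.0])  # base-view rates (kbps)
r_dv = np.array([210.0, 380.0, 690.0, 1250.0])    # dependent-view rates (kbps)
a, b = fit_power_model(r_bv, r_dv)
print(f"R_DV = {a:.2f} * R_BV^{b:.2f}")  # allocate DV bits from the BV rate
```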